Insulation and material characteristics of smoothskin neoprene vs silicone-coated neoprene in surf wetsuits
Megan Paterson¹ · Bruce Moore¹ · Sean C. Newcomer¹ · Jeff A. Nessler¹✉
Accepted: 11 April 2023
© International Sports Engineering Association 2023
Abstract
Silicone applied to the exterior of jersey-lined neoprene may increase heat absorption and water repulsion without the loss of strength and durability observed in smoothskin neoprene. The purpose of this study was to compare skin temperature under silicone-coated jersey-lined neoprene and smoothskin neoprene during recreational surfing. A secondary purpose was to compare the density, tensile strength, and tangent modulus of these materials. Thirty male surfers wore a 2 mm thick wetsuit in which the chest and back panels on one side were constructed of smoothskin neoprene and those on the other side of silicone-coated neoprene. Separate surf protocols were carried out in laboratory \((n = 10)\) and field \((n = 20)\) settings while skin temperature was collected bilaterally at the upper chest, upper back, abdomen, and lower back. In the field, skin temperatures under the smoothskin and silicone-coated neoprene were not significantly different at the upper chest, upper back, and lower back. In the laboratory, there were no significant differences in skin temperatures under the two materials at the upper chest and lower back. However, in both studies the skin temperatures were significantly higher under smoothskin neoprene at the abdomen \((p < 0.01)\). In addition, the skin temperature at the upper back in the laboratory study was significantly higher underneath the silicone-coated neoprene \((p < 0.01)\). Silicone-coated neoprene exhibited similar tensile strength but greater tangent modulus compared to smoothskin neoprene. These findings suggest that silicone-coated neoprene and smoothskin neoprene have similar thermal characteristics across most body sites but differ in tensile stiffness.
Keywords Surf · Wetsuit · Smoothskin · Thermoregulation · Silicone · Neoprene
1 Introduction
Competitive and recreational surfing have increased in popularity in recent years [1]. Surfing occurs in diverse environments, including water that is far below body temperature [2–5]. Many surfers wear wetsuits to reduce convective heat loss, improve comfort, and prolong the amount of time that they can be submerged in the water without developing hypothermia [3–8]. The market for surfing wetsuits is growing rapidly; in North America alone, the total wetsuit market was projected to reach $300 million in 2022, with surfing wetsuits expected to account for the largest segment at 45% (Grand View Research, Wetsuit Market Size, Share, Industry Report, 2022).
Despite their popularity and widespread use, recent data suggest that there is potential for innovation and improvement in surfing wetsuits [9]. For example, skin temperature, a physiological variable often used in apparel research to quantify heat transfer and insulation [10–12], has been reported to decrease significantly within minutes of a typical surf session while wearing a standard 2 mm thick wetsuit [3, 4, 7, 8]. Heat loss does not occur homogeneously across the body because regions that interact more with cold water lose heat faster [3, 4, 7, 8]. Further, regions of the body that are more exposed to air have been shown to be warmer under wetsuit materials with outer surfaces that more effectively repel water and absorb radiant heat from the sun [5].
The most common wetsuit material comprises a layer of foamed chloroprene (neoprene) sandwiched between layers of nylon fabric or “jersey” [5–7]. In this design or “packaging,” the neoprene provides insulation and the nylon jersey improves
the strength and durability of the material [5]. Smoothskin is another type of neoprene packaging that is made by applying heat and pressure to the outer layer of the chloroprene foam (known as embossing), leaving the exterior with a smooth and shiny surface [5]. Smoothskin neoprene is typically combined with a single layer of nylon jersey covering the interior surface of the wetsuit, but the exterior is composed solely of embossed chloroprene foam. In a recent study, chest and upper back skin temperatures under smoothskin neoprene were 1.5 °C warmer than skin temperatures under jersey-lined neoprene of the same thickness [5]. These findings suggest that the water repellent and radiant heat absorption properties of the smoothskin material may lead to higher skin temperature [5]. However, smoothskin neoprene appears to be less durable than jersey-laminated neoprene and is prone to tearing [13]. Therefore, it would be advantageous to develop a material that combines the desirable thermal properties of smoothskin neoprene with the durability of nylon jersey laminated to neoprene.
Silicone-coated neoprene is a novel design that may exhibit both properties. Liquid silicone can be applied to the nylon jersey neoprene packaging through a silk-screening process, and the resulting silicone layer creates a smooth and shiny exterior surface that can reproduce the heat absorption and water repellent properties of smoothskin [14]. Because both layers of nylon jersey are preserved, the composite material also retains the strength and durability they provide. Further, the intrinsic properties of silicone rubber may enhance the insulation and durability of the wetsuit.
Currently, there are no data comparing skin temperatures under silicone-coated neoprene and smoothskin neoprene. There are also no data comparing the strength and stiffness of different wetsuit materials, which can affect the biomechanics of a surfer's movements while wearing a wetsuit [7, 15]. Therefore, the purpose of this study was twofold. The first purpose was to determine whether skin temperatures differed under silicone-coated neoprene vs smoothskin neoprene, and the second was to determine whether there are differences in material strength, stiffness, and density between standard jersey-lined neoprene, smoothskin neoprene, and silicone-coated jersey-lined neoprene. It was hypothesized that there would be no differences in skin temperature in the upper torso between silicone-coated neoprene and smoothskin neoprene while surfing in field and laboratory-based experiments. Further, it was hypothesized that silicone-coated neoprene would demonstrate the greatest material strength, stiffness, and density when compared to smoothskin and jersey-lined neoprene.
2 Methods
2.1 Participants
Thirty male recreational surfers from San Diego County between the ages of eighteen and forty-five years old were included in either a field or laboratory experiment (Table 1). This study was limited to male participants due to differences in skin temperature profiles between sexes [3, 4, 8]. All participants had at least one year of surfing experience and reported no known injuries. Volunteers who met the inclusion criteria provided their written informed consent before participation and then provided their physical and demographic characteristics as well as information relating to their surfing experience. These data were self-reported and were not verified by the investigators. All procedures were approved by the Institutional Review Board for Human Subjects’ Protection at California State University of San Marcos (IRB # 1302181).
2.2 Experimental protocol
Upon completion of the informed consent and questionnaire, participants \((n = 20)\) were instrumented with ten iButton DS1921L skin temperature thermistors (Maxim Integrated/Dallas Semiconductor Corp., USA), with an accuracy of ± 0.5 °C per manufacturer specifications [16]. The thermistors were attached to the participant using a waterproof 3M Tegaderm transparent dressing (Nexcare Tegaderm, USA). The ten thermistors were attached to the participants bilaterally at the upper chest (2 cm inferior to the clavicle), abdomen (5 cm below the last palpable rib), upper back (2 cm superior to the medial aspect of the spine of the scapula), lateral border of the scapula, and lower back (5 cm from
posterior superior iliac spine) (Fig. 1). These sites were selected for consistency with prior research studies [2, 5]. Two additional thermal sensors were placed over the lateral border of the scapula under the standard jersey-lined neoprene packaging for reference to compare with silicone-coated neoprene and smoothskin neoprene (Fig. 2). Skin temperature data were acquired at one-minute intervals for the entire surfing session.
Following instrumentation, participants were fitted into a 2 mm prototype wetsuit with smoothskin neoprene on one half and silicone-coated neoprene on the other (Fig. 2). The experimental materials covered only the torso and shoulders (anterior and posterior); the arms and legs of the wetsuit were constructed of standard jersey-lined neoprene (2 mm thick). The physical appearance of the experimental materials was very similar, and neither the researcher nor the participant was informed which material was being tested on a specific side of the wetsuit. Wetsuit sizing was based on manufacturer guidelines for participants' height and weight (Hurley Int., Costa Mesa, CA).
The prototype wetsuits were developed by Hurley International (Costa Mesa, CA), comprised proprietary neoprene and silicone, and were not based on any commercially available model. Six wetsuits (three sizes: S, M, L; two versions: left silicone/right smoothskin and right silicone/left smoothskin) were constructed to allow the assignment of materials to each side to be randomized, thereby eliminating the potential impact that southern sun exposure may have on radiant heat absorption. The smoothskin neoprene occupied more surface area on the wetsuit due to a 2.5 cm margin of jersey-lined neoprene between the seams and the silicone-coated neoprene (Fig. 2). After participants had donned their wetsuit, the iButton thermal sensors were palpated from the exterior of the wetsuit to ensure accurate placement under the materials being tested.
For the field study, participants ($n=20$) engaged in surfing for at least an hour, which began once they entered the water and ended when the participant exited the water. The final length of their surf session was left to their discretion. Environmental conditions including ambient air temperature, water temperature, relative humidity, sun exposure and wind speed were recorded from the National Oceanic and Atmospheric Administration’s buoys located offshore during each surf session (surfline.com).
A separate laboratory experiment was performed to control more precisely for water temperature and for differences in body heat that might occur due to differences in physical activity. After providing informed consent and completing the surfing and activity questionnaire, a separate group of participants ($n=10$) was instrumented with ten iButton thermal sensors at the anatomical locations described above and fitted into the appropriately sized experimental wetsuit (the same wetsuits used for the field study). Participants then completed a predetermined protocol in an outdoor Endless Pool Elite Model (Commercial Elite Endless Pools®, Aston, PA) consisting of a custom-sized pool (2.75 m wide, 4.9 m long) and a motorized turbine that can generate a constant flow of water against a paddling surfer. The flow of water can be started and stopped to allow for various activities within the simulated surf session. Water temperature was maintained at a constant 16 °C for all participants. This 60-min protocol consisted of resting, duck diving, and paddling against a 1.4 m/s current. Water velocity was measured and verified at one-minute intervals using a Flowatch flow meter (JDC Electronics, Yverdon-les-Bains, Switzerland). This protocol was designed to simulate a typical surf session and has been used previously [5, 7, 8]. The predetermined water flow rate was based on the paddling speed of surfers observed in the field [17]. The protocol was repeated continuously for one hour, alternating the rest breaks between sitting and lying. All procedures took place outdoors during the day to simulate a surf session in the field. The laboratory experiment took place about a month after the field experiment, and the laboratory is located 10 miles (16.1 km) inland from the coast.
### Table 1 Participant characteristics

| | Field study | Laboratory study |
|--------------------------|-------------|------------------|
| Number of participants | $n=20$ | $n=10$ |
| Age (years) | $25 \pm 3$ | $24 \pm 2$ |
| Height (cm) | $181 \pm 6$ | $180 \pm 3$ |
| Mass (kg) | $76 \pm 8$ | $75 \pm 9$ |
| Years surfing | $12 \pm 6$ | $10 \pm 6$ |
| Self-reported competency | $7 \pm 1$ | $7 \pm 2$ |
| Board length (cm) | $190 \pm 37$| $187 \pm 36$ |

All values are reported as mean $\pm$ standard deviation

At the completion of the field or simulated surf session, participants were asked if one side of the wetsuit covering their torso was warmer and if one side was more comfortable. Only three responses were possible for each question: right side, left side, or no difference. No statistical tests were performed on perception data.
2.3 Material analysis
Following both field and laboratory experiments, eight material specimens were taken from each of three regions of one wetsuit that had been rinsed with freshwater and dried (24 specimens total). The three regions included the upper back on the left- and right-hand sides (smoothskin and silicone-coated neoprene) as well as the upper scapular region of the right-hand side, where the wetsuit comprised standard jersey-lined neoprene. All specimens were taken from the same wetsuit to minimize variation in material properties due to differences in use and exposure to environmental conditions. A specialized die cutter and arbor press were used to ensure uniform size and shape of each specimen (ASTM D412-16, Dog-Bone Type C specimen) [18]. Care was taken to ensure that the nylon fibers in the jersey material ran in the same direction for all samples (parallel to the long axis of the sample) [19]. Dimensions of each specimen were verified using digital calipers (Mitutoyo model 500-196-30), and the mass of each specimen was determined by digital scale (model RD303–300 g/1mgRP, Ruishan).
All specimens were dry and at room temperature when tested. The tensile strength and stiffness of each specimen were evaluated using a material testing device (Instron 34SC-2, Norwood, MA) with a 2 kN load cell and specialized pneumatic grippers (Instron 2712–042) with serrated surfaces to maintain an appropriate grip pressure throughout each test. Air pressure to the pneumatic grips was maintained at 6.2 bar (90 psi), which results in a clamping force of 500 N according to manufacturer specifications (Instron 2712 Series Operator's Manual). Specimens were placed in the grips by hand, first by closing the specimen into the upper grip (via toggle switch) and allowing it to hang freely. Alignment was confirmed by visual inspection. If the material was not aligned, it was released from the upper grip, repositioned, and then locked again. Once it was aligned properly, the lower grip was closed using the toggle switch. While this process ensured a relatively consistent level of tension in the sample before testing, each sample was visually inspected for slack and/or tension before testing began. All samples were tested in tension, beginning at resting length and stretched at a rate of 500 mm/min (strain rate 4.35 min\(^{-1}\)) until the sample failed (complete rupture), per ASTM guidelines [18]. Displacement and force were recorded at 50 Hz. Tensile strength was calculated as the maximum tension recorded at any point during the test divided by the cross-sectional area of the mid-section of the dog-bone shaped specimen (engineering stress–strain curve). Because the stress–strain relationship was nonlinear, the tangent modulus was calculated at 10%, 30%, 50%, and 70% strain for each material (before the yield point).
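The stress and tangent modulus calculations described above can be sketched as follows. This is a minimal illustration rather than the authors' analysis code; the function names, gauge length, and cross-sectional area are hypothetical inputs for demonstration.

```python
import numpy as np

def engineering_stress_strain(force_N, displacement_mm, gauge_length_mm, area_mm2):
    """Convert raw force-displacement data (e.g., sampled at 50 Hz) to
    engineering strain (dimensionless) and engineering stress (MPa)."""
    strain = np.asarray(displacement_mm) / gauge_length_mm
    stress = np.asarray(force_N) / area_mm2   # N/mm^2 is numerically equal to MPa
    return stress, strain

def tangent_modulus(stress, strain, at_strain, window=0.02):
    """Tangent modulus (MPa): local slope of the stress-strain curve,
    estimated by a linear fit over +/- `window` strain around the target."""
    mask = (strain >= at_strain - window) & (strain <= at_strain + window)
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope
```

For example, evaluating `tangent_modulus` at strains of 0.1, 0.3, 0.5, and 0.7 for each specimen would reproduce the four tangent modulus values reported per material in Table 4.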
2.4 Statistical analysis
Water temperature, surf duration, wind speed, and air temperature in the field and laboratory experiments were compared using independent t-tests. Skin temperatures were analyzed using procedures described previously for experiments with a similar design [2, 5, 7, 20]. First, skin temperature time-series data were downloaded from the individual iButton thermistors using the OneWireViewer application and copied into an Excel sheet. Data from each thermistor were then condensed into 12 epochs by computing the mean skin temperature over consecutive five-minute increments from minute 1 to minute 60. Field sessions that were longer than 60 min were truncated at the 60-min mark. Data were then imported into RStudio (version 1.4.1106, Boston, MA), and a two-way repeated measures ANOVA (2 materials × 12 epochs of time) was used to evaluate skin temperature across time at each of the four thermistor locations. For locations with a significant main effect of material, separate paired t-tests were performed at each epoch, comparing skin temperatures under smoothskin and silicone-coated neoprene. The Benjamini–Hochberg procedure was used to control the false discovery rate [21]. A separate two-way repeated measures ANOVA was used to compare the wetsuit materials with standard jersey material at the upper back region only. Statistical significance was defined a priori as \( p < 0.05 \). Effect sizes were estimated using partial eta squared for ANOVA and Cohen's \( d \) for pairwise comparisons. Data are presented as mean ± standard deviation (SD) unless otherwise indicated.
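The epoching and false-discovery-rate steps above can be sketched in a few lines. This is an illustrative reimplementation under stated assumptions (the authors used Excel and RStudio for the actual analysis); the function names are hypothetical.

```python
import numpy as np

def epoch_means(temps_1min, epoch_len=5, n_epochs=12):
    """Condense a one-minute skin-temperature series (>= 60 samples)
    into n_epochs consecutive means of epoch_len minutes each."""
    t = np.asarray(temps_1min)[: epoch_len * n_epochs]  # truncate at 60 min
    return t.reshape(n_epochs, epoch_len).mean(axis=1)

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean array
    marking which hypotheses are rejected at false discovery rate alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m   # i/m * alpha for ranked p-values
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                       # reject the k smallest p-values
    return reject
```

Applied per location, `benjamini_hochberg` would take the 12 per-epoch paired t-test p-values and flag which epochs survive correction.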
3 Results
3.1 Environmental conditions
The mean duration of the surf session in the field experiment was 67.9 ± 12.1 min. There were no significant differences in water temperatures recorded in the field (15.3 ± 1.3 °C, range: 13.9–17.78 °C) and in the laboratory (16.0 ± 0.0 °C). There were also no significant differences in wind speed recorded in the field (5.4 ± 3.9 mph, range: 1.2 to 12.6 mph) and in the laboratory (6.2 ± 2.6 mph, range: 3.4 to 9.9 mph); however, mean air temperatures were significantly higher in the laboratory (23.7 ± 4.9 °C, range: 16.1 to 30.6 °C) than in the field study (15.7 ± 3.4 °C, range: 11.7 to 25.6 °C).
Relative humidity was also significantly higher in the field (63.2 ± 22.4%, range: 26 to 97%) than in the laboratory (35.4 ± 23.2%, range: 9 to 79%) ($p = 0.004$). Lastly, environmental conditions were considered sunny in 90% of the trials performed in the laboratory compared to only 55% of trials during the field studies.
### 3.2 Thermoregulatory characteristics
For the field study, a two-way repeated measures ANOVA revealed a significant main effect of time at the upper chest ($p < 0.001$, $\eta_p^2 = 0.393$), upper back ($p = 0.012$, $\eta_p^2 = 0.172$), abdomen ($p < 0.001$, $\eta_p^2 = 0.712$), and lower back region ($p = 0.001$, $\eta_p^2 = 0.391$) (Fig. 3). There was also a main effect of wetsuit material at the abdomen ($p < 0.001$, $\eta_p^2 = 0.508$). The interaction effect of wetsuit material by time was significant at the abdomen ($p = 0.016$, $\eta_p^2 = 0.166$), upper back ($p = 0.009$, $\eta_p^2 = 0.147$), and lower back ($p = 0.045$, $\eta_p^2 = 0.115$). Post hoc analysis for the abdomen revealed a significantly higher abdomen skin temperature under the smoothskin material compared to the silicone-coated material for all time epochs between minutes 5 and 60 (all $p < 0.001$, mean Cohen's $d = 0.57$, Fig. 3). For the laboratory study, a two-way repeated measures ANOVA revealed a significant main effect of time at the upper chest ($p < 0.001$, $\eta_p^2 = 0.756$), upper back ($p = 0.022$, $\eta_p^2 = 0.359$), and abdomen ($p < 0.001$, $\eta_p^2 = 0.909$) (Fig. 4). There was also a significant main effect of wetsuit material at the abdomen ($p = 0.008$, $\eta_p^2 = 0.558$), lower back ($p = 0.017$, $\eta_p^2 = 0.484$), and upper back ($p = 0.003$, $\eta_p^2 = 0.640$). Finally, there was a significant interaction effect of wetsuit material by time at the upper ($p = 0.017$, $\eta_p^2 = 0.388$) and lower back regions ($p = 0.004$, $\eta_p^2 = 0.424$). Post hoc analysis revealed significantly higher skin temperature
under the smoothskin material when compared to the silicone material at the abdomen for epochs between minute 30 and minute 60 (mean $p = 0.01$, Cohen’s $d = 0.88$). It also revealed significantly higher skin temperature under the silicone-coated material when compared to the smoothskin material at the upper back for all epochs between minutes 5 and 60 (mean $p = 0.002$, Cohen’s $d = 0.61$, Fig. 4).
### 3.3 Thermoregulatory characteristics: jersey vs. materials
Silicone-coated neoprene and smoothskin neoprene were both compared to the standard jersey material at the lateral upper back region (Fig. 5). In the field experiment, a two-way repeated measures ANOVA revealed a significant main effect of time, a significant main effect of material, and a significant interaction effect of wetsuit material by time for comparisons of both silicone (all $p < 0.001$, $\eta_p^2 = 0.372–0.887$) and smoothskin neoprene (all $p < 0.001$, $\eta_p^2 = 0.344–0.843$) versus standard jersey neoprene. Post hoc analysis revealed significantly higher skin temperatures under both the silicone-coated material (all $p < 0.001$, mean Cohen’s $d = 1.83$) and smoothskin material (all $p < 0.001$, mean Cohen’s $d = 1.83$) when compared to the standard jersey material at all time points.
In the laboratory experiment, a two-way repeated measures ANOVA revealed a significant main effect of material and a significant interaction effect of wetsuit material by time for comparisons of both silicone-coated neoprene ($p < 0.001$, $\eta_p^2 = 0.795–0.878$) and smoothskin neoprene ($p < 0.001$, $\eta_p^2 = 0.692–0.891$) versus standard jersey neoprene. Post hoc analysis revealed significantly higher skin temperatures under both the silicone-coated material (mean $p < 0.001$, Cohen’s $d = 1.72$) and smoothskin material for minutes 5 through 60 (mean $p < 0.001$, Cohen’s $d = 2.41$) when compared to the standard jersey material.
### 3.4 Perception
When data from both experiments were pooled, a total of 20 out of 30 (66.7%) participants reported equal comfort between wetsuit sides (Table 2). A total of 21 out of 30
(70%) participants reported feeling equally warm between wetsuit sides (Table 3).
### 3.5 Materials testing
The smoothskin neoprene was 0.5 mm thicker than the silicone-coated neoprene and 0.4 mm thicker than the standard jersey-lined neoprene (Table 4) due to differences in the fabrication process. The smoothskin neoprene also exhibited the greatest density of the three types of wetsuit material. All three types of neoprene exhibited similar tensile strength but different behavior at failure. The smoothskin neoprene failed suddenly at $223.8 \pm 8.6\%$ strain, whereas the jersey-lined and silicone-coated neoprene failed in two stages: first the neoprene material ruptured at $115 \pm 16.7\%$ strain (both jersey-lined and silicone-coated), followed by failure of the nylon jersey material at $179.2 \pm 33.3\%$ (jersey-lined) and $213.9 \pm 22.0\%$ (silicone-coated) strain. Tangent modulus results indicated that smoothskin exhibited the lowest stiffness of the three materials up to 70% strain, suggesting that it may provide the least resistance to movement. The silicone-coated neoprene package exhibited the greatest stiffness at lower strains (10%, 30%), but jersey-lined neoprene exhibited the greatest stiffness at 50% and 70% strain (Table 4, Fig. 6).
### 4 Discussion
The purpose of this study was to determine if silicone-coated neoprene provides similar insulation to smoothskin neoprene, and to determine whether there are differences in density, tensile strength, and tangent modulus between wetsuit materials. There were several novel findings. First, skin temperatures under silicone-coated neoprene were significantly higher than those under standard jersey neoprene in the upper back. Second, skin temperatures under silicone-coated neoprene were similar to those under smoothskin neoprene in the upper chest and lower back. Third, skin temperatures under silicone-coated neoprene were significantly lower than those under smoothskin neoprene in the abdomen. Fourth, skin temperatures under silicone-coated neoprene were significantly higher than those under smoothskin neoprene in the upper back during laboratory studies. Fifth, the majority of participants did not perceive differences in comfort or warmth between the wetsuit materials. Finally, when the material properties of these different neoprene packages were compared, the silicone-coated neoprene exhibited greater stiffness when compared to smoothskin neoprene, but comparable stiffness when compared to standard jersey-lined neoprene. Taken together, these findings suggest that silicone-coated neoprene is a potential alternative to smoothskin neoprene for wetsuit design.
**Fig. 6** Tensile stress vs strain curves for 3 different wetsuit materials. Jersey: standard neoprene lined with jersey on both sides. Silicone: standard neoprene lined with jersey on both sides and a silicone coating applied to the outer surface. Smoothskin: standard neoprene heated and embossed on the outer surface and nylon jersey on the interior surface. Each curve represents the average of eight uniform samples. All samples were tested until failure, but data are only presented here up to 0.75 (or 75%) strain, an approximation of the range of strain that a wetsuit might realistically experience in the field

### Table 4 Material properties of three different types of wetsuit neoprene packages

| | Jersey-lined | Smoothskin | Silicone |
|--------------------------|--------------|------------|----------|
| Sample thickness [mm] | 2.5 ± 0.1 | 2.9 ± 0.1 | 2.4 ± 0.1 |
| Density [kg/m³] | 211 ± 7 | 285 ± 12 | 272 ± 12 |
| Tensile strength at failure [MPa] | 2.19 ± 0.19 | 2.13 ± 0.12 | 2.05 ± 0.14 |
| Tensile strain at failure [%] | 179.2 ± 33.3 | 223.8 ± 8.6 | 213.9 ± 22.0 |
| Tangent modulus at 10% strain [MPa] | 0.44 ± 0.08 | 0.44 ± 0.11 | 0.69 ± 0.15 |
| Tangent modulus at 30% strain [MPa] | 0.76 ± 0.18 | 0.33 ± 0.10 | 1.09 ± 0.38 |
| Tangent modulus at 50% strain [MPa] | 1.77 ± 0.52 | 0.30 ± 0.07 | 1.43 ± 0.48 |
| Tangent modulus at 70% strain [MPa] | 1.94 ± 0.36 | 0.35 ± 0.14 | 1.77 ± 0.68 |

Values reported are the mean ± SD of 8 specimens tested for each material

The smoothskin neoprene packaging utilized here was on average 0.5 mm thicker than the silicone-coated neoprene and 0.4 mm thicker than the jersey-lined neoprene packaging (Table 4), and this may have impacted the insulating behavior of each material [6, 20]. However, skin temperatures under silicone-coated neoprene were either not different from or warmer than skin temperatures under smoothskin neoprene for three of the four locations compared here. This suggests that a thinner packaging of silicone-coated neoprene can achieve a similar thermoregulatory effect to that of smoothskin in multiple anatomical locations. A thinner, more efficient wetsuit package is desirable because it may also have a beneficial effect on movement biomechanics through reduced mass and material stiffness [15].
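To illustrate why a 0.5 mm difference in panel thickness matters for insulation, a simple steady-state conduction estimate (Fourier's law) can be applied to the two panel thicknesses reported in Table 4. The thermal conductivity and temperature values below are assumed for illustration only; they were not measured in this study.

```python
# Fourier's law for steady-state conduction through a flat panel:
#   q = k * (T_skin - T_water) / L
def conductive_flux_W_per_m2(k_W_mK, skin_C, water_C, thickness_mm):
    """Heat flux (W/m^2) conducted through a flat panel of given thickness."""
    return k_W_mK * (skin_C - water_C) / (thickness_mm / 1000.0)

K_FOAM = 0.05  # W/(m*K); an assumed, typical literature value for foamed neoprene
q_silicone = conductive_flux_W_per_m2(K_FOAM, 33.0, 16.0, 2.4)    # thinner panel
q_smoothskin = conductive_flux_W_per_m2(K_FOAM, 33.0, 16.0, 2.9)  # thicker panel
# The thicker smoothskin panel conducts roughly 17% less heat (2.4/2.9 ~ 0.83),
# all else being equal, so surface effects (water repulsion, radiant absorption)
# must offset this difference for the thinner silicone panel to perform similarly.
```

This back-of-the-envelope estimate supports the interpretation above: the silicone-coated panel achieves comparable skin temperatures despite a conductive disadvantage from its reduced thickness.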
Recently published data established that upper back skin temperature under smoothskin neoprene was on average ~1.5 °C higher than skin temperature under standard jersey neoprene due to its radiant heat absorption and water repellent properties [5]. The current results are consistent with these previous findings, since significantly higher skin temperatures were found under the smoothskin neoprene compared to the jersey material at the upper back (Fig. 5). Similarly, upper back skin temperatures under silicone-coated neoprene were also significantly higher than skin temperatures under standard jersey neoprene (Fig. 5).
4.1 Smoothskin vs silicone-coated neoprene—thermal results
At the upper back, skin temperatures under silicone-coated neoprene were significantly higher than those under smoothskin neoprene in the laboratory study but not in the field. Between 2 and 16% of total surfing time consists of miscellaneous activities in which the upper back of the surfer interacts directly with the water (i.e., swimming, diving under waves, and falling after riding waves) [22]. The difference in skin temperature found in the laboratory may therefore reflect the upper back interacting with the water less than it does in the ocean, since in the laboratory setting the upper back only contacted the water briefly when duck diving between rest and paddling phases. Skin temperature differences in the upper back between laboratory and field settings may also be attributed to differences in environmental conditions. Specifically, ambient air temperature, humidity, and sun exposure differed between research settings. These factors suggest that silicone-coated neoprene may absorb radiant heat from the sun more effectively than smoothskin neoprene at the upper back in conditions with warmer air, greater sun exposure, and lower relative humidity. However, without a direct measure of radiation-mediated heat absorption, these conclusions are speculative. Additional study is needed to determine the exact mechanisms behind differences in skin temperature under different neoprene packages.
In both the laboratory and field experiments, no significant differences were found between the skin temperatures under the smoothskin neoprene and silicone-coated neoprene at the lower back and chest. During paddling and resting phases, the lower back and chest experience greater exposure to the water and less exposure to radiant heat from the sun than the upper back. These findings support the previous assertion that relative exposure to water, sun and air likely contribute to the insulation capacity of smoothskin and silicone-coated neoprene [5]. Therefore, in conditions where there is intermittent interaction with water and sun exposure, smoothskin neoprene and silicone-coated neoprene may provide very similar insulation.
Skin temperatures under the smoothskin neoprene were significantly warmer than the skin temperatures under silicone-coated neoprene at the abdomen (Fig. 3). Similar findings at the abdomen were also observed in the laboratory (Fig. 4). It is interesting that the abdomen is the only location where skin temperatures under the smoothskin were greater than those under silicone-coated neoprene. These differences in skin temperatures may be influenced by lack of sun exposure and more consistent interaction with cold water during prone paddling and resting while surfing. Skin temperatures at the abdomen may also be influenced by differences in neoprene package thickness, since the silicone-coated neoprene was 0.5 mm thinner than the smoothskin neoprene.
The combined results of this study suggest that silicone-coated neoprene would have the greatest impact on thermoregulation when applied to the regions of the body that are most exposed to radiant heat from the sun. These regions include the upper chest, shoulders, and the upper and lower back. Conversely, this material may have little impact on thermoregulation when used at the abdomen of a wetsuit due to that region's greater exposure to cold water. In addition, recent research reported no differences in skin temperature at the abdomen when smoothskin was compared to standard jersey neoprene [5]. Therefore, since jersey material provides greater durability than smoothskin material, the combined results suggest that jersey-lined neoprene should be utilized at the abdominal region. By the same logic, silicone-coated neoprene should also not be placed on the lower extremities, due to their greater interaction with cold water and lack of sun exposure.
4.2 Mechanical results
These results suggest that adding silicone increases material density to a level that is greater than that of standard jersey-lined neoprene but less than or comparable to that of smoothskin (Table 4). Unexpectedly, the silicone-coated neoprene did not exhibit greater tensile strength than the smoothskin neoprene (Table 4). The stiffness of a material is an important factor in predicting its impact on an athlete's movement. It is notable that smoothskin neoprene exhibited substantially lower stiffness (i.e., tangent modulus) than silicone-coated neoprene at strains up to 70%
(Table 4, Fig. 6). However, the silicone-coated neoprene exhibited similar stiffness to that of the standard jersey-lined neoprene, suggesting that adding a silicone coating will have minimal impact on movement biomechanics when these two packages are compared. The impact of a silicone-coated neoprene on human movement should be evaluated in a future study.
The combination of multiple materials into a composite package has a complex effect on material strength and stiffness. The current data suggest that silicone coating contributes more to overall package stiffness at smaller amounts of stretch. This effect can be seen in Fig. 6 where both the jersey-lined and smoothskin neoprene curves exhibit reduced slopes at lower strain (e.g., 10–30%) than the silicone-coated neoprene. It is also interesting that the jersey-lined neoprene exhibits a clear increase in stiffness at around 50% strain, which may indicate that the nylon jersey fibers become more engaged at this level of stretch. It should also be noted that there were differences in behavior of the different materials at failure. Smoothskin neoprene tended to fail/rupture more abruptly, while the jersey-lined neoprene failed in two stages. The inconsistent behavior at failure among materials was a limitation to the current study. Future research should incorporate methods that limit this effect, potentially using other sample shapes such as those recommended for fabric. In addition, a closer examination of tearing would improve understanding of the durability of each material. While this analysis provides some initial insight into the mechanical properties of these materials, additional research is needed to provide a more detailed analysis of the behavior of different neoprene packages. Finally, additional research is needed to evaluate the environmental impact of silicone in the manufacture and disposal of wetsuits, including additional energy costs associated with production. Neoprene wetsuits are difficult to recycle and/or dispose of properly and a silicone coating may add to this challenge.
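Tangent modulus, the stiffness measure discussed above, is the local slope of the stress-strain curve at a given strain. A minimal sketch of estimating it from discrete tensile-test samples by finite differences (the strain-stress values below are illustrative, not the measured data):

```python
def tangent_modulus(strain, stress, at):
    """Slope of the stress-strain curve at strain `at`, taken over the
    segment whose upper sample first reaches that strain."""
    for i in range(1, len(strain)):
        if strain[i] >= at:
            return (stress[i] - stress[i - 1]) / (strain[i] - strain[i - 1])
    raise ValueError("requested strain outside tested range")

# illustrative data: strain (dimensionless) vs. stress (MPa)
strain = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
stress = [0.0, 0.05, 0.11, 0.18, 0.27, 0.38, 0.52, 0.70]
print(round(tangent_modulus(strain, stress, 0.5), 2))  # slope near 50% strain
```

A denser sampling of the curve would sharpen the estimate; the piecewise-linear slope here is the same quantity plotted against strain in Fig. 6.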
4.3 Conclusion
The findings from this study demonstrate for the first time that silicone-coated neoprene results in comparable skin temperatures when compared to smoothskin neoprene in regions that have intermittent interactions with water and exposure to radiant heat from the sun. In addition, silicone-coated neoprene exhibited comparable tensile strength and greater stiffness when compared to smoothskin neoprene. Additional study is needed to determine the impact of silicone-coated neoprene on human movement, particularly the paddling motion. The impact on thermoregulation observed here suggests that manufacturers should consider the application of a silicone coating for specific regions of their wetsuits to increase insulation capacity and durability.
Funding No funding was obtained for this study.
Data availability Data and material are available upon request.
Code availability Code available upon request.
Declarations
Conflicts of interest The authors state no conflicts of interest.
Ethical approval Experimental procedures were approved by the California State University-San Marcos Institutional Review Board (IRB#1302181).
Consent to participate All participants provided their informed consent prior to participation.
Consent for publication All authors approve submission of this manuscript.
References
1. Moran L, Webber J (2013) Surfing injuries requiring first aid in New Zealand, 2007–2012. Int J Aquatic Res Ed 7:192–203
2. Denny A, Moore B, Newcomer SC, Nessler JA (2022) Graphene-infused nylon fleece versus standard polyester fleece as a wetsuit liner: comparison of skin temperatures during a recreational surf session. Res J Text Appar (in press). https://doi.org/10.1108/RJTA-07-2022-0079
3. Corona LJ, Simmons GH, Nessler JA, Newcomer SC (2018) Characterization of regional skin temperatures in recreational surfers wearing a 2mm wetsuit. Ergonomics 61(5):729–735. https://doi.org/10.1080/00140139.2017.1387291
4. Warner ME, Nessler JA, Newcomer SC (2019) Skin temperatures in females wearing a 2mm wetsuit during surfing. Sports 7(6):145. https://doi.org/10.3390/sports7060145
5. Smith C, Saulino M, Luong K, Simmons M, Nessler JA, Newcomer SC (2020) Effect of wetsuit outer surface material on thermoregulation during surfing. Sports Eng 23(1):1–8. https://doi.org/10.1007/s12283-020-00329-8
6. Naebe M, Robins N, Wang X, Collins P (2013) Assessment of performance properties of wetsuits. J Sports Eng Technol 227:25–264
7. Wiles T, Simmons M, Gomez D, Schubert MM, Newcomer SC, Nessler JA (2022) Foamed neoprene versus thermoplastic elastomer as a wetsuit material: a comparison of skin temperature, biomechanical, and physiological variables. Sports Eng. https://doi.org/10.1007/s12283-022-00370-9
8. Skillern NP, Nessler JA, Schubert MM, Moore B, Newcomer SC (2021) Thermoregulatory sex differences among surfers during a simulated surf session. Sports Eng. https://doi.org/10.1007/s12283-021-00353-2
9. Romainin A, English S, Furness J, Kemp-Smith K, Newcomer SC, Nessler JA (2021) Surfing equipment and design: a scoping review. Sports Eng. https://doi.org/10.1007/s12283-021-00358-x
10. Jiao J, Li Y, Yao L, Chen Y, Guo Y, Wong SHS, Frency SFN, Hu J (2015) Effects of body-mapping-designed clothing on heat stress and running performance in a hot environment. Ergonomics 60(10):1435–1444
11. Schindelka B, Litzenberger S, Sabo A (2013) Body climate differences for men and women wearing functional underwear during sport at temperatures below zero degrees Celsius. Proc Eng 60:46–50
12. Domenico ID, Hoffmann SM, Collins PK (2022) The role of sports clothing in thermoregulation, comfort, and performance during exercise in the heat: a narrative review. Sports Med. https://doi.org/10.1186/s40798-022-00449-4
13. Cleanline Surf Co. The Wetsuit Guide [cited 30 January 2023]. Available from: www.cleanlinesurf.com/wetsuit-guide/
14. Balan G, Panikker UG (2006) Chemical, water, and thermal durability of silicone rubber/ethylene vinyl acetate blends. Mat Res Innov 10(3):305–320
15. Nessler JA, Silvas M, Carpenter S, Newcomer SC (2015) Wearing a wetsuit alters upper extremity motion during simulated surfboard paddling. PLoS ONE. https://doi.org/10.1371/journal.pone.0142325
16. Maxim Integrated Products Inc (2015) DS1922L technical specifications
17. Farley O, Harris NK, Kilding AE (2012) Anaerobic and aerobic fitness profiling of competitive surfers. J Strength Cond Res 26(8):2243–2248
18. ASTM International (2021) D412-16: Standard test methods for vulcanized rubber and thermoplastic elastomers—tension. https://doi.org/10.1520/D0412-16R21
19. Vlad D, Oleksik M (2019) Research regarding uniaxial tensile strength of nylon woven fabrics, coated and uncoated with silicone. In: 9th International Conference on Manufacturing Science and Education, Sibiu, Romania
20. Kellogg D, Wiles T, Nessler JA, Newcomer SC (2020) Impact of velcro cuff closure on forearm skin temperature in surfers wearing a 2mm and 3mm wetsuit. Int J Exerc Sci 13(6):1574–1582
21. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B 57(1):289–300
22. LaLanne CL, Cannady MS, Moon JF, Taylor DL, Nessler JA, Crocker GH, Newcomer SC (2017) Characterization of activity and cardiovascular responses during surfing in recreational male surfers between the ages of 18–75 years old. J Aging Phys Act 25(2):182–188. https://doi.org/10.1123/japa.2016-0041
**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
POLIFORM
INDEX
SIDEBOARDS
STOOLS
DINING CHAIRS
DINING TABLES
ARMCHAIRS / POUFS
COFFEE TABLES
SOFAS
BEDS
NIGHT COMPLEMENTS
RUGS
LIGHTS
WARDROBES / COMPOSITIONS
KITCHENS
APPLIANCES
SIDEBOARDS
1 x Code sideboards
By R&D Poliform
Dims: 916w x 501d x 1454 mmH
Inner Structure / Open storage unit / Inner Back: MAT CHAMPAGNE 77
Open storage unit back panel: 4051 BRONZATO MIRROR
Wooden Doors/ Drawers / Lateral sides: GOLD WALNUT
Glass shelves: 1000 CLEAR GLASS
Rear Back Panel: MAT CHAMPAGNE 77
Feet: CHAMPAGNE
LED lamps: 3000K - 220V
Finishing top: MAT ZECEVO / GOLD WALNUT
Drawer Bottom Covering t-cover: B-TECNOCOVER 06 NOCCIOLA
Retail Price: £8,474.00 inc. VAT
Sale Price: £4,205.67 inc. VAT
Symphony Sideboard
By Gallina
Dims: 2125 x 680 x 545 mmH
Foot Height: 300mm
Structure Fin: Black Elm
Legs: Matt Brown Nickel
Top: Matt Zecevo
Retail Price: £18,698.16 inc VAT
Sale Price: £7,324.53 inc. VAT
Code Sideboard
MA24DA
Dims: 1366 x 1454 x 501 mmH
STRUCTURE FIN: BLACK ELM
GLASS FRONT FINISH: 1000 CLEAR GLASS
LIVING STRUCTURE FIN.: MAT BRONZO 23
Top Finish: GLOSSY SAINT LAURENT
LIVING SHELVES FIN.OPEN ELEM.: MAT BRONZO 23
Retail Price: £11,851.32 inc. VAT
Sale Price: £5,954.01 inc. VAT
STOOLS
1 x Ventura Stool
By J.M. Massaud
Dims: 485 x 500 x 950mmH
Finish: S-Pelle Silk Cognac
Structure: Black Elm
Retail Price: £2,882.88 inc. VAT
Sale Price: £1,219.66 inc. VAT
ST 3557
1 x Ventura Stools
by Jean-Marie Massaud
Dims: 485 x 470 x 650/850 mmH
Upholstered in Oliva Cat Y- Pelle Nabuk
Legs in Black Elm
Retail Price: £2,831.40 inc. VAT
Sale Price: £1,422.48 inc. VAT
ST 3406
DINING CHAIRS
5 x Sophie Dining Chair
By Emmanuel Gallina
Dims: 570 x 600 x 480/780mmH
Upholstered in Cat S-Pelle Silk 10 Terracotta
Legs in Black Elm
Retail Price: £2,368.80 EACH inc. VAT
Sale Price: £989.94 EACH inc. VAT
ST 3086
1 x Ipanema Chair
by Jean-Marie Massaud
Dims: 810 x 900 x 815 mmH
Upholstery in Cat. D Limoges 02 Visone
Legs in Spessart Oak
Retail Price: £1,696.13 inc. VAT
Sale Price: £699.44 inc. VAT
ST 2279
2 x Grace Dining Chair with armrests
By Emmanuel Gallina
Dims: 535 x 815 x 540mm
Structure in: Black Elm
Seat in D-Antea 03 Visone
Retail Price: £1,751.04 EACH inc. VAT
Sale Price: £765.41 EACH inc. VAT
3 x Seattle Dining Chair
By Jean-Marie Massaud
Dims: 530 x 550 x 790mmH
In Hide 29 Tortora
Retail Price: £2,262.24 EACH inc. VAT
Sale Price: £871.96 EACH inc. VAT
ST 3211
2 x Seattle Dining Chair with Armrests
By Jean-Marie Massaud
Dims: 560 x 550 x 790mmH
In Hide 29 Tortora
Retail Price: £3,036.96 EACH inc. VAT
Sale Price: £1,171.31 EACH inc. VAT
DINING TABLES
Kensington Dining Table
By J. M. Massaud
Dims: 2000 x 740mmH
Top in Sahara Noir Marble
Structure in Bronzo Ferro
Retail Price: £32,000.00 inc. VAT
Sale Price: £13,437.46 inc. VAT
ST 2800
Home Hotel Round Dining Table
By J.M. Massaud
Dims: 2000 x 2000 x 740mmH
Top Mat Calacatta Oro
Structure in Black Elm
Retail Price: £13,999.68 inc. VAT
Sale Price: £5,789.19 inc. VAT
ST 3125
Mondrian Oval Dining Table
By J.M. Massaud
Dims: 2400x 1220 x 740 mmH
Feet: Champagne
Table Top: Glossy Calcutta Oro
Retail Price: £20,091.08 inc. VAT
Sale Price: £14,314.89 inc. VAT
ST 3625*
ARMCHAIRS / POUFS
New York Round Big Pouf
By R&D Poliform
Dims: 1200 x 410 x 1200 mmH
Covering finish: Y-Pelle Nabuk 08 Prussia
Feet Fin: Glossy Brown Nickel
Retail Price: £5,551.20 inc. VAT
Sale price: £2,295.36 inc. VAT
ST 3064
New York Round Small Pouf
By J. M. Massaud
Dims: 530 x 530 x 460mmH
Upholstered in Cat. D-Oxford 16 Ghiaccio
Legs in Mat Brown Nickel
Retail price £1,978.22 inc. VAT
Sale price: £1,029.11 inc. VAT
ST 2831
2 x Henry Pouf
By Emmanuel Gallina
Dims: 450 x 450 x 450mmH
Covering Finish: CAT Y-Pelle Nabuk 09 Ruggine
Structure in Oak Spessart
Retail price £1,650.24 inc. VAT EACH
Sale price: £682.40 inc. VAT EACH
ST 3123
Play Pouf
By R&D Poliform
Dims: 400 x 400 x 400mmH
Covering Finish: CAT D – E Signoria 34 Moka
Retail price: £716.00 inc. VAT
Sale price: £314.80 inc. VAT
Stanford Armchair
By J. M. Massaud
(including kidney cushion)
Dims: 705 x 645 x 710mmH
Interior Upholstered in Cat. Y Pelle Soft 13 Carbone
Exterior in Hide 02 Nero
Swivel base in Glossy Brown Nickel
Retail price: £7,911.36 inc. VAT
Sale price: £2,758.34 inc. VAT
ST 2805
KAY Lounge Armchair
By J. M. Massaud
Dims: 730 x 735 x 675mmH
Finish: Silk Leather – Cat. S Pelle 06 Roccia
Legs: Chrome Satin
Retail Price: £4,875.56 inc. VAT
Sale Price: £2,015.74 inc. VAT
ST 3164
2 x Gentleman Single Armchair
By M. Wanders
Dims: 690 x 670 x 800 mmH
Covering Fin: G-Siro 02 Lino
Base Fin: Glossy Brown Nickel
Retail Price: £4,164.34 inc. VAT EACH
Sale Price: £3,060.78 inc. VAT EACH
ST 3110
Kaori Armchair
Dims: 770 x 860 x 740mmH
Internal covering in Cat. M-Yushan 01 Sabbia
External covering in Hide 02 Nero
Feet Matt Brown Nickel
Retail Price: £8,216.52 inc. VAT
Sale Price: £3,476.15 inc. VAT
ST 3554
Santa Monica Armchair w/ Stretched Cover By J. M. Massaud
Dims: 700 x 950 x 640 mmH
Upholstered in Cat. D-Antea 05 Terra di Siena
Retail Price: £4,743.96 inc. VAT
Sale Price: £1,520.64 inc. VAT
ST 3063
New York Pouf
By J.M. Massaud
Dims: 530 x 530 x 460 mmH
Covering finish: S-Pelle Silk 12 Senape
Feet Fin: Glossy Brown Nickel
Retail Price: £2,613.00 inc. VAT
Sale Price: £997.38 inc. VAT
New York Small Square Pouf
By J. M. Massaud
Dims: 520 x 520 x 410mmH
Upholstered in Cat. G Giada Velvet
Legs in Glossy Brown Nickel
Retail Price: £1,961.16 inc. VAT
Sale Price: £828.89 inc. VAT
ST 2946
Play Pouf
By R&D Poliform
Dims: 400 x 400 x 400mmH
Covering Finish: CAT S – Pelle Silk 10 Terracotta
Retail Price: £1402.44 inc. VAT
Sale price: £561.78 inc. VAT
ST 3117
New York Rectangular Pouf
By J. M. Massaud
Dims: 1220 x 520 x 410mmH
Upholstered in Cat. E Norway 8 Avorio
Legs in Glossy Brown Nickel
Retail Price: £2,572.87 inc. VAT
Sale Price: £1,200.43 inc. VAT
## COFFEE TABLES
### Bristol Freestanding Coffee Table
**By J.M. Massaud**
- **Dims:** 1400 x 1400 x 330mmH
- **Top Fin:** Mat Calacatta Oro
- **Base Fin:** Glossy Visone 48
- **Feet Fin:** Mat Brown Nickel
- **Retail Price:** £19,563.96 inc. VAT
- **Sale Price:** £8,311.80 inc. VAT
**ST 3081**
### Creek Coffee Table
**By Jean-Marie Massaud**
- **Dims:** 420 x 520mmH
- **Creek Lower Top:** Gold Walnut
- **Creek Upper Top:** Mat Calacatta Oro
- **Structure:** Brown Nickel
- **Retail Price:** £2,879.50 inc. VAT
- **Sale Price:** £2,569.96 inc. VAT
**ST 3137**
### Westside Coffee Table
**By Jean-Marie Massaud**
- **Dims:** 500 x 500 x 475mmH
- **Top in:** Black Elm
- **Base in:** Mat Brown Nickel
- **Retail Price:** £1,159.87 inc. VAT
- **Sale Price:** £974.28 inc. VAT
**ST 3056**
### Creek Square Coffee Table
**By J. M. Massaud**
- **Dims:** 1200 x 1200 x 310mmH
- **Upper Top in:** Glossy Sahara Noir
- **Lower top in:** Black Elm
- **Structure in:** Brown Nickel
- **Retail Price:** £8,851.58 inc. VAT
- **Sale Price:** £7,435.32 inc. VAT
**ST 3011**
Mondrian Round Coffee Table
By J.M. Massaud
Dims: 550 x 550 x 480 mmH
Feet: Glossy Brown Nickel
Table Top: Black Elm
Retail Price: £1,394.64 inc VAT
Sale Price: £1,244.71 inc. VAT
ST 3082
Mondrian Round Low Coffee Table
By J.M. Massaud
Dims: 800 x 800 x 380 mmH
Feet: Glossy Brown Nickel
Table Top: Glossy Caramello 85 Lacquer
Retail Price: £1,937.52 inc. VAT
Sale Price: £1,729.23 inc. VAT
ST 3083
Mondrian Round Coffee table
By J.M. Massaud
Dims: 550 x 550 x 480 mmH
Feet Fin: Gold Polished Bronze
Table Top: Mat Zecevo
Retail Price: £3,475.68 inc. VAT
Sale Price: £1,237.48 inc. VAT
NARA Coffee Table
By J. M. Massaud
Dims: 600 x 600 x 380mmH (s)
Finish: Solid Walnut Gold
Retail Price: £3,891.11 inc. VAT
Sale Price: £1,608.40 inc. VAT
ST 3131
NARA Coffee Table
By J. M. Massaud
Dims: 500 x 500 x 470mmH (b)
Finish: Solid Walnut Gold
Retail Price: £4,242.58 inc. VAT
Sale Price: £1,754.43 inc. VAT
ST 3132
Koishi Coffee Table
By J. M. Massaud
Dims: 720 x 430 x 480mmH
Structure: Matt Burnished
Top: Mat Sahara Noir
Retail Price: £3,965.52 inc. VAT
Sale Price: £1,465.17 inc. VAT
ST 3319
Flute High Coffee Table
By Roberto Barbieri
Dims: 500 x 500 x 600mmH
Top in 5010 Painted Reflect Blue Glass
Base in Black Chrome
Retail Price: £2,524.10 inc. VAT
Sale Price: £1,043.89 inc. VAT
SOFAS
Dune Sofa
By Carlo Colombo
Please refer to plan below for layout
Upholstered in Cat E-Agadir 05 Crema
Including backrest cushions
Dims: 3550 x 2320 x 725 mmH
Cushions description:
Cat E-Agadir 05 Crema x 3 (450 x 450 x 100mm)
Cat E-Agadir 05 Crema x 5 (800 x 500 x 100mm)
Retail Price: £22,561.30 inc. VAT
Sale Price: £8,576.44 inc. VAT
Gentleman Friends 2
By M. Wanders
(optional cushions included)
Dims: 1660 x 940 x 870 mmH
Covering Fin: G-Tebe 08 Notte
Base Fin: Glossy Brown Nickel
Including the below cushions:
2 X Optional Cushion – Covering G-Tebe 08 Notte
ST 3467
1 X Optional Cushion – Covering G-Lero 01 Ecru
ST 3468
Retail Price: £8,903.52 inc. VAT
Sale Price: £3,906.06 inc. VAT
Westside Sofa
By Jean-Marie Massaud
Upholstered in Cat. E-Arcadia 05 Bronzo fabric
Including backrest cushions; NOT including scatter cushions
Scatter cushions
- Cat G-Siro 04 Notte x 3 (450 x 450mm) - £279.55 EACH
- Cat G-Siro 04 Notte x 3 (650 x 330mm) - £315.84 EACH
ST 3628
FOR AVAILABLE COMPOSITIONS PLEASE REFER TO THE NEXT SLIDE
WESTSIDE SOFA OPTIONS
Westside Sofa
By Jean-Marie Massaud
Dims: 3600 x 1200 x 380/580 mmH
Upholstered in Cat. E-Arcadia 05 Bronzo fabric
Including backrest cushions NOT including scatter cushions
Scatter cushions
- Cat G-Siro 04 Notte x 1 (450 x 450mm) - Sale Price: £234.82
Retail Price: £16,535.23 inc. VAT
Sale Price: £9,627 inc. VAT
ST 3628
Westside Sofa
By Jean-Marie Massaud
Dims: 2400 x 2400 x 380/580 mmH
Upholstered in Cat. E-Arcadia 05 Bronzo fabric
Including backrest cushions NOT including scatter cushions
Scatter cushions
- Cat G-Siro 04 Notte x 1 (450 x 450mm) - Sale Price: £234.82
Retail Price: £16,535.23 inc. VAT
Sale Price: £9,627 inc. VAT
BEDS
DREAM Upholstered Bed Frame For Panelling
By Marcel Wanders
Bed Dims: 1870 x 2147 x 285 mmH
Mattress Dims: 1600 X 2000 mm
Mattress NOT included; paneling not included.
S-Pelle Silk 06 Roccia
Bedstead type: Single Bedstead with slats
Retail price £4,399.66 inc. VAT
Sale price: £3,926.70 inc. VAT
RUGS
Frame Perla Rug
Dims: 2500 x 4000mm
Per Sqm. £893.64
Retail Price: £8,936.40 inc. VAT
Sale Price: £3,581.17 inc. VAT
ST 2862
Relief Rug Perla
Dims: 4500 x 6000mm
Per Sqm. £893.64
Retail Price: £24,128.28 inc. VAT
Sale Price: £9,931.50 inc. VAT
ST 3317
Frame Rug Grey
Dims: 2500 x 4000mm
Hand made carpet (Hand Loom technique)
Cotton weave, bamboo silk fur - 8 cm edge with shaved wool fur
Per Sqm. £893.64
Retail Price: £8,936.40 inc. VAT
Sale Price: £3,581.20 inc. VAT
ST 3179
Frame Rug Carbone
Dims: 2500 x 4000mm
Per Sqm. £893.64
Retail Price: £8,936.40 inc. VAT
Sale Price: £3,581.14 inc. VAT
### Frame Oval Rug
- **Dims:** 4500 x 3000 mm
- **Colour:** Oceano
- **Per. Sqm:** £893.64
- **Retail Price:** £12,064.14 inc. VAT
- **Sale Price:** £4,655.52 inc. VAT
- **ST 2548**
### Tratto Rug
- **Dims:** 5000 x 6000mm
- **Colour:** Carbone
- **Per. Sqm:** £893.64
- **Retail Price:** £26,809.20 inc VAT
- **Sale Price:** £10,743.61 inc. VAT
- **ST 3175**
### Relief Rug
- **Dims:** 4000 x 4000mm
- **Colour:** Camello
- **Per. Sqm:** £893.64
- Handmade with handloom technique, cotton weave, fur in Himalayan wool and bamboo silk, bouclé in Himalayan wool
- **Retail Price:** £14,298.24 inc. VAT
- **Sale Price:** £5,885.33 inc. VAT
- **ST 3252**
### Plain Rug
- **Dims:** 4500 x 4000mm
- **Colour:** Tortora
- **Per Sqm.** £974.88
- Handmade carpet (Hand Loom technique)
Cotton weave, bamboo silk fur - 8 cm edge with shaved wool fur
- **Retail Price:** £17,547.84 inc. VAT
- **Sale Price:** £6,446.40 inc. VAT
- **ST 3170**
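For reference, the rug prices in this list follow directly from the stated dimensions and per-square-metre rate. A quick sketch of the arithmetic, using figures from the entries above:

```python
def rug_retail(width_mm, length_mm, per_sqm):
    """Retail price = area in square metres x the per-sqm rate."""
    area_sqm = (width_mm / 1000) * (length_mm / 1000)
    return round(area_sqm * per_sqm, 2)

# Frame Perla: 2500 x 4000mm = 10 sqm at £893.64/sqm
print(rug_retail(2500, 4000, 893.64))  # 8936.4, matching £8,936.40
# Plain Rug: 4500 x 4000mm = 18 sqm at £974.88/sqm
print(rug_retail(4500, 4000, 974.88))  # 17547.84, matching £17,547.84
```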
LIGHTS
2 x Magic Circus Pendant Lamp 02-90
250 mm diameter, 900 mm h
Polished Brass and mouth blown glass
Retail Price: £816.00 EACH inc. VAT
Sale Price: £408.00 EACH inc. VAT
ST 2401
Nahooor Achillea Light
In Brushed Brass Murano Glass
Retail Price: £1,920.00 inc. VAT
Sale Price: £960.00 inc. VAT
ST 2930
Tru Floor Lamp
by Roberto Paoli
Retail Price: £1,776.00 inc. VAT
Sale Price: £888.00 inc. VAT
ST 2236
Bomma TIM Floor Lamp
Dims: 450/550 x 290/390mm
Hand-blown clear glass
Shape and Color may vary
Retail Price: £1,680.00 inc. VAT
Sale Price: £840.00 inc. VAT
ST 2677
Claritas Floor Lamp
By Vico Magistretti
Dims: 470 x 500 x 1640mmH
Finish in painted black aluminum
Retail Price: £924.00 inc. VAT
Sale Price: £462.00 inc. VAT
COMPOSITIONS
Wall System Composition
**Wall System**
**Dims:** 5240 x 416 x 2673 mmH
**Description:**
Flap doors, sides/shelves in Oak Spessart;
Back panels: Oak Spessart and mat Canapa 35;
Unit / Grids / Hanging shelves in metal lacq. Ottone 24
Inserts: Hide 29 Tortora
LED Lighting
**Retail Price:** £60,671.00 inc VAT
**Sales Price:** £25,200 inc. VAT
*(Delivery and Installation TBC)*
Bookshelf Model Wall System
**Dims:** 5240 x 416 x 2673 mmH
**Description:**
- Side finish and shelves: Mat Castoro 88
- Back panels: Gold Walnut
**Sales Price:** £15,755.72 inc. VAT
*(Delivery and Installation TBC)*
Code Wall Panelling
Dims: 6304 x 450 x 2780 mmH
Description:
Wall panelling: Mat special finishing: Lino Scuro
Wall panelling profiles: Ardesia
Retail Price: £20,137.30 inc. VAT
Sale Price: £6,829.80 inc. VAT
(Delivery and Installation TBC)
Code Wall Fixing Shelves
Dims: 3600 x 501 x 60 mmH
Description:
Profile Hanging Cabinets: CHAMPAGNE
Thick Hanging Shelf Finish: WALNUT GOLD
Electrified Channel Thick Hanging Shelf Finish: CHAMPAGNE
Equipment: 2xUK Sockets/USB
Retail Price: £7,611 inc. VAT
Sale Price: £2,832.48 inc. VAT
WALL SYSTEM COMPOSITION
By R&D Poliform
Dims: 5098 x 446 x 2220 mmH FFL
Structure Finish: OAK SPESSART
Back Panels Finish: OAK SPESSART / MAT CORDA 26
Grids Finish: MAT MOKA 49
Jet Flap Door: OAK SPESSART
Jet Insert: HIDE NERO 02
Jetting out Display doors: 2000 CLEAR EXTRAL. GLASS
LED Lamps: ARDESIA
Retail Price: £46,127.64 inc. VAT
Sale Price: £15,697.62 inc. VAT
KITCHENS
Infinity hood with shelves
Retail Price: £20,775.46 inc. VAT
Sale Price: £7,750.83 inc. VAT
ST 3621
‘Minus’ Blanco worktop
60mm thick with all accessories, integrated steel sink (700mmx400mm), CEA tap (3 hole tap in satined steel finish), and dekton chopping board. Includes 2x 2100W x 60H x 840Dmm/ Dekton chopping board 600Wx60Hx840Dmm/worktop legs 961Hx 848Dmm)
Retail Price: £49,037.76 inc. VAT
Sale Price: £23,944.62 inc. VAT
ST 3622
Steel worktop
Dims: 3014Wmm x 6Hmm x 1280Dmm
Retail Price: £13,428.46 inc. VAT
Sale Price: £5,009.85 inc. VAT
SHAPE SHAKER SYSTEM
Finish Structure: Moka Anodized Aluminum
Finish Back Panel: Carrara Touch Mdi
Finishing Tube: Embossed Lacquered Moka
Retail Price: £36,794.22 inc. VAT
Sale Price: £10,718.43 inc. VAT
## Appliances
### Gaggenau CV282100
Venting Cooktop 80cm, Flush, Double Flex With Twist
- **Retail Price:** £3,910.86 inc. VAT
- **Sale Price:** £2,581.17 inc. VAT
- **ST 2772**
### Wall-mounted Hood
Dims: 900 x 500 x 1050 mmH
Stainless steel, with matt lacquered carbone paneling
- **Sale Price:** £1,199.33 inc. VAT
- **ST 2377**
### Quooker PR03
Fusion Square Stainless Steel Tap
- **Retail Price:** £1,400.00 inc. VAT
- **Sale Price:** £975.98 inc. VAT
- **ST 3602**
### Blanco FWD Waste Disposal
Medium Evolution 100
- **Retail Price:** £732 inc. VAT
- **Sale Price:** £403.60 inc. VAT
- **ST 3610**
From the desk of Rachel Lawson...
How many times have you started a sentence with “I just can’t comprehend WHY xyz ….”? Usually there is a hint of frustration behind that statement. Sometimes, but not always, maybe a hint of intolerance? It’s understandable and human nature. Too bad we are asked to rise above that because it’s not easy!! To merely tolerate that which we do not understand is not enough.
I heard somewhere the other day that “It’s about compassion, not comprehension”. That rings so true of what Christ asks of us. Other people's experiences are not your experiences, their feelings are not your feelings, their traumas are not the same as yours.
Goodness knows there is so much we cannot (and perhaps aren't meant to) understand about God's plan. That doesn't mean we shouldn't bother TRYING to understand; otherwise we wouldn't have sermons, study the scripture, or have Sunday School. We are to strive for understanding, but it is not as important as the compassion that Jesus so frequently modeled for us.
We must also try to understand where our neighbors are coming from, but it is compassion that is crucial if we are to emulate Christ’s love the way he commanded. Next time, I am going to try and substitute “I cannot comprehend” with “I have compassion for” and just see what happens. I have a feeling my conversation (internal dialog or with others) will take on a different tone. Perhaps yours would, too?
—Rachel
Food Pantry
WRPC
SERVING EVERY FRIDAY
from 10:00am—Noon.
A very special thanks goes out to all of our Food Pantry volunteers who are helping to FEED THE HUNGRY in our community.
People in our Prayers
Rev. Dan Clark
Holston Camp
Doris Blanchard Family
Ted Germroth
Benjamin Salyer
Sue Hall
(Steve Hall’s mother)
Mike Lewis
Sharon Petke
Conner Caldwell Family
Lynda Snook
Laci, Nicholas &
Robin Lodal
Ann Kibler
Kirk and Lola Finch
Travis & Kathy Adams
(Collin’s parents)
Chuck Green
Marty Qualls
Linda Dillon
Missions in our Prayers—4th Quarter
Local Missions
Meals on Wheels
Waverly Road Childcare Center
CarePortal
Regional / National Missions
UKirk (ETSU)
International Missions
Moyo wa Afrika, Tanzania
We will leave names on the Prayer Request List for three weeks unless you notify the Church Office that you would like a name to remain on the list for an extended amount of time.
This Week at WRPC
Indoor and Online Worship at 11:00am.
| Date | Time | Event |
|------------|----------|--------------------------------------------|
| Sunday 10/27 | 9:45am | Sunday School |
| | 11:00am | Worship Service / Remembering the Saints |
| Monday 10/28 | 6:00pm | Youth Group, YS |
| Tuesday 10/29 | 9:00am | Walking Group, Greenbelt Holston Valley Trailhead Entrance |
| | 6:00pm | Has Beens, FH |
| Wednesday 10/30 | 11:15am | NE TN Faith Leaders Co-Hort Lunch, FH |
| | 1:00pm | ZOOM Bible Study |
| | 7:00pm | Chancel Choir Practice |
| Thursday, 10/31 | 9:00am | Trick-or-Treating Event with Lincoln Elementary |
| Friday 11/1 | 10:00am | Food Pantry, FH |
SMILE for the Month of October
Our wish list for October includes new or gently used coats and hoodies, and new stocking stuffer items.
Bible Study Every Wednesday!
Join us every Wednesday at 1:00pm for ZOOM Bible Study. We will send out the information on how to tune in and join the study and conversation!
Trick-or-Treating Event with Lincoln Elementary School!
Thursday, October 31
Time: 9:00am-12:30pm
Just bring your candy and leave it in Rachel’s office. Thanks so much for your contributions and participation!
Mission Moment Update...
October 20, 2024
Meals on Wheels—Celebrating 50 years!
2024 marks the 50th anniversary of Meals on Wheels of Kingsport at Waverly Road Presbyterian Church. Meals on Wheels started in Kingsport in 1972, but it was in 1974 that our church provided Meals on Wheels with a kitchen to use for meal preparation. We have church members who began to volunteer with Meals on Wheels in 1974 and are still volunteering today. From 1974-1985, our kitchen was the only kitchen used for meal preparation. There are several pictures attached to this article that show a cook team preparing meals in the kitchen in the 70’s.
Waverly Road Presbyterian Church has had a special relationship with Meals on Wheels for a very long time. That is demonstrated by the number of people in our church who volunteer now, or have volunteered in the past, by preparing or delivering meals. Meals on Wheels of Kingsport serves over 200 meals every day, and each meal costs approximately $3. No recipient is ever asked to pay for a meal. These free meals are made possible by the generous in-kind support that is provided by our church in the form of kitchen space, food storage, and preparation space. Without the support from our church and First Presbyterian Church, Meals on Wheels in Kingsport would look very different than it does today. It would be very difficult to deliver over 200 meals free of charge to qualified recipients. Commercial kitchen space is prohibitively expensive and nearly impossible to find in our area.
Meals on Wheels of Kingsport is very thankful for the special relationship and partnership that we have with Waverly Road Presbyterian Church. Recently the Meals on Wheels refrigerator here had temperature control issues. A church member stepped forward and (Continued on next page)
To all:
We are back to normal (for now), delivering 46 bags this past Friday.
Through the end of September (3 quarters) we delivered 1596 bags, compared to 1438 for the comparable period in 2023, an 11% year-to-year increase.
As always, thank you for your support and prayers.
—Pete Lodal
Meals on Wheels—Celebrating 50 years!, cont’d.
secured permission for Meals on Wheels to move some food items temporarily into the church refrigerator. Church members were also instrumental in getting the Meals on Wheels refrigerator repaired. The work and dedication of so many members of our congregation make it possible for Meals on Wheels to serve over 2 million meals, free of charge, for over 50 years. If you would like to support Meals on Wheels as a volunteer, please visit the website www.mealsonwheelskingsport.org. You truly are the hands and feet of Christ in our community. Thank you.
Stephen Ministry—Anticipatory Grief (continued)
This week we will conclude our discussion on anticipatory grief, which is a state of deep, painful sorrow that occurs before an impending loss. Next week, there will be a short article about a couple of times I experienced this type of grief.
What are the benefits of anticipatory grief? Studies are conflicted on whether anticipatory grief is beneficial or not. It appears to depend on the individual. For some, it may help them sort out their feelings and make preparations for moving forward. Experts contend that anticipatory grief allows a person to:
- Confront their fears rather than avoid them
- Deal with any unfinished business, both practical and emotional
- Clarify any misunderstanding or express what should have been said earlier
- Say their goodbyes
- Make preparations for life moving forward
By doing so, a person may have less distress and be able to navigate bereavement when the loss occurs.
For others, however, anticipatory grief may only serve as a prelude to conventional grief, neither bolstering the person for the harsh reality of their loss nor preparing them for life ahead.
Not everyone feels anticipatory grief but it is common. Feeling grief before the loss does not mean you are giving up or abandoning a loved one. Instead, anticipatory grief may give you a chance to gain meaning and closure you may not have had otherwise. You may feel like you are somewhere between holding on and letting go. While it is painful to feel this way, the truth is it is possible to live with both feelings. You don’t have to choose between them. Grief serves a purpose, whether it occurs before or after the loss.
Researchers have identified four phases and tasks of grief:
- Accepting the coming loss
- Working through the pain (reflecting & coming to terms with feelings)
- Adjusting to a new reality (whether it be a death, employment change, autistic child, etc.)
- Connecting to life in a different way as you move forward.
Don’t Go It Alone: Express Your Pain – Staying strong when we are facing a challenge can be difficult. Give yourself permission to feel sad and ask for support from other people in your life. Nobody should have to face anticipatory grief alone. Keeping your feelings to yourself can lead to loneliness and isolation. It can be upsetting when someone tries to tell you what to do or how to feel. This can cause you to react to this unsolicited advice with anger or simply shut down. Neither of these will help.
It is recommended that a person experiencing anticipatory grief find a good listener who will not try to “fix things”, tell them how they should feel, or use platitudes such as “I know how you feel”. This describes a Stephen Minister. If you or someone you know is experiencing anticipatory grief, please contact one of our Stephen Leaders – Dave Petke, Susan Foster, Linda Qualls, or Barbara Lane. Our Stephen Ministers care for and are here for you.
Daylight Saving Time ends Saturday, November 2. Be sure to set your clocks back 1 hour before you go to bed Saturday night!
Butter Toffee Pretzels
Ingredients
1 (16 oz) bag Mini Pretzels
1 cup Brown Sugar
1/2 cup Butter
1/4 cup Light Corn Syrup
1 tsp Vanilla
1/2 tsp Baking Soda
1 bag Heath Toffee Bits
Directions
Preheat oven to 200 degrees. Line a large baking sheet with parchment paper and set aside.
1. Pour mini pretzels into a large bowl and set aside.
2. In a medium saucepan, add brown sugar, cubed butter, and corn syrup. Bring to a boil over medium heat. Boil for 5 minutes, stirring constantly.
3. Remove from stove top and whisk in vanilla and baking soda.
4. Pour hot toffee mixture over the pretzels and toss to coat evenly. Pour half the bag of toffee bits over the pretzels and toss again to coat evenly.
5. Spread pretzels onto the prepared baking sheet and distribute pretzels around evenly.
6. Bake for 1 hour, stirring every 15 minutes.
7. Once baked, remove from oven and sprinkle remaining half bag of toffee bits over the pretzels immediately.
8. Allow pretzels to set and cool.
9. Once set and cooled, break into small clusters. |
Traceable® Conductivity Guide
ISO 9001 QUALITY-CERTIFIED
ISO 17025 CALIBRATION LABORATORY ACCREDITED
Traceable® Portable Conductivity Meter
Quick answers, accurate results on demand
Measuring conductivity quickly doesn’t mean sacrificing accurate results or product purity. In an instant, the Portable Conductivity Meter automatically selects the proper range and displays the exact answer without hassles. This auto-ranging feature may be turned off to accommodate user-entered ranges. All special calibration data is saved even when the unit is turned off. Fulfills all government measurement requirements plus CAP, ASTM, NCCLS, and ACS.
Range in micromhos is 0.01 to 200,000; in megohms, 0.001 to 20,000; in dissolved solids/parts per million, 0.1 to 20,000; and in salinity, 2.0 to 42.0 (oceanographic units). Accuracy is ±0.3%.
Four calibration points may be entered into memory utilizing solution standards
Results are displayed in conductivity (micromhos/cm and microsiemens/cm), resistivity (megohms), dissolved solids (parts per million), concentration (user-specified units), salinity (oceanographic units), and temperature (°F/°C). Probe is 5⅛ inches long and ½ inch in diameter, with a cable length of 59 inches. Temperature compensation is automatic (2% per °C), user-designated (0.000 to 5.000% per °C), or absolute. K-factor may be adjusted to match each probe. Plastic accessory probe is available.
Supplied instant-response probe contains platinum electrodes that deliver highly accurate readings
Internal solid-state thermistor (for automatic/manual temperature compensation) permits all readings to be referenced to the international standard of 25°C. Exclusive “temperature compensation disable function” fulfills USP-NF (United States Pharmacopoeia, National Formulary, 645 Conductivity Measurement) requirement. Use to check the purity of water from stills and demineralizers; to analyze seawater; and to make up solutions. Simply turn on, insert probe, and read—easiest unit ever designed for routine analysis, quality control, and research. Elimination of “operator technique” permits everyone in the lab to report identical readings. Tough, chemical-resistant ABS housing assures a long life in severe lab or harsh plant environments. Large ½-inch-high LCD digits are easy to read.
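The automatic temperature compensation described above is a linear correction to the 25°C reference. A minimal sketch of the relationship, for illustration only (the function name and sample values are ours, not the instrument's firmware); the 2%/°C default and the 0.000–5.000%/°C user range come from the catalog text:

```python
def compensate_to_25c(measured_umho_cm, temp_c, alpha_pct_per_c=2.0):
    """Refer a raw conductivity reading to the 25 deg C international standard.

    alpha_pct_per_c: linear temperature coefficient. The meter's automatic
    mode uses 2%/deg C; user-designated mode accepts 0.000 to 5.000%/deg C.
    """
    return measured_umho_cm / (1.0 + (alpha_pct_per_c / 100.0) * (temp_c - 25.0))

# A solution reading 1100 umho/cm at 30 deg C corresponds to 1000 umho/cm at 25 deg C:
print(round(compensate_to_25c(1100.0, 30.0), 1))  # 1000.0
```

Setting the coefficient to zero reproduces the "temperature compensation disable" behavior required by USP-NF ⟨645⟩, where the raw reading is reported uncorrected.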
Traceable® to NIST
To assure accuracy, a certificate is provided to indicate instrument traceability to standards provided by NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory.
Permanent hard copy record
Recorder jack allows continuous monitoring and a permanent record. Adjustable control allows unit to be calibrated to solution standards.
Unit size is 3¾ x 6¾ x 1½ inches. Weight is 16 ounces. Battery is supplied. Replacement battery Cat. No. 1112.
Cat. No. 4063 Traceable® Portable Conductivity Meter
| Cat. No. | ACCESSORIES |
|----------|-------------|
| 4062 | Replacement Conductivity Probe– Glass, K=1 Probe range is 0.05 to 200,000 micromhos. |
| 4061 | Accessory Conductivity Probe– Unbreakable Epoxy, K=1 Probe range is 1.0 to 200,000 micromhos. |
(For a list of accessory Flow-Thru Cells see Page 8)
---
**TRACEABLE DIGITAL CONDUCTIVITY METERS**
| Specification | Portable (4063) | Bench (4163) | H₂O Tester (4168) | Dual-Display (4169) | Conductivity (4070) | Expanded Range (4075) |
|---------------|-----------------|--------------|-------------------|---------------------|---------------------|------------------------|
| Range, Micromhos/Microsiemens | 0.01 to 200,000 | 0.01 to 200,000 | 0.1 to 20,000 | 0.1 to 20,000 | 0.01 to 200.0 | 0.01 to 200,000 |
| Range, Megohms | 0.001 to 20,000 | 0.001 to 20,000 | N/A | N/A | 2.00 to 20.00 | N/A |
| Range, Dissolved Solids/Parts Per Million | 0.1 to 20,000 | 0.1 to 20,000 | N/A | N/A | N/A | 0.1 to 120,000 |
| Range, Salinity/Oceanographic Units | 2.0 to 42.0 | 2.0 to 42.0 | N/A | N/A | N/A | N/A |
| Accuracy | ±0.3% | ±0.3% | ±0.4% | ±(2% full scale + 1 digit) | ±0.4% | ±0.4% |
| Temperature Compensation | Auto and manual | Auto and manual | Automatic | Auto and manual | Automatic | Automatic |
| Output | Recorder | Serial | None | Serial | None | Recorder |
| Size/Inches | 3.75 x 6.75 x 1.5 | 8.25 x 6 x 3.5 | 6.25 x 3.2 x 1.33 | 7 x 3 x 1.25 | 3.25 x 4.5 x 1.5 | 5 x 2.25 x 5.75 |
---
**INDEX**
Calibration Standards ........................................... 3
Data Logger ......................................................... 7
Electrode Holder .................................................. 11
Flow-Thru Cells ..................................................... 8
Meters ............................................................... 2, 4, 5, 6, 7
Pumps ................................................................. 9, 10
Storage Solution .................................................... 8
Tight Ties ............................................................. 10
Wipes ................................................................. 11
Traceable® Conductivity Calibration Standards
Use with any meter
Traceable® Conductivity Calibration Standards are the most accurate available. Accuracy at 25°C is ±0.25 micromhos for 5 and 10 micromho solutions and ±0.25% for other solutions or the uncertainty shown on the certificate, whichever is greater. Standards are 100% compatible with all makes of equipment.
Individually serial-numbered Traceable® Certificate is provided from our ISO 17025, A2LA accredited calibration laboratory. It indicates traceability to standards provided by NIST (National Institute of Standards and Technology). Each bottle is labeled for calibrating conductivity (micromhos), resistivity (ohms), and dissolved solids (parts per million). Supplied complete, each bottle comes with step-by-step calibration instructions, traceability information, individual temperature compensation chart, and Traceable® Certificate. Supplied in a 16-ounce glass bottle.
| Cat. No. NIST/17025 Cert | Cat. No. NIST/17025 A2LA Cert | Micromhos | Megohms | TDS/PPM |
|--------------------------|-------------------------------|-----------|---------|---------|
| 4270 | 4570 | 5 | 0.2 | 3.3 |
| 4065 | 4565 | 10 | 0.1 | 6.6 |
| 4066 | 4566 | 100 | 0.01 | 66 |
| 4067 | 4567 | 1000 | 0.001 | 666 |
| 4173 | 4573 | 1413 | 0.00071 | 933 |
| 4068 | 4568 | 10000 | 0.0001 | 6666 |
| 4069 | 4569 | 100000 | 0.00001 | 66666 |
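The three unit columns in the table above are related by two simple conversions: resistivity in megohm·cm is the reciprocal of conductivity in micromhos/cm, and the TDS column implies a factor of roughly 0.666 ppm per micromho (the printed values are rounded). A short sketch of those relationships; the function names are ours and the 0.666 factor is inferred from the table, not a stated specification:

```python
def micromhos_to_megohms(umho_per_cm):
    # Resistivity (megohm-cm) is the reciprocal of conductivity (umho/cm):
    # e.g. the table's 1000 umho standard corresponds to 0.001 megohm.
    return 1.0 / umho_per_cm

def micromhos_to_tds_ppm(umho_per_cm, factor=0.666):
    # TDS estimate; the table's 1000 umho -> 666 ppm entry implies ~0.666.
    return factor * umho_per_cm

print(micromhos_to_megohms(1000))         # 0.001
print(round(micromhos_to_tds_ppm(1000)))  # 666
```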
Traceable® One-Shot™ Conductivity Standard
Single-use standards eliminate concern about external container contamination. Calibration is made in the standard's vial, which fits all probes. The extra-large opening (1¾ inches in diameter) and 3½-inch depth allow probe calibration to take place in the standard's polyethylene container. One-Shot™ Traceable® standards accommodate virtually all conductivity probes and are ideal for lab or field conditions.
Compatible with all meters
One-Shot™ Traceable® Conductivity Calibration Standards are the most accurate available: ±0.25 micromhos for 5 and 10 solutions; other solutions are ±0.25% or the uncertainty shown on the certificate, whichever is greater. Standards are 100 percent compatible with all makes of equipment. Each bottle is labeled for calibrating conductivity (micromhos), resistivity (ohms), and dissolved solids (parts per million).
Traceable® to NIST for accuracy
Ultra-accurate solutions are from our ISO 17025 calibration laboratory and are Traceable® to standards provided by NIST (National Institute of Standards and Technology). Supplied with individual serial-numbered Traceable® Certificate and individual temperature compensation chart. Supplied as a pack of six One-Shot™ Traceable® calibration standards. Each standard contains 100 milliliters.
| Cat. No. | QTY | Micromhos | Megohms | TDS / PPM |
|----------|-----|-----------|---------|-----------|
| 4271 | 6/Pk| 5 | 0.2 | 3.3 |
| 4175 | 6/Pk| 10 | 0.1 | 6.6 |
| 4176 | 6/Pk| 100 | 0.01 | 66 |
| 4177 | 6/Pk| 1000 | 0.001 | 666 |
| 4174 | 6/Pk| 1413 | 0.00071 | 933 |
| 4178 | 6/Pk| 10000 | 0.0001 | 6666 |
| 4179 | 6/Pk| 100000 | 0.00001 | 66666 |
| 4172 | 6/Pk| Assortment (one each of above, except 4271) | | |
Traceable® Bench Conductivity Meter
Turn on, insert probe, read results
Fulfill all official lab analysis regulations and reagent-grade water standards for CAP, ASTM, NCCLS, and ACS by using the Bench Conductivity Meter. Simply turn on, insert probe into solution, and read accurate results. Unit automatically encompasses all ranges and selects the most appropriate for the current reading. Disable the auto-range function when user-defined ranges are desired. Control allows user to calibrate to solution standards.
Range in micromhos is 0.01 to 200,000; in megohms, 0.001 to 20,000; in dissolved solids/parts per million, 0.1 to 20,000; and in salinity, 2.0 to 42.0 (oceanographic units). Accuracy is ±0.3%.
Traceable® to NIST
To assure accuracy, a certificate is provided to indicate instrument traceability to standards provided by NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory.
State-of-the-art microcomputer processor and unique software program allow four calibration points to ensure complete accuracy over the entire range. Readings are displayed in conductivity (micromhos/cm), resistivity (megohms), total dissolved solids (milligrams per liter), salinity (oceanographic units), concentration (user specified units), and temperature (Celsius/Fahrenheit). K factor may be adjusted to match each probe. Specifically designed to measure conductivity in water analysis, biology, chromatography, food, and PC board rinsing.
RS-232 serial output
Readings may be downloaded to a data logger or computer for analyzing or reporting at a later time. Makes hard copy results a breeze—no more hand-scrawled notes to decipher.
Supplied probe contains platinum electrodes and a solid-state thermistor for automatic/manual temperature compensation. Readings are automatically referenced to the international standard of 25°C. Exclusive "temperature compensation disable function" fulfills USP-NF (United States Pharmacopoeia, National Formulary, 645 Conductivity Measurement) requirement.
ABS plastic housing withstands the roughest lab environments. Unit comes with 115 VAC adapter, probe replatinizing current, a probe holder arm, battery, and Traceable® Certificate. Size is 8 1/4 x 6 x 3 1/2 inches and weighs 1 1/2 pounds. Replacement battery Cat. No. 1112.
Cat No. 4163 Traceable® Bench Conductivity Meter
| Cat. No. | ACCESSORIES |
|----------|-------------|
| 4062 | Replacement Conductivity Probe– Glass, K = 1 Probe range is from 0.05 to 200,000 micromhos. |
| 4061 | Accessory Conductivity Probe– Unbreakable Epoxy, K=1 Probe range is from 1.0 to 200,000 micromhos. |
(For a list of accessory Flow-Thru Cells see Page 8)
Traceable® Conductivity Meter
Traceable® Conductivity for pure water
To assure accuracy, a certificate is provided to indicate instrument traceability to standards provided by NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory. Use to verify the purity of water from stills, deionizing, and reverse osmosis equipment.
Range in micromhos is 0.1 to 200.0, in megohms is 2.00 to 20.00. Accuracy is ±0.4%. Range is from ultra pure to tap water. Adjustment permits calibration to solution standards.
Easy to use
Ideal for routine analysis, quality control, and research. Elimination of "operator technique" permits everyone in the lab to report identical readings.
Supplied probe contains platinum electrodes and a solid-state thermistor (for automatic temperature compensation). Readings are automatically referenced to the international standard of 25°C. Size is 3⅛ x 4⅝ x 1½ inches. Weight is 8.9 ounces. Replacement battery Cat. No. 1112.
Cat. No. 4070 Traceable® Conductivity Meter
| Cat. No. | Accessories |
|----------|--------------------------------------------------|
| 4079 | Replacement Probe– Glass, K = 1 |
| 4074 | Accessory Probe– Unbreakable Epoxy, K = 1 |
| 4077 | Accessory Glass Probe– K=0.1, increases sensitivity of low micromho range 10 times |
| 4078 | Accessory Probe Unbreakable Epoxy– K = 10, increases sensitivity of high micromho range 10 times |
| 4013 | Accessory AC Adaptor 115 VAC/60 Hz |
(For a list of accessory Flow-Thru Cells see Page 8)
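The K = 0.1 and K = 10 accessory probes listed above change sensitivity by scaling the raw conductance the meter measures. A hedged sketch of that relationship (our own illustrative function, not the instrument's internals):

```python
def cell_conductance_us(conductivity_us_cm, k_cm_inv):
    # A probe with cell constant K reads conductance = conductivity / K,
    # so K = 0.1 yields 10x the signal of K = 1 for the same dilute
    # solution, while K = 10 divides the signal by 10 for high ranges.
    return conductivity_us_cm / k_cm_inv

# A 5 uS/cm pure-water sample through a K = 0.1 probe gives 10x the
# conductance signal that a K = 1 probe would:
print(cell_conductance_us(5.0, 0.1))
print(cell_conductance_us(5.0, 1.0))
```

This is why the meter lets the K factor be adjusted to match each probe: the instrument multiplies the measured conductance back by K to report true conductivity.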
Traceable® Pure H₂O Tester
Meets all lab certification requirements
Specifically designed to test water from stills, demineralizers, deionizers, and reverse osmosis equipment. Meets all lab certification requirements for pure water analysis for CAP, ASTM, NCCLS, USP, and ACS. Single-purpose unit complies with all accreditation analysis requirements. Designed for labs obligated to maintain a periodic check of water purity. To assure accuracy, a Traceable® Certificate is provided to indicate instrument traceability to standards provided by NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory.
Accurate readings in an instant
Range is 0.1 to 20,000 micromhos (from pure to raw water). Accuracy is ±0.4% of full scale. Simple operation eliminates errors and ensures everyone reports identical results. Supplied probe (cable length of 40 inches) provides an instant response. Internal solid-state thermistor ensures all readings are automatically referenced to the international standard of 25°C. Size is 6⅛ x 3⅝ x 1⅛ inches and weight is 11 ounces. Supplied ready to operate with carrying case, instructions, probe, 9-volt battery, and Traceable® Certificate. Replacement battery Cat. No. 1112.
Cat. No. 4168 Traceable® Pure H₂O Tester
Traceable® Expanded-Range Conductivity Meter
Digital NIST Traceable® Conductivity Meter for labs demanding quick answers and accuracy
Meets lab analysis regulations for measuring reagent grade water for CAP, ASTM, NCCLS, and ACS. Labs wanting to maintain accreditation must maintain a periodic check of water purity. Unit is 100% compatible with all accreditation analysis requirements.
Traceable® to NIST
To assure accuracy, a certificate is provided to indicate instrument traceability to standards provided by NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory. Accuracy is ±0.4% of full range. Expanded range provides five scales with readings from 0.01 to 200,000 micromhos. Range is from ultra pure water to beyond seawater.
Perfect water tester
Use this meter to check the purity of water from stills, deionizers, and reverse osmosis; to test laboratory glassware rinsing; to measure total dissolved solids; and to make up solutions. Specifically designed to measure conductivity in water analysis, biology, chromatography, food, electronics, dairies, and PC board rinsing.
Range in micromhos is 0.01 to 200,000 and range in dissolved solids/parts per million is 0.1 to 120,000. Accuracy is ±0.4%.
Reproducible readings
Operation is extremely simple—turn on, insert probe, and read. Made for routine analysis, quality control, and research. Simple operation virtually eliminates "operator technique" and permits everyone in the lab to report identical readings.
Day after day dependability
Tough, chemical-resistant ABS housing assures a long life in severe lab or harsh plant environments. The rugged, handheld/bench unit is completely portable. Large, bright, half-inch digits are easy to read.
Answers in five seconds
Glass probe contains platinum electrodes for instant response and a solid-state thermistor for automatic temperature compensation. All readings are automatically referenced to the international standard of 25°C. Recorder jack allows continuous monitoring and a permanent record. A control knob permits calibration to solution standards. Size is 5 x 2 1/4 x 5 3/4 inches and weight is 12 ounces. Probe has a diameter of 1/2 inch, length of 5 1/4 inches, and cable length of 59 inches. Supplied ready to operate with instructions, probe, 9-volt battery, Traceable® Certificate, recorder jack, and electronic calibrator. Replacement battery Cat. No. 1112.
Cat. No. 4075 Traceable® Expanded-Range Conductivity Meter
| Cat. No. | ACCESSORIES |
|----------|-------------|
| 4079 | Replacement Probe—Glass, K = 1 |
| 4074 | Accessory Probe—Unbreakable Epoxy, K = 1 |
| 4077 | Accessory Glass Probe—K = 0.1, increases sensitivity of low micromho range 10 times |
| 4078 | Accessory Probe Unbreakable Epoxy—K = 10, increases sensitivity of high micromho range 10 times |
| 4013 | Accessory AC Adaptor 115 VAC/60 Hz |
(For a list of accessory Flow-Thru Cells see Page 8)
Traceable® Dual-Display Conductivity Meter
Two displays simultaneously show conductivity readings and temperature measurements. Measures conductivity in three ranges: 0.1 to 199.9 micromhos (0.1 micromho resolution), 0.001 to 1.999 millimhos (0.001 millimho resolution), and 0.01 to 19.99 millimhos (0.01 millimho resolution). Accuracy is ±(2% of full scale plus 1 digit). Automatic and manual temperature compensation is 32.0 to 140.0°F and 0.0 to 60.0°C. Resolution is 0.1 and accuracy is ±0.8°C.
Traceable® Certificate is provided to indicate instrument traceability to NIST (National Institute of Standards and Technology) from our ISO 17025 calibration laboratory.
Calibrate using solution standards
Unit has a controller to adjust a probe's K (constant) factor and to calibrate to solution standards. Easy-to-read, jumbo-size digits are 1 3/8 inches high. Serial RS-232 output allows connection to a computer or data logger for monitoring and storing results. At the touch of a button the instrument recalls highest, lowest, and average readings. A data hold button "freezes" the display to capture readings.
Supplied complete
Unit is supplied with epoxy probe (cable length is 40 inches), Traceable® Certificate, 9-volt alkaline battery, and serial computer output. Size is 7 x 3 x 1 1/4 inches and weight is 9.5 ounces. Replacement battery Cat. No. 1112.
Cat. No. 4169 Traceable® Dual-Display Conductivity Meter
CAT. NO. ACCESSORIES FOR 4169
4136 Data Acquisition System Accessory—Complete DAS-3™ Data Acquisition System captures, displays, and stores readings on any PC. Information can be imported into databases. A 3.5-inch diskette (Windows® and DOS), and 5-foot serial cable with D9F plug are supplied.
4325 Accessory Data Logger—(pictured at right) Complete DAS-4™ System captures and stores up to 8000 bytes (over 1000 readings). Readings may be taken at intervals from 1 second to 99 hours. Stored readings may be downloaded to any PC and viewed. Can be read as-is or imported to spreadsheets, databases, and statistical programs. Supplied complete with 36-inch serial cable with D9F computer plug, 3.5-inch diskette (Windows® and DOS), and four AA alkaline batteries. Size is 5 x 3 x 1 inches. Weight is 7 ounces.
4138 Easy-Use® Accessory Adaptor 115 VAC
Conductivity/pH Universal Flow-Thru Adaptor
Flow-thru adaptor is universally designed to accept all ½-inch (12 mm) diameter conductivity probes and pH electrodes. Allows constant monitoring of flowing fluids with a standard dip probe. Two O-rings provide a secure, leakproof seal.
Connectors accept tubing with a ¼-inch, ⅜-inch, ½-inch, and ¾-inch inside diameter. Cell volume depends on the positioning of the probe (approximately 2 to 4 ml). Flow rate may be from 0.001 to 50 milliliters per minute. Constructed of Teflon® (universally chemically inert), it may be in constant use at temperatures from -100 to +500°F. Size is 1-inch diameter x 2.2 inches.
Cat. No. 4167 Conductivity/pH Universal Flow-Thru Adaptor
Platinum Plate Glass Flow-Thru Conductivity Cells K=1 with Automatic Temperature Compensation
Cat. No. 4054 Conductivity Flow Cell K=1 with Temperature Sensor
For use with Conductivity Meter Nos. 4063 and 4163
Cat. No. 4057 Conductivity Flow Cell K=1 with Temperature Sensor
For use with Conductivity Meter Nos. 4070 and 4075
Platinum Plate Glass Flow-Thru Conductivity Cells K=0.2 with Automatic Temperature Compensation
Cat. No. 4056 Conductivity Cell K=0.2
For use with Conductivity Meter Nos. 4063 and 4163
Cat. No. 4055 Conductivity Cell K=0.2
For use with Conductivity Meter Nos. 4070 and 4075
Redi-Stor™ Probe Storage Solution
Redi-Stor™ is the ideal solution for storing conductivity probes. It preserves the probe’s cleanliness, eliminates the growths found when storing in water alone, and keeps the probe ready for immediate use with no conditioning.
Cat. No. 4170 Redi-Stor™ Probe Storage Solution
Variable-Speed Pump
Compact, variable-flow, bi-directional, self-priming peristaltic pumps offer precise flow deliveries. Pumps are ideal for use with conductivity flow-thru cells, liquid chromatography, collecting fractions, circulating fluids or buffers in baths, and moving corrosive materials. They provide outstanding flow control and flexibility for transferring and dosing liquids. Fluid contacts only the tubing for contamination-free pumping.
Dial in the flow rates
Flow rates are from 0.005 milliliters per minute to 600 milliliters per minute. Variable-speed flow control and five different tubing sizes provide fine resolution with a wide flow range. The revolution of one roller delivers a precisely measured volume specific to the tubing size and motor speed.
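Because one roller revolution delivers a fixed, tubing-dependent volume, flow rate scales linearly with motor speed. A minimal sketch of that relationship; the per-revolution volume and speed below are hypothetical illustration values (the catalog does not publish them):

```python
def peristaltic_flow_ml_min(vol_per_rev_ml, rpm):
    # Flow scales linearly with motor speed because each revolution
    # displaces a fixed volume set by the tubing's inside diameter.
    return vol_per_rev_ml * rpm

# e.g. hypothetical tubing displacing 0.25 ml per revolution at 100 rpm:
print(peristaltic_flow_ml_min(0.25, 100))  # 25.0
```

Swapping among the five supplied tubing sizes changes the per-revolution volume, which is how one pump family spans flow ranges from 0.005 to 600 ml/minute.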
Pumps liquids and gases
Tubing may be used with fluid temperatures from -80 to 500°F (-62 to 260°C). For use with food, pharmaceuticals, and other critical solutions. The tubing may be sterilized by autoclave. Unit pumps liquids and gases. Pumping dry does not harm the pump. Pump has a purge/prime switch for high-speed emptying/filling. It also reverses at the touch of a switch for ease in draining tubing. Three rollers reduce flow pulsation, prevent siphoning, and eliminate the need for check valves. There are no valves to clog, no seals to leak.
Complete, ready-to-use
The 115-VAC CSA-approved wall power supply ensures that a safe 12 volts drive the pump motor. Comes with a battery connector for portable use with any 9 or 12-volt battery. Pump draws so little power it will run unattended for five months on a car battery. Supplied with silicone tubing, and polypropylene fittings/nipples, 115-VAC wall power supply, and an accessory battery connector (battery not supplied). Packaged in a chemical-resistant ABS plastic case. Size is 6⅝ x 4¾ x 4½ inches and weight is 1.25 pounds. One-year warranty.
| Cat. No. | Flow Type | Rate |
|----------|-----------|------|
| 3384 | Ultra-Low Flow | 0.005 to 0.900 ml/minute |
| 3385 | Low Flow | 0.03 to 8.20 ml/minute |
| 3386 | Medium Flow | 0.4 to 85.0 ml/minute |
| 3389 | Medium/High Flow | 4.0 to 600 ml/minute |
REPLACEMENT SETS OF TUBING
| Cat. No. | Description |
|----------|-----------------------------------------------------------------------------|
| 3370 | Replacement set of tubing and fittings/nipples for pump Cat. #3384 and #3385. Identical to the set supplied with the pump. Supplied with ⅛, ¼, ⅜, ½, and ¾-inch I.D. tubing and corresponding fittings/nipples. |
| 3377 | Replacement set of tubing and fittings/nipples for pump Cat. #3386. Identical to the set supplied with the pump. Supplied with ⅛, ¼, ⅜, ½, and ¾-inch I.D. tubing and corresponding fittings/nipples. |
| 3378 | Replacement set of tubing and fittings/nipples for pump Cat. #3389. Identical to the set supplied with the pump. Supplied with ¼, ⅜, ½, and ¾-inch I.D. tubing and corresponding fittings/nipples. |
REPLACEMENT TUBING ONLY
| Cat. No. | Description |
|----------|-----------------------------------------------------------------------------|
| 3360 | ⅛-inch I.D. Silicone tubing, 25 ft. (all pumps) |
| 3361 | ¼-inch I.D. Silicone tubing, 25 ft. (all pumps) Uses the identical fittings/nipples as the ⅛-inch tubing |
| 3362 | ⅜-inch I.D. Silicone tubing, 25 ft. (all pumps) |
| 3363 | ½-inch I.D. Silicone tubing, 25 ft. (all pumps) |
| 3364 | ¾-inch I.D. Silicone tubing, 25 ft. (all pumps) |
| 3365 | ¾-inch I.D. Silicone tubing, 25 ft. (Cat. #3384 and #3385) |
| 3366 | ¼-inch I.D. Silicone tubing, 25 ft. (Cat. #3386 and #3389) |
REPLACEMENT TUBING ASSEMBLIES
| Cat. No. | Description |
|----------|-----------------------------------------------------------------------------|
| 3371 | ⅛-inch I.D. Silicone tubing and fittings/nipples for Cat. #3384, #3385 and #3386 |
| 3372 | ¼-inch I.D. Silicone tubing and fittings/nipples for Cat. #3384, #3385 and #3386 |
| 3373 | ⅜-inch I.D. Silicone tubing and fittings/nipples (fitting color coded red inside) for Cat. #3384, #3385 and #3386 |
| 3374 | ½-inch I.D. Silicone tubing and fittings/nipples (fitting color coded blue inside) for Cat. #3384, #3385 and #3386 |
| 3375 | ¾-inch I.D. Silicone tubing and fittings/nipples for Cat. #3384 and #3385 |
| 3376 | ¼-inch I.D. Silicone tubing and fitting/nipples for Cat. #3386 |
| 3390 | ⅛-inch I.D. Silicone tubing and fittings/nipples for Cat. #3389 |
| 3391 | ¼-inch I.D. Silicone tubing and fittings/nipples for Cat. #3389 |
| 3392 | ⅜-inch I.D. Silicone tubing and fittings/nipples for Cat. #3389 |
| 3393 | ½-inch I.D. Silicone tubing and fittings/nipples for Cat. #3389 |
| 3394 | ¾-inch I.D. Silicone tubing and fittings/nipples for Cat. #3389 |
Variable-Flow Chemical Transfer Pump
Designed specifically for pulseless fluid transfer at variable flow rates. Pumps from 120 milliliters to 2.2 liters (4.2 ounces to 0.6 gallons) per minute. Pumps fluids with a viscosity to 200 centipoises. Suction lift is 10 feet wet, 4 inches dry. May be used with fluid temperatures from –40 to 200°F (–40 to 93°C). Barbed inlet/outlet ports use any type of tubing with an inside diameter (nominal 3/16 inches).
Direct-drive engineering provides maximum motor power to the pump. “Continuous-sweep” variable control provides precise, seamless speed control. The chemical-resistant, wetted parts are Dupont Delrin®, 304 stainless steel, Buna N, and Dupont Teflon®. Pump has a purge/prime switch for high-speed emptying/filling. It also reverses for ease in draining tubing.
Pump comes ready to use with 115 VAC UL-rated wall power supply. Packaged in a chemical-resistant ABS plastic case. Size is 8 x 4¾ x 4½ inches and weight is 3.8 pounds.
Cat. No. 3388 Variable-Flow Chemical Transfer Pump
Tight-Ties™
These cable ties feature a unique tapered tip for fast insertion and tightening. One-piece, self-locking ties are made of tough, chemical-corrosion-resistant, rustproof, nonconducting nylon. Applications include affixing tubing to pumps, bundling materials, hanging equipment.
Usable in wide temperature range
Use in the wide temperature range of –40 to 185°F (–40 to 85°C). Tensile strength is 12 to 50 pounds. When fastened, the diameters adjust from 1/16 to 4 inches.
Cat. No. 3285 Tight-Ties™ Assortment Kit
Includes 400 ties consisting of 5 sizes (see table below)
| Qty. | Length |
|------|--------|
| 100 ea. | 4 inches |
| 100 ea. | 5.6 inches |
| 100 ea. | 8 inches |
| 50 ea. | 11 inches |
| 50 ea. | 14 inches |
Clean-Wipes™
New universal clean-up tool
Premoistened Clean-Wipes™ clean up even the dirtiest parts. Non-woven cloths are moistened with a blend of pure, reagent-grade 70% isopropyl alcohol and 30% reagent-grade deionized water, or 100% pure reagent-grade deionized water. They are ideal for wiping optical parts, delicate glassware, electrodes, cuvettes, microscopes, or lenses. Wipes are also available dry, with no solution. This permits using any liquid by simply adding eight fluid ounces to the wipe canister. Instantly clean metals, plastics, glass, rubber, and epoxies. Use for cleaning electronics, computer screens, instruments, probes, glassware, syringes, pipettes, and cameras. Wipes are designed for all applications in environmentally sensitive areas.
Lint-free
Very strong, clean room grade, nonwoven cloth is soft and lint-free. Constructed of cellulose and polyester, it will not scratch surfaces. Meets all lab and production requirements for cleanliness, softness, absorbency, and strength. Long-lasting wipes never fall apart like paper towels.
Convenient
Individually dispensed wipes are handy and ready for instant use. Unique opening allows wipes to virtually glide out when pulled. Reclosable bench top container assures a clean wipe every time. Snap-on sealed top eliminates contamination from dust or dirt. Each canister is shrink-wrapped to maintain cleanliness during shipping. Wipes are 6 x 9 inches and packaged 100 per polyethylene canister.
Cat. No. 2060 Premoistened DI Clean-Wipes™ 100% pure reagent-grade deionized water
Cat. No. 2061 Dry Clean-Wipes™ User adds any solution
Cat. No. 2065 Premoistened Alcohol/DI Clean-Wipes™ 70% reagent-grade isopropyl alcohol and 30% reagent-grade deionized water
Probe Holder
Pays for itself by speeding up batch sampling and reducing probe breakage. NASA Space Shuttle engineering assures smooth and effortless operation. Performs like a robot arm in zero gravity.
Ideal for multiple conductivity or pH readings
Fingertip control raises, lowers, and pivots (360 degrees) the perfectly balanced electrode holder wherever desired. Moves in all directions, holds electrodes safely and securely in any selected position. Electrode arm articulates at three points so the electrodes always remain vertical.
Accepts all brands
It is designed to accept any standard glass, reference, combination, or other electrode—a temperature probe or conductivity probe. Weighted die-cast metal base and spring counterbalance permit fluid movement with superior stability. Beware of cheap plastic "look-alikes" that have jerky, uneven motions. Complete with 21-inch metal arm, 8-inch diameter metal base, and probe holder. Weighs 6 pounds.
Cat. No. 3090 Probe Holder
Traceable® Certificate
Traceable® conductivity calibration standards are provided with a Traceable® Calibration Certificate supplied from Control Company’s ISO/IEC 17025 calibration laboratory accredited by A2LA (The American Association for Laboratory Accreditation). The supplied Traceable® Certificate indicates that the solution is traceable to standards provided by the National Institute of Standards and Technology (NIST), a U.S. Government agency within the Commerce Department. All products are supplied from our ISO 9001 facility.
A Traceable® Certificate includes all of the information to meet today’s stringent accreditation demands, government specifications, and ISO 9000 requirements. This information includes analysis number, certificate number, item number, calibration test equipment, equipment serial number, equipment calibration due date, NIST reference number, uncertainty, test conditions, individual specific test data, expiration date, tester’s name, and signature of the metrology manager.
Traceable® is a registered trademark. The information, presentation, and completeness of data provided on the Traceable® Certificate are all protected under United States copyright laws.
ISO 9001 Quality-Certified
Control Company is an ISO 9001 Quality-Certified company. This provides you with the assurance that you are supplied with only the finest and most reliable products. Control Company is recognized worldwide for superb quality and innovative products. We are one of the world’s market leaders for digital equipment. Control Company is pleased to offer you the extra confidence that ISO 9001 Certification brings to every Control product. (DNV Certificate No. CERT-01805-2000-AQ-HOU-RAB)
ISO 17025 Cal Lab Accredited
Control Company is an ISO 17025 calibration laboratory accredited by the American Association for Laboratory Accreditation (A2LA), meeting the requirements of ISO/IEC 17025. This ensures that you are supplied accurate solutions, individually tested to be traceable to NIST, and a Traceable® Certificate (A2LA Certificate No. 17050.01).
Alkali- and nitrate-free synthesis of highly active Mg–Al hydrotalcite-coated alumina for FAME production†
Julia J. Creasey,a Alessandro Chieregato,b Jinesh C. Manayil,c Christopher M. A. Parlett,a Karen Wilsonc and Adam F. Lee*a,d
Mg–Al hydrotalcite coatings have been grown on alumina via a novel alkali- and nitrate-free impregnation route and subsequent calcination and hydrothermal treatment. The resulting Mg-HT/Al$_2$O$_3$ catalysts significantly outperform conventional bulk hydrotalcites prepared via co-precipitation in the transesterification of C$_4$–C$_{18}$ triglycerides for fatty acid methyl ester (FAME) production, with rate enhancements increasing with alkyl chain length. This promotion is attributed to improved accessibility of bulky triglycerides to active surface base sites over the higher area alumina support compared to conventional hydrotalcites wherein many active sites are confined within the micropores.
Introduction
Global energy consumption is predicted to rise from 550 EJ in 2020 to 865 EJ by 2040, placing a growing strain on existing fossil fuel reserves and driving controversial efforts to develop new engineering approaches to accessing recalcitrant hydrocarbons through e.g. fracking or bituminous ‘tar’ sand extraction. However, more environmentally friendly routes to (low cost) liquid transportation fuels are potentially available from biomass. For such ‘second generation’ bio-fuels to be sustainable, they should be sourced from non-edible crop components (e.g. stems, leaves and husks), forestry waste, algal sources, or alternative non-food plants such as switchgrass, *Miscanthus* or *Jatropha curcas*, which require minimal cultivation and neither compete with traditional arable land nor drive deforestation.
Biodiesel is a clean burning and biodegradable fuel which, when derived from non-food plant or algal oils or animal fats, is viewed as a viable alternative (or additive) to current petroleum-derived diesel. Commercial biodiesel is currently synthesised via liquid base catalysed transesterification of C$_{14}$–C$_{20}$ triacylglyceride (TAG) components of lipids with C$_1$–C$_2$ alcohols into fatty acid methyl esters (FAMEs) which constitute biodiesel, alongside glycerol by-product. While the use of higher (e.g. C$_4$) alcohols is also possible, and advantageous in respect of producing a less polar and corrosive FAME with reduced cloud and pour points, the current high cost of longer chain alcohols, and difficulties associated with separating the heavier FAME product from unreacted alcohol and glycerol, remain problematic.
The predominant liquid base catalysts employed in biodiesel synthesis are NaOH and KOH. Extraction of the biodiesel product and removal/neutralisation of the base catalysts is hampered by competing saponification and emulsification side reactions, but is essential to prevent corrosion of vehicle fuel tanks and injector systems. The attendant quenching and processing steps contaminate the glycerol by-product with alkali salts and water, rendering the former unusable as a commodity chemical for the food and cosmetics industry. Heterogeneous catalysts offer facile product separation, eliminating the requirement for such quenching steps and permitting process intensification via continuous biodiesel production, and are hence the subject of intensive academic and industrial research. Solid base catalysts such as hydrotalcites, alkaline earth oxides, and alkali-doped mesoporous silicas exhibit good activity for TAG transesterification to biodiesel. Dispersing alkali or alkaline earth elements over high surface area materials such as silica or alumina is a well-documented method to lower the cost and increase the stability of such solid base catalysts. High area supports permit good dispersions of a small amount of these catalytically active metals, and aid recovery of the resulting spent catalyst. Judiciously chosen porous supports can also ameliorate mass transport limitations inherent to heterogeneous catalysts in the...
liquid phase by improving the accessibility of reactants to in-pore active sites and accelerating product removal to the bulk solution.
Hydrotalcites \([M(II)_{1-x}M(III)_x(OH)_2]^{x+}(A_{x/n}^{n-})\cdot mH_2O\) are conventionally synthesised via co-precipitation from their nitrates using alkalis as both pH regulators and a carbonate source. This is problematic, since alkali residues may leach during transesterification, thereby contaminating the FAME product and mitigating the benefits of a solid versus soluble base catalyst. Alumina-supported hydrotalcites have been reported via co-precipitation routes employing a γ-alumina substrate (or Al-containing glass), by the hydrothermal reaction of alumina with brucite or Co, Mn or Ni nitrates, or by addition of an M(II) salt solution to alumina at near-neutral pH, causing the partial dissolution and release of aluminium cations and thereby forming a hydrotalcite coating. Some of these routes afford crystalline hydrotalcites; however, they provide little control over the morphology or intralayer porosity of such coatings. Furthermore, the most facile, low cost impregnation routes employ nitrate precursors and require high temperature (hydro)thermal processing, typically ~500 °C, which can promote competitive brucite and boehmite crystallisation. Davis and co-workers have shown that thermal processing and subsequent rehydration of conventionally co-precipitated Mg–Al hydrotalcites is critical to forming well-ordered brucite-like layers with a high density of Brønsted base sites, whose density is directly proportional to the rate constant for tributyrin transesterification. High temperature thermal treatment alone results in a mixed Mg–Al oxide spinel with few (Lewis) base sites, hence moderate temperature (100–400 °C) hydrothermal protocols are favoured in the synthesis of unsupported and supported Mg–Al hydrotalcites. Environmental considerations are also a powerful driver to eliminate the use of nitrate precursors in catalyst syntheses, due to their attendant contamination of wastewater streams and/or NO\(_x\) emissions.
In an attempt to overcome mass transport limitations in biodiesel synthesis from viscous oils in bulk microporous hydrotalcites, we have developed a new alkali- and nitrate-free hydrothermal route to tunable Mg–Al hydrotalcite coatings dispersed on alumina from a Mg(OCH\(_3\))\(_2\) precursor. The resulting materials exhibit turnover frequencies (TOFs) for the transesterification of short and long chain TAGs far exceeding those achievable over conventional hydrotalcites produced by co-precipitation, opening new possibilities for heterogeneously catalysed biodiesel production.
**Experimental**
**Catalyst synthesis**
Commercial γ-alumina (Degussa, 110 m\(^2\) g\(^{-1}\); 5 g) was dried at 80 °C for 1 h. To this, 21.8 cm\(^3\) of magnesium methoxide solution (Aldrich, 6–10 wt% in methanol) was added to form a homogeneous paste on mixing. After 15 min stirring, the mixture was dried under vacuum at 80 °C for 1 h to remove excess methanol and yield a 10 wt% Mg sample. In order to incorporate higher magnesium loadings, additional magnesium methoxide treatments were performed identically to the above, with each impregnation nominally adding 10 wt% Mg. The progressive decrease in pore volume of these magnesium-impregnated aluminas necessitated removal of excess solvent via rotary evaporation prior to drying in a vacuum oven.
The nominal 10 wt% Mg, 20 wt% Mg, 40 wt% Mg and 50 wt% Mg samples (~500 mg yield each) were calcined at 450 °C for 15 h under 20 mL min\(^{-1}\) O\(_2\) (ramp rate 1 °C min\(^{-1}\)). After cooling to room temperature under N\(_2\) (20 mL min\(^{-1}\)), powdered samples were added to a 100 mL Ace round-bottomed, glass pressure vessel containing deionised water (50 cm\(^3\) per 300 mg of impregnated alumina) and heated to 125 °C with stirring for 21 h. After cooling the flasks to room temperature, the final samples (designated Mg-HT/Al\(_2\)O\(_3\)) were filtered, washed with deionised water, and dried in a vacuum oven overnight at 80 °C and stored in a desiccator. Conventional, hydrotalcite reference materials (ConvHTs) were prepared via our alkali-free co-precipitation method from Mg(NO\(_3\))\(_2\)\(\cdot\)6H\(_2\)O and Al(NO\(_3\))\(_3\)\(\cdot\)9H\(_2\)O precursors, with Mg:Al atomic ratios varying between 0.5 : 1 and 2 : 1.
**Materials characterisation**
Nitrogen porosimetry was undertaken on Quantachrome Nova 1200 and Autosorb porosimeters. Samples were degassed at 120 °C for 2 h prior to analysis. Multi-point BET surface areas were calculated over the relative pressure range 0.01–0.3. Pore diameters and volumes were calculated by applying the HK or BJH methods to the desorption isotherm for relative pressures <0.02 and >0.35 respectively. Powder XRD patterns were recorded on a PANalytical X’pertPro diffractometer fitted with an X’celerator detector and Cu K\(_{\alpha}\) source for 2\(\theta\) = 10–80° with a step size of 0.02°. The Scherrer equation was used to calculate HT crystallite sizes. XPS was performed on a Kratos Axis HSi X-ray photoelectron spectrometer fitted with a charge neutraliser and magnetic focusing lens, employing monochromated Al K\(_{\alpha}\) radiation (1486.7 eV). Spectral fitting was performed using CasaXPS version 2.3.15. Binding energies were corrected to the C 1s peak at 284.5 eV. Base site densities were measured via CO\(_2\) pulse chemisorption and subsequent temperature-programmed desorption (TPD) on a Quantachrome ChemBET 3000 system coupled to an MKS Minilab QMS. Samples were outgassed at 120 °C under flowing He (120 ml min\(^{-1}\)) for 1 h prior to CO\(_2\) titration at 40 °C and subsequent desorption under a temperature ramp of 8 °C min\(^{-1}\). EDX analysis was carried out on an Oxford Instruments EVO SEM utilising Inca software. Prior to analysis, samples were uniformly dispersed over a carbon disc on an aluminium stub and sputter-coated with a 90 : 10 mixture of gold and palladium to minimise charging.
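The Scherrer estimate used above for the HT crystallite sizes can be sketched as follows; the FWHM value is an assumed, illustrative number (it is not reported in the text), while the Cu K\(_{\alpha}\) wavelength and shape factor K = 0.9 are standard defaults.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Volume-averaged crystallite size (nm) from XRD line broadening:
    D = K*lambda / (beta * cos(theta)), with the FWHM beta in radians
    and theta the Bragg angle (half of 2theta)."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# An assumed FWHM of 0.27 deg on the d(003) reflection at 2theta = 11.6 deg
# corresponds to a crystallite size of roughly 30 nm, the scale reported
# for the Mg-HT/Al2O3 coatings.
size = scherrer_size(fwhm_deg=0.27, two_theta_deg=11.6)
```

In practice instrumental broadening would be subtracted from the measured FWHM before applying the formula; that correction is omitted here for brevity.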
**Transesterification**
Transesterification was performed using a Radleys Starfish parallel reactor at 60 °C. Glass round-bottomed flasks were charged with 10 mmols of individual saturated TAGs C\(_3\)H\(_5\)(OOR)\(_3\) (R = C\(_4\) and C\(_8\)) or the unsaturated glyceryl
trioleate (Aldrich, 98%) in methanol (12.5 mL, 170 mmol), with dihexyl ether (0.0025 mol; Aldrich, 97%) as an internal standard. 18.5 wt% butanol was added to ensure complete TAG solubility (35 wt% for the glyceryl trioleate). Reactions were performed in air using 50 mg of catalyst. Aliquots were periodically withdrawn and filtered prior to detailed analysis of TAG conversion and FAME production on a Varian 450 GC with an 8400 autosampler. C₄–C₈ TAGs and reaction products were analysed using a Zebron Inferno ZB-5HT capillary column (15 m × 0.32 mm i.d., 0.1 μm film thickness), while triolein and associated products were analysed via on-column injection on a CP-simdist wide-bore column (10 m × 0.53 mm, 0.1 μm film thickness) with a temperature-programmed injector. The maximum conversion of tributyrin in the absence of any catalyst, or in the presence of the bare alumina, was <4% under our mild reaction conditions, falling below the limits of detection (±1%) for tricaprylin and triolein. Initial rates were calculated from the linear portion of the conversion profile during the first 60 min of reaction. Percentage FAME selectivity is defined as [FAME]/([DAG]+[MAG]+[FAME]) × 100, where DAG and MAG are diglyceride and monoglyceride intermediates. TOFs were determined by normalising initial rates to the corresponding base site density of each sample. GC chromatograms evidenced only trace butyl esters under our reaction conditions, amounting to 0.3–0.5% of the total methyl esters formed, suggesting that low temperature TAG transesterification by butanol has negligible impact on our reported TAG conversions.
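The two derived quantities defined above can be sketched as follows. This is a minimal illustration: the aliquot data in the usage example are hypothetical, chosen only to reproduce an initial rate of 0.66 mmol min⁻¹ g⁻¹ for a 0.05 g catalyst charge.

```python
def fame_selectivity(fame, dag, mag):
    """Percentage FAME selectivity = [FAME]/([DAG]+[MAG]+[FAME]) x 100
    (concentrations in any one consistent unit)."""
    return 100.0 * fame / (dag + mag + fame)

def initial_rate(times_min, converted_mmol, cat_mass_g):
    """Initial rate (mmol min^-1 g^-1): least-squares slope of the
    linear portion of the conversion profile, per gram of catalyst."""
    n = len(times_min)
    mt = sum(times_min) / n
    mc = sum(converted_mmol) / n
    slope = (sum((t - mt) * (c - mc)
                 for t, c in zip(times_min, converted_mmol))
             / sum((t - mt) ** 2 for t in times_min))
    return slope / cat_mass_g

# Hypothetical aliquots over the first 60 min for a 0.05 g charge:
rate = initial_rate([0, 20, 40, 60], [0.0, 0.66, 1.32, 1.98], 0.05)
```

Fitting the slope rather than taking a single two-point difference is slightly more robust to scatter in individual GC aliquots.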
Results and discussion
Characterisation
The magnesium content of the Mg-HT/Al₂O₃ samples was first quantified by EDX, which showed a systematic increase from 5 wt% to 17 wt% across the series. These values are significantly lower than the nominal Mg loading added during synthesis which we attribute to coincident hydroxide and water incorporation during grafting. XRD patterns of the materials reveal a common set of reflections at 11.6°, 23.4°, 35°, 39.6°, 47.1°, and 61.1° characteristic of Mg–Al hydrotalcites, in good agreement with those observed for the co-precipitated HT standard (Fig. 1). Volume-averaged crystallite sizes determined from line broadening using the Scherrer equation were similar for all samples (Table 1) at around 30 nm, but significantly larger than that derived for the conventionally prepared (unsupported) Mg–Al hydrotalcite of 6 nm. This shows that the hydrotalcite phase present in Mg-HT/Al₂O₃ exhibits longer range order, likely reflecting its extended hydrothermal treatment compared to the less aggressive vapour phase rehydration method used to prepare the conventional HT. For example, low temperature (liquid phase) rehydration is more effective in crystallising unsupported hydrotalcites than higher temperature (vapour phase) rehydration (Fig. S1†), although the surface area and accessibility of Brønsted base sites is generally greater following vapour phase rehydration treatment. Interlayer spacings for Mg-HT/Al₂O₃ samples calculated from the d(003) and d(006) reflections were consistent with a hydroxide-intercalated hydrotalcite structure. There was no evidence for brucite in any Mg-HT/Al₂O₃ sample or the conventionally prepared hydrotalcite, however a weak reflection at 42.6° was indicative of a small contribution from MgO.
The intensity of hydrotalcite reflections increased linearly with Mg loading across the Mg-HT/Al₂O₃ series (Fig. 2), indicating that magnesium is exclusively incorporated into hydrotalcite phases and not e.g. undesired brucite or additional MgO. The relative intensities of hydrotalcite reflections from all the Mg-HT/Al₂O₃ materials were very similar to that of the 2:1ConvHT reference, indicating they possess similar, three-dimensional crystallite morphologies (Table S1†).
In order to calculate the composition of the hydrotalcite present within our Mg-HT/Al₂O₃ series, Vegard’s law was first applied to quantify the relationship between the lattice parameter and Mg:Al ratio of pure, nanocrystalline hydrotalcites prepared via conventional co-precipitation (0.5 : 1ConvHT–2 : 1ConvHT). As anticipated, the bulk Mg:Al atomic ratio determined by EDX varied linearly with lattice parameter for the reference materials (Fig. 3). This relationship was utilised in conjunction with the XRD-derived lattice parameters from Table 1 to calculate the nominal Mg:Al ratio within the hydrotalcite phase for each Mg-HT/Al₂O₃ sample without interference from the alumina support. The resulting Mg:Al ratios for Mg-HT/Al₂O₃ show only a small increase with Mg content, remaining close to the 2:1 ratio most commonly observed for co-precipitated hydrotalcites, wherein crystallites are most ordered, possessing a honeycomb structure with each Mg²⁺ ion surrounded by three Mg²⁺ and three Al³⁺ octahedra, and each Al³⁺ ion surrounded by six Mg²⁺ octahedra. This equates to a molecular formula of \([Mg_{0.66}Al_{0.33}(OH)_2]^{0.33+}[(CO_3^{2-})_{0.17}]\cdot mH_2O\). Since the hydrotalcite composition remains essentially unchanged...
Table 1 Structural properties of hydrotalcite Mg–HT/Al$_2$O$_3$ materials
| Mg loading$^a$/wt% | HT crystallite size$^b$/nm | HT interlayer spacing $d$/nm | HT lattice parameter $a$/Å | Mg : Al ratio$^c$ |
|-------------------|-----------------------------|-------------------------------|---------------------------|------------------|
| 5 | 27 ± 2.2 | 0.76 | 3.046 | 1.79 : 1 |
| 9 | 33 ± 2.6 | 0.76 | 3.050 | 1.90 : 1 |
| 14 | 36 ± 2.9 | 0.76 | 3.052 | 2.13 : 1 |
| 17 | 31 ± 2.5 | 0.77 | 3.051 | 2.08 : 1 |
$^a$ Bulk content from EDX. $^b$ XRD line broadening from Scherrer equation. $^c$ Calculated from Vegard’s law.
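The Vegard’s law analysis behind the Mg : Al column of Table 1 can be sketched as follows: fit a line to reference (lattice parameter, Mg:Al ratio) pairs for the co-precipitated ConvHTs, then evaluate it at the XRD-derived lattice parameter of a supported sample. The four calibration pairs below are hypothetical illustrations, not the experimental data behind Fig. 3.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope m and intercept b for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical ConvHT calibration points (illustrative only):
a_ref = [3.022, 3.036, 3.044, 3.050]   # lattice parameter a / Angstrom
ratio_ref = [0.5, 1.0, 1.5, 2.0]       # bulk Mg:Al atomic ratio from EDX
m, b = linear_fit(a_ref, ratio_ref)

def mg_al_ratio(a_lattice):
    """Estimated Mg:Al ratio of the supported HT phase from its
    XRD-derived lattice parameter, via the inverted calibration."""
    return m * a_lattice + b
```

With this (assumed) calibration, a lattice parameter near 3.05 Å maps to a ratio close to 2 : 1, the regime reported in Table 1.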
Fig. 2 Intensity of $d(003)$ reflection of Mg–Al hydrotalcite phase as a function of bulk Mg loading.
Fig. 3 Lattice parameter versus experimental Mg : Al atomic ratio for co-precipitated Mg–Al hydrotalcites (ConvHTs), and theoretical Mg : Al ratio derived for Mg–HT/Al$_2$O$_3$.
N$_2$ porosimetry (Fig. S2†) reveals that the BET surface areas of Mg–HT/Al$_2$O$_3$ are comparable to that of the alumina support at low Mg loadings, but decrease above 9 wt% Mg, although they remain approximately twice that of the pure 2:1ConvHT (Table 2). The BJH pore volumes for the Mg–HT/Al$_2$O$_3$ series are significantly higher than that of the parent alumina support, but likewise fall at high Mg loadings. We hypothesise that hydrotalcite crystallites initially nucleate widely spaced over the alumina surface, creating intercrystallite mesoporous voids; as the number of (similarly sized) hydrotalcite crystallites rises with consecutive impregnation cycles, these interparticle voids are eliminated. The mean pore diameter may also rise due to blockage of micro- and smaller mesopores in the alumina support by preferential hydrotalcite crystallisation at such pore entrances. Thermal analysis of Mg–HT/Al$_2$O$_3$ samples showed the expected weight losses due to desorption of interlayer hydroxide anions (Fig. S3†), which increased with Mg loading, consistent with their greater hydrotalcite content seen by XRD.
Surface basicity of Mg–HT/Al$_2$O$_3$ was assessed via CO$_2$ TPD of the pre-saturated materials. Fig. 4 shows that all supported hydrotalcites possess significantly lower base site densities than the co-precipitated 2:1ConvHT reference. However, in contrast to the pure hydrotalcite, which exhibits only a single well-defined desorption peak at ~350 °C, all the Mg–HT/Al$_2$O$_3$ samples display two distinct CO$_2$ desorptions. The low temperature desorption (centred at ~300 °C) is assigned to bicarbonate species formed at surface hydroxide anions exposed on the external surface of hydrotalcite crystallites and the parent alumina.$^{50,61}$ These are weaker bases than the interlayer hydroxide anions,$^{62}$ hence the higher temperature feature (>370 °C) is assigned to CO$_2$ bound between the brucite-like sheets.$^{63}$ The desorption areas, and hence densities, of both types of base site present within Mg–HT/Al$_2$O$_3$ increase with Mg content (Table 3), consistent with the increased hydrotalcite formation apparent by XRD and TGA. The desorption peak maximum for interlayer bicarbonate shifts to lower temperature with increasing Mg content, converging towards that of the 2:1ConvHT for the 17 wt% Mg–HT/Al$_2$O$_3$ sample. We attribute the higher initial desorption temperature to contributions from a disordered MgO phase at the alumina interface, as evidenced by XPS in the following section.
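The conversion of a TPD desorption peak into a base site density can be sketched as follows. This assumes the QMS signal has already been calibrated to mol CO$_2$ per minute and that each desorbed CO$_2$ titrated one base site; the triangular trace in the usage example is hypothetical, not measured data.

```python
def trapz(ys, xs):
    """Trapezoidal integration of y over x."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

def base_site_density(t_min, signal_mol_per_min, sample_mass_g):
    """Base sites per gram: integrated CO2 uptake converted to
    molecules (via Avogadro's number) and normalised to sample mass."""
    N_A = 6.022e23
    return trapz(signal_mol_per_min, t_min) * N_A / sample_mass_g

# A hypothetical 10 min peak desorbing 1e-7 mol CO2 from a 50 mg sample
# gives ~1.2e18 sites per gram, the order of magnitude seen in Table 3.
density = base_site_density([0, 5, 10], [0.0, 2e-8, 0.0], 0.05)
```

For the two-peak Mg–HT/Al$_2$O$_3$ traces, the same integration would simply be applied separately to the deconvoluted external and interlayer features.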
Further insight into the Mg–HT/Al$_2$O$_3$ surface composition was obtained from XPS. Fig. 5 shows the resulting background subtracted, fitted Al 2p and Mg 2s XP spectra as a function of bulk Mg content, alongside pure alumina and MgO reference compounds. Considering the Al 2p spectra of the parent alumina first, two distinct sets of spin–orbit split...
Table 2 $N_2$ porosimetry data for Mg–HT/Al$_2$O$_3$ and 2 : 1ConvHT and parent Al$_2$O$_3$ support references
| Material | BET surface area/m$^2$ g$^{-1}$ | BJH pore volume/cm$^3$ g$^{-1}$ | Average BJH pore diameter/nm |
|----------------|---------------------------------|---------------------------------|------------------------------|
| Al$_2$O$_3$ | 110 ± 11 | 0.23 ± 0.02 | 1.2 ± 0.1 |
| 5 wt% | 119 ± 12 | 0.81 ± 0.10 | 21 ± 3 |
| 9 wt% | 113 ± 11 | 0.75 ± 0.09 | 26 ± 5 |
| 14 wt% | 90 ± 9 | 0.59 ± 0.07 | 26 ± 5 |
| 17 wt% | 88 ± 9 | 0.57 ± 0.07 | 17 ± 2 |
| 2:1ConvHT | 48 ± 5 | 0.21 ± 0.01 | 3.4 ± 0.4 |
Fig. 4 CO$_2$ TPD profiles for Mg–HT/Al$_2$O$_3$ series and 2 : 1 ConvHT reference as a function of bulk Mg loading.
Fig. 5 (Left) Al 2p and (Right) Mg 2s XP spectra of Mg–HT/Al$_2$O$_3$ series as a function of bulk Mg loading and pure Al$_2$O$_3$ and MgO references.
doublets are apparent, with 2$p_{3/2}$ binding energies (BE) of 73.8 and 74.7 eV, attributed to octahedral and tetrahedral Al$^{3+}$ sites respectively within the underlying $\gamma$-Al$_2$O$_3$ support,\textsuperscript{64} in the expected ~2 : 1 ratio for a defective spinel structure.\textsuperscript{65} Magnesium impregnation results in the appearance of a new doublet at 73.5 eV, whose intensity increases monotonically with Mg loading and which we assign to the hydrotalcite phase. Coincident attenuation of the alumina features demonstrates that hydrotalcite crystallites coat the support surface, presumably via the dissolution and reaction of aluminium cations as previously hypothesised from EXAFS studies.\textsuperscript{50,51} The Mg 2s XP spectra of Mg–HT/Al$_2$O$_3$ materials reveal a high BE component at 87.9 eV characteristic of MgO,\textsuperscript{65} and a second component at 88.5 eV which grows with Mg loading and is likewise assigned to hydrotalcite formation.
Attenuation of the underlying alumina XP signal at 74.7 eV relative to the summed hydrotalcite XP signals (Al 2$p_{3/2}$ at 73.5 eV and Mg 2s at 88.5 eV) is directly proportional to the Mg content (Fig. 6), indicating that successive magnesium additions produce new hydrotalcite crystallites over exposed patches of the support, resulting in a conformal coating rather than a rough/porous three-dimensional film. This is consistent with the loss of (intercrystallite) mesopore voids at higher Mg loadings seen in Table 2. The proportion of surface magnesium incorporated into the [Mg$_{0.66}$Al$_{0.33}$(OH)$_2$]$^{0.33+}$[(CO$_3^{2-}$)$_{0.17}$]$\cdot m$H$_2$O hydrotalcite phase
Table 3 Base site densities for Mg–HT/Al$_2$O$_3$ and 2 : 1ConvHT reference determined via CO$_2$ TPD analysis
| Material | External density/sites g$^{-1}$ | External $T_{\text{max}}$$^a$/$^\circ$C | Interlayer density/sites g$^{-1}$ | Interlayer $T_{\text{max}}$$^a$/$^\circ$C | Total density/sites g$^{-1}$ |
|----------------|---------------------------|-------------------------------------|----------------------------|--------------------------------------|------------------------|
| 5 wt% | $1.03 \times 10^{18}$ | 283.9 | $3.66 \times 10^{18}$ | 397.6 | $4.69 \times 10^{18}$ |
| 9 wt% | $1.12 \times 10^{18}$ | 297.1 | $5.30 \times 10^{18}$ | 397.2 | $6.43 \times 10^{18}$ |
| 14 wt% | $1.63 \times 10^{18}$ | 314.0 | $1.09 \times 10^{19}$ | 391.5 | $1.25 \times 10^{19}$ |
| 17 wt% | $3.77 \times 10^{18}$ | 291.0 | $1.82 \times 10^{19}$ | 374.6 | $2.20 \times 10^{19}$ |
| 2 : 1ConvHT | — | — | $8.55 \times 10^{19}$ | 349.8 | $8.55 \times 10^{19}$ |
$^a$ Experimental error ±0.2 °C.
thus rises from 38% to 64% across the Mg–HT/Al$_2$O$_3$ series. Attenuation of the alumina XP signal can also be used to estimate the fractional coverage of the hydrotalcite coating. Since the mean hydrotalcite crystallite size of ~30 nm is sufficient to fully screen any contribution from the underlying support, the remaining alumina XP signal detected must arise from exposed areas. The surface coverage of the 17 wt% Mg–HT/Al$_2$O$_3$ hydrotalcite coating is around 0.55 of a monolayer, similar to that estimated from the parent alumina surface area and the surface density of Mg atoms within a 2:1 Mg–Al hydrotalcite phase (Table S2†). Scheme 1 summarises the proposed growth mode of the hydrotalcite coating over alumina.
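The coverage estimate described above can be sketched as follows: because the ~30 nm hydrotalcite crystallites fully screen photoemission from the support, the residual alumina Al 2p signal is taken to scale with the exposed support area. The intensities in the usage example are illustrative, normalised values, not the experimental data.

```python
def fractional_coverage(i_alumina, i_alumina_bare):
    """Fractional monolayer coverage of the HT coating from the
    attenuated alumina XP signal, assuming full screening by the
    overlayer so exposed area is proportional to residual intensity."""
    return 1.0 - i_alumina / i_alumina_bare

# A sample retaining 45% of the bare-alumina Al 2p signal implies
# ~0.55 of a monolayer, matching the figure quoted for the
# 17 wt% Mg-HT/Al2O3 coating.
theta = fractional_coverage(0.45, 1.0)
```

A thinner or semi-transparent overlayer would instead require an exponential attenuation model using the photoelectron inelastic mean free path; the simple form above relies on the full-screening assumption stated in the text.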
**Catalytic transesterification**
The efficacy of our Mg–HT/Al$_2$O$_3$ materials for FAME production was evaluated *via* the transesterification of increasingly bulky TAGs, from tributyrin (C$_4$) through to glyceryl trioleate (C$_{18}$), with methanol under mild conditions. Reaction profiles for the resulting FAME production are shown in Fig. 7 for the highest loading 14 wt% and 17 wt% Mg–HT/Al$_2$O$_3$, alongside the conventionally prepared 2:1ConvHT material. Two reaction regimes were observed for all catalysts and substrates: rapid esterification during the initial 50–200 min of reaction, wherein the FAME yield increases linearly with time, followed by a slower phase with TAG conversion reaching a plateau between 26 and 55%.
Table 4 compares the initial rates of TAG conversion (determined directly by GC analysis and not inferred from FAME yields) and limiting conversion and selectivity after 24 h reaction across the Mg–HT/Al$_2$O$_3$ series. Note that the low loading 5 wt% and 9 wt% Mg–HT/Al$_2$O$_3$ were not tested in triolein transesterification since their low base site densities prohibited accurate conversion measurements during the early stage of reaction. The absolute initial rate increased almost linearly with Mg loading, closely mirroring the rise in total and interlayer base site densities. Despite the 50 mg 2:1ConvHT catalyst charge comprising pure hydrotalcite with a high base site density, the associated initial rate of TAG conversion was comparable to that of the 17 wt% Mg–HT/Al$_2$O$_3$ catalyst. Resulting Turnover Frequencies for the coated aluminas are thus far superior to that of the co-precipitated reference catalyst, offering three- (C$_4$/C$_8$) to ten-fold (C$_{18}$) rate enhancements (Fig. 8). This indicates that the majority of active sites in the 2:1ConvHT reference do not participate in esterification, even though *individual crystallites* are significantly more highly dispersed (6 nm) and afford a far higher density of base sites accessible by CO$_2$ than those in the coated aluminas (~30 nm). Nanocrystallite aggregation during the conventional hydrotalcite preparation seems the likely culprit for its poorer performance.
TOFs for Mg–HT/Al$_2$O$_3$ were almost identical whether calculated per base site or per interlayer base site, and crucially, were independent of Mg loading for all TAGs (Fig. 8). The latter observation is consistent with our model of a two-dimensional (nanocrystalline) hydrotalcite coating spreading over the alumina support, rather than three-dimensional growth at higher Mg loadings, which would impede TAG diffusion and access to active base sites, lowering the apparent TOFs. Indeed, absolute TOF values for the Mg–HT/Al$_2$O$_3$ catalysts are comparable to those recently reported employing a (pure) macroporous Mg–Al hydrotalcite to overcome mass-transport limitations even for bulky triglycerides. Since the proportion of surface MgO and hydrotalcite varies with loading (Fig. 6), the observation of a common TOF value for the C$_4$ and C$_8$ TAGs suggests either that both phases have the same intrinsic activity towards transesterification, or that only the hydrotalcite coating participates in reaction; as mentioned above, the absolute TOF values of 10–20 min$^{-1}$ are in excellent agreement with literature values for hydrotalcites, and an order of magnitude greater than expected for MgO, hence we favour the latter hypothesis. Observation of a constant TOF when normalising rates to the (more strongly basic) interlayer OH$^-$ density suggests that these are the active sites responsible for transesterification, rather than the weaker hydroxyls on the external surface of hydrotalcite crystallites (for which a volcano dependence of TOF on loading is obtained).

**Scheme 1** Growth of hydrotalcite coating over alumina support.
Fig. 7 FAME productivity via the transesterification of tributyrin, tricaprylin and triolein with methanol at 60 °C over Mg–HT/Al$_2$O$_3$ and 2 : 1ConvHT catalysts.
Table 4 Catalytic transesterification performance of Mg–HT/Al$_2$O$_3$ and 2 : 1ConvHT catalysts as a function of bulk Mg loading and TAG chain length
| Catalyst | Initial rate/mmol min$^{-1}$ g$^{-1}$ (C$_4$) | Conversion$^{b}$/% (C$_4$) | FAME selectivity$^{b,c}$/% (C$_4$) | Initial rate/mmol min$^{-1}$ g$^{-1}$ (C$_8$) | Conversion$^{b}$/% (C$_8$) | FAME selectivity$^{b,c}$/% (C$_8$) | Initial rate/mmol min$^{-1}$ g$^{-1}$ (C$_{18}$) | Conversion$^{b}$/% (C$_{18}$) | FAME selectivity$^{b,c}$/% (C$_{18}$) |
|---|---|---|---|---|---|---|---|---|---|
| 2:1ConvHT | 0.78 ± 0.01 | 42 | 43 | 0.42 ± 0.13 | 30 | 54 | 0.026 ± 0.01 | 16 | 67 |
| 5 wt% Mg | 0.15 ± 0.01 | 13 | 7 | 0.10 ± 0.02 | 5 | 14 | n/a | n/a | n/a |
| 9 wt% Mg | 0.21 ± 0.03 | 14 | 8 | 0.16 ± 0.03 | 5 | 18 | n/a | n/a | n/a |
| 14 wt% Mg | 0.40 ± 0.02 | 19 | 14 | 0.30 ± 0.05 | 10 | 25 | 0.024 ± 0.002 | 4 | 26 |
| 17 wt% Mg | 0.66 ± 0.03 | 25 | 20 | 0.49 ± 0.09 | 15 | 40 | 0.042 ± 0.004 | 6 | 42 |
$^a$ GC analysis of TAG after 24 h reaction at 60 °C. $^b$ 0.05 g catalyst, and MeOH : TAG = 30 : 1. $^c$ GC analysis, error of 1.5%.
Fig. 8 TOF values for Mg–HT/Al$_2$O$_3$ catalysts compared to a 2 : 1ConvHT reference catalyst as a function of bulk Mg loading and TAG chain length.
This conclusion is also in accordance with the other key finding from Fig. 8, namely the decrease in TOF for each Mg–HT/Al$_2$O$_3$ with alkyl chain length, from 19 min$^{-1}$ (C$_4$) to 9 min$^{-1}$ (C$_8$) to 1 min$^{-1}$ (C$_{18}$); access to base sites within the microporous interlayers is expected to fall significantly as the molecular size of the TAG increases.
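The TOF normalisation can be sketched from the tabulated data: an initial rate (Table 4) is converted to molecules per minute per gram and divided by the CO$_2$-titrated base site density (Table 3). Only the unit handling is illustrated; no new data are assumed.

```python
def tof_per_min(rate_mmol_min_g, sites_per_g):
    """Turnover frequency in min^-1: TAG molecules converted per minute
    per gram (rate in mmol min^-1 g^-1, via Avogadro's number),
    divided by base sites per gram."""
    N_A = 6.022e23
    return rate_mmol_min_g * 1e-3 * N_A / sites_per_g

# 17 wt% Mg-HT/Al2O3 with tributyrin: 0.66 mmol min^-1 g^-1 over a
# total base site density of 2.20e19 g^-1 gives ~18 min^-1, consistent
# with the ~19 min^-1 C4 value read from Fig. 8.
tof = tof_per_min(0.66, 2.20e19)
```

Normalising instead to the interlayer density alone (1.82 × 10¹⁹ g⁻¹) gives a proportionally higher value, which is why the text notes the two normalisations track each other across the series.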
Selectivity to the desired FAME product increases with TAG conversion in all cases, as expected, since more active catalysts are likely to favour esterification of the diglyceride (DAG) and monoglyceride (MAG) intermediates (Fig. S4†). The lower selectivity of the Mg–HT/Al$_2$O$_3$ catalysts simply reflects their lower conversions relative to conventional, pure hydrotalcites (unsurprising, since they contain far fewer base sites), and hence their greater yield of intermediate DAGs and MAGs, which are precursors to the desired FAME product. Lower selectivity is therefore not a result of alternative side-products or subsequent reaction of FAME. Rather, as for any sequential reaction, the higher the initial TAG conversion (and thus the greater the concentration of reactive intermediates), the greater the probability that DAG and MAG liberated into the reaction medium will compete effectively with the TAG feedstock to re-adsorb and react further at surface base sites, the prerequisite for FAME production. However, Table 4 also reveals that for all catalysts FAME selectivity increases with TAG chain length, e.g. from 20% to 42% for the 17 wt% Mg–HT/Al$_2$O$_3$. We suggest this relates to the increasingly poor solubility of the heavier DAG/MAG intermediates in the methanol–butanol solvent, and hence their longer residence time within the HT interlayers at crystallite edges and consequent propensity to undergo consecutive esterification reactions. In contrast, the highly soluble di- and mono-butyrin are readily solubilised in the alcoholic bulk medium, resulting in poor FAME selectivities.
Stability of the active HT phase within Mg–HT/Al$_2$O$_3$ catalysts was assessed by bulk and surface analysis following recovery via hot filtration and methanol washing (50 cm$^3$) after a 24 h tributyrin transesterification. EDX showed no change in the Mg : Al ratios for any loading, suggesting minimal Mg leaching
during reaction. XRD revealed the hydrotalcite structure was preserved in all cases, with negligible change in the interlayer spacing post-reaction, although crystallite sizes decreased slightly (Fig. S5†). The HT Mg : Al ratio also exhibited a small decrease, e.g. from 2.08 : 1 to 1.87 : 1 for 17 wt% Mg–HT/Al$_2$O$_3$, suggesting a small amount of aluminium was incorporated into the hydrotalcite coating during esterification. The latter conclusion is supported by the higher intensity of HT versus alumina reflections post-reaction, whose ratio increases by $\sim$120 ± 30% across the coated aluminas. This surprising observation, that the spent catalyst contains more of the desired active hydrotalcite phase than the fresh material, was further supported by XPS. Fig. 9 plots the mean change in the Mg 2s and Al 2p derived HT surface populations (as a function of Mg loading) following tributyrin esterification. All Mg–HT/Al$_2$O$_3$ catalysts expose significantly more hydrotalcite post-reaction, at the expense of MgO and alumina, which we suggest react \textit{in situ} via ion-exchange under the mild, solvothermal conditions. This enhancement is smaller at higher Mg loadings, wherein the freshly prepared surface HT coatings already encapsulate more of the alumina support (Fig. 6).
In light of the preceding observation that XPS indicates no degradation of the hydrotalcite coating in spent catalysts, we examined the catalytic stability of the 17 wt% Mg–HT/Al$_2$O$_3$ material towards tributyrin transesterification under repeated re-use (Fig. 10). The spent catalyst was simply filtered and washed with 80 cm$^3$ of methanol after each reaction to remove any reversibly adsorbed TAG or products, dried at 80 °C in air, and then re-introduced to the reactor with a fresh tributyrin/methanol charge without further pretreatment. This rapid, low-cost and energy-efficient regeneration protocol proved effective, with only a 10% drop in activity after the first reaction, and no further change between the second and third recycles. We attribute this small, one-off drop to site-blocking of the strongest base sites by strongly bound carboxylate residues which cannot be removed by our extremely mild solvent wash between cycles. It is likely that recalcination/rehydration of spent catalysts would suffice to reverse this small deactivation completely.
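The re-use profile described above (a single drop of about 10%, then stable activity) is consistent with irreversible blocking of a fixed fraction of the strongest base sites during the first cycle only, rather than a loss that compounds on every cycle. A trivial numerical sketch of the two scenarios, where the 10% figure comes from the text and the functional forms are assumptions:

```python
# One-off site-blocking picture: a fixed fraction f of the strongest base
# sites is irreversibly blocked during the first reaction, and every later
# cycle runs on the remaining sites. The 10% figure is from the text; the
# functional forms below are illustrative assumptions.
def one_off(cycle, f=0.10):
    # blocking occurs only during the first use
    return 1.0 if cycle == 1 else 1.0 - f

def compounding(cycle, f=0.10):
    # hypothetical alternative: losing 10% of remaining activity every cycle
    return (1.0 - f) ** (cycle - 1)

cycles = (1, 2, 3)
print([one_off(c) for c in cycles])                 # one-off: full, then steady 90%
print([round(compounding(c), 3) for c in cycles])   # compounding: keeps declining
```

The observed constancy of activity between the second and third recycles matches the one-off pattern, not the compounding one, supporting the site-blocking interpretation.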
**Conclusions**
A uniform and tunable coating of Mg–Al hydrotalcite nanocrystallites has been grown over amorphous alumina \textit{via} an environmentally-friendly route employing impregnation and subsequent hydrothermal processing of magnesium methoxide, without recourse to alkali- or nitrogen-containing precursors. The hydrotalcite coating has a constant Mg : Al stoichiometry of 2 : 1 and an interlayer spacing of $\sim$1 nm, and wets the alumina support with a coverage proportional to the magnesium concentration. Chemisorption measurements reveal two distinct base sites: minority, weakly basic surface hydroxyls, and majority, medium-basicity interlayer hydroxide anions. Turnover frequencies for C$_4$–C$_{18}$ triglyceride transesterification with methanol over Mg–HT/Al$_2$O$_3$ are superior to those of conventional (pure) hydrotalcites prepared \textit{via} co-precipitation, particularly for the long-chain triolein naturally occurring at 8–15% in \textit{Jatropha curcas} seed oil,\textsuperscript{68,69} highlighting the potential application of these hydrotalcite coatings in biodiesel production from sustainable biomass. This enhanced reactivity is attributed to the high dispersion of hydrotalcite nanocrystallites over the parent alumina surface and the associated intercrystallite mesopore voids, which eliminate the mass-transport barriers to diffusion of bulky TAGs prevalent within co-precipitated hydrotalcite catalysts. Indeed, the TOFs observed herein for Mg–HT/Al$_2$O$_3$ catalysts are comparable to those for macroporous hydrotalcites\textsuperscript{20} synthesised through less cost-effective and more complex hard-templating protocols employing sacrificial polystyrene
nanospheres. In summary, we have developed a simple, low cost route to depositing crystalline hydrotalcite coatings over high area alumina from benign precursors that affords highly active solid base catalysts for FAME production under mild reaction conditions.
**Acknowledgements**
We thank the EPSRC (EP/G007594/3) for financial support and a Leadership Fellowship (AFL) and studentship (JJG), and the Royal Society for the award of an Industry Fellowship (KW).
**Notes and references**
1 International Energy Outlook 2013, Report DOE/EIA-0484(2013), 2013.
2 N. Armaroli and V. Balzani, Angew. Chem., Int. Ed., 2007, 46, 52–66.
3 R. Luque, L. Herrero-Davila, J. M. Campelo, J. H. Clark, J. M. Hidalgo, D. Luna, J. M. Marinas and A. A. Romero, Energy Environ. Sci., 2008, 1, 542–564.
4 X. Y. Yan, O. R. Inderwildi and D. A. King, Energy Environ. Sci., 2010, 3, 190–197.
5 W. M. J. Achten, L. Verchot, Y. J. Franken, E. Mathijs, V. P. Singh, R. Aerts and B. Muys, Biomass Bioenergy, 2008, 32, 1063–1084.
6 G. Knothe, Top. Catal., 2010, 53, 714–720.
7 M. J. Climent, A. Corma, S. Iborra and A. Velty, J. Catal., 2004, 221, 474–482.
8 U. Constantino, F. Marmottini, M. Nocchetti and R. Vivani, Eur. J. Inorg. Chem., 1998, 1439–1446.
9 K. Narasimharao, A. Lee and K. Wilson, J. Biobased Mater. Bioenergy, 2007, 1, 19–30.
10 M. R. Othman, Z. Helwani, Martunus and W. J. N. Fernando, Appl. Organomet. Chem., 2009, 23, 335–346.
11 Y. Liu, E. Lotero, J. G. Goodwin and X. Mo, Appl. Catal., A, 2007, 331, 138–148.
12 J. Geuens, J. M. Kremsner, B. A. Nebel, S. Schober, R. A. Dommissie, M. Mittelbach, S. Tavernier, C. O. Kappe and B. U. W. Maes, Energy Fuels, 2007, 22, 643–645.
13 Jianbo, Zexue, Z. Tang and Enze, Ind. Eng. Chem. Res., 2004, 43, 7928–7931.
14 G. Knothe, Fuel Process. Technol., 2005, 86, 1059–1070.
15 T. M. Mata, A. A. Martins and N. S. Caetano, Renewable Sustainable Energy Rev., 2010, 14, 217.
16 K. Wilson and A. F. Lee, Catal. Sci. Technol., 2012, 2, 884–897.
17 K. N. Rao, A. F. Lee and K. Wilson, J. Biobased Mater. Bioenergy, 2007, 1, 19.
18 M. J. Kim, S. M. Park, D. R. Chang and G. Seo, Fuel Process. Technol., 2010, 91, 618–624.
19 Y. Xi and R. J. Davis, J. Catal., 2008, 254, 190–197.
20 J. J. Woodford, J.-P. Dacquin, K. Wilson and A. F. Lee, Energy Environ. Sci., 2012, 5, 6145–6150.
21 D. G. Cantrell, L. J. Gillie, A. F. Lee and K. Wilson, Appl. Catal., A, 2005, 287, 183–190.
22 T. F. Dossin, M.-F. Reyniers, R. J. Berger and G. B. Marin, Appl. Catal., B, 2006, 67, 136–148.
23 R. S. Watkins, A. F. Lee and K. Wilson, Green Chem., 2004, 6, 335–340.
24 K. Wilson, C. Hardacre, A. F. Lee, J. M. Montero and L. Shellard, Green Chem., 2008, 10, 654–659.
25 J. M. Montero, P. Gai, K. Wilson and A. F. Lee, Green Chem., 2009, 11, 265–268.
26 M. Verziu, B. Cocojaru, J. Hu, R. Richards, C. Ciuculescu, P. Filip and V. I. Parvulescu, Green Chem., 2008, 10, 373–381.
27 M. C. G. Albuquerque, I. Jimenez-Urbistondo, J. Santamaria-Gonzalez, J. M. Merida-Robles, R. Moreno-Tost, E. Rodriguez-Castellon, A. Jimenez-Lopez, D. C. S. Azevedo, C. L. Cavalcante Jr. and P. Maireles-Torres, Appl. Catal., A, 2008, 334, 35–43.
28 H. A. Pearce and N. Sheppard, Surf. Sci., 1976, 59, 205–217.
29 H. Pines and W. O. Haag, J. Am. Chem. Soc., 1960, 82, 2471–2483.
30 B. Roldan Cuenya, Thin Solid Films, 2010, 518, 3127–3150.
31 C. M. A. Parlett, D. W. Bruce, N. S. Hondow, A. F. Lee and K. Wilson, ACS Catal., 2011, 1, 636–640.
32 J. M. Campelo, A. F. Lee, R. Luque, D. Luna, J. M. Marinas and A. A. Romero, Chem.–Eur. J., 2008, 14, 5988–5995.
33 N. F. Zheng and G. D. Stucky, J. Am. Chem. Soc., 2006, 128, 14278–14280.
34 M. Zabeti, W. M. A. Wan Daud and M. K. Aroua, Fuel Process. Technol., 2009, 90, 770–777.
35 I. Chorkendorff and J. W. Niemantsverdriet, Concepts of Modern Catalysis and Kinetics, Wiley-VCH, Germany, 2003.
36 F. Cavani, F. Trifirò and A. Vaccari, Catal. Today, 1991, 11, 173–301.
37 M. R. Othman, N. M. Rasid and W. J. N. Fernando, Chem. Eng. Sci., 2006, 61, 1555–1560.
38 I. Reyero, I. Velasco, O. Sanz, M. Montes, G. Arzamendi and L. M. Gandia, Catal. Today, 2013, 216, 211–219.
39 D. M. Alonso, R. Mariscal, M. L. Granados and P. Maireles-Torres, Catal. Today, 2009, 143, 167–171.
40 D.-W. Lee, Y.-M. Park and K.-Y. Lee, Catal. Surv. Asia, 2009, 13, 63–77.
41 M. Di Serio, R. Tessier, L. Casale, A. D’Angelo, M. Trifuoggi and E. Santacesaria, Top. Catal., 2010, 53, 811–819.
42 Y. C. Sharma, B. Singh and J. Korstad, Fuel, 2011, 90, 1309–1324.
43 A. A. Refaat, Int. J. Environ. Sci. Technol., 2011, 8, 203–221.
44 BASF AG, UK Patent, 1,462,059-60, 1973.
45 J. L. Paulhiac and O. Clause, J. Am. Chem. Soc., 1993, 115, 11602–11603.
46 Y. F. Gao, A. Nagai, Y. Masuda, F. Sato, W. S. Seo and K. Koumoto, Langmuir, 2006, 22, 3521–3527.
47 S. P. Newman, W. Jones, P. O’Connor and D. N. Stamires, J. Mater. Chem., 2002, 12, 153–155.
48 S. Mitchell, T. Biswick, W. Jones, G. Williams and D. O’Hare, Green Chem., 2007, 9, 373–378.
49 F. Kovanda, P. Masatova, P. Novotna and K. Jiratova, Clays Clay Miner., 2009, 57, 425–432.
50 T. P. Trainor, G. E. Brown Jr and G. A. Parks, J. Colloid Interface Sci., 2000, 231, 359–372.
51 J. B. D. Delacaillerie, M. Kermarec and O. Clause, J. Am. Chem. Soc., 1995, 117, 11471–11481.
52 Y. Xi and R. J. Davis, J. Catal., 2009, 268, 307–317.
53 M. Behrens, S. Kißner, F. Girgsdies, I. Kasatkin, F. Hermerschmidt, K. Mette, H. Ruland, M. Muhler and R. Schlögl, Chem. Commun., 2011, 47, 1701–1703.
54 C. J. Johnson and B. C. Kross, Am. J. Ind. Med., 1990, 18, 449–456.
55 K. Chibwe and W. Jones, J. Chem. Soc., Chem. Commun., 1989, 926.
56 F. Cavani, F. Trifirò and A. Vaccari, Catal. Today, 1991, 11, 173–301.
57 S. Miyata, Clays Clay Miner., 1983, 31, 305–311.
58 A. R. Denton and N. W. Ashcroft, Phys. Rev. A: At., Mol., Opt. Phys., 1991, 43, 3161–3164.
59 H. Pfeiffer, L. Martinez-dicruz, E. Lima, J. Flores, M. A. Vera and J. S. Valente, J. Phys. Chem. C, 2010, 114, 8485–8492.
60 M. A. Al-Daous, A. A. Manda and H. Hattori, J. Mol. Catal. A: Chem., 2012, 363, 512–520.
61 A. Tarlani and M. P. Zarabadi, Solid State Sci., 2013, 16, 76–80.
62 R. Philipp and K. Fujimoto, J. Phys. Chem., 1992, 96, 9035–9038.
63 S. Abello, F. Medina, D. Tichit, J. Perez-Ramirez, X. Rodriguez, J. E. Sueiras, P. Salagre and Y. Cesteros, Appl. Catal., A, 2005, 281, 191–198.
64 S. G. Gagarin and Y. A. Teterin, Theor. Exp. Chem., 1985, 21, 193–197.
65 M. H. Lee, C.-F. Cheng, V. Heine and J. Klinowski, Chem. Phys. Lett., 1997, 265, 673–676.
66 J. Montero, K. Wilson and A. Lee, Top. Catal., 2010, 53, 737–745.
67 J. J. Woodford, C. M. A. Parlett, J.-P. Dacquin, G. Cibin, A. Dent, J. Montero, K. Wilson and A. F. Lee, J. Chem. Technol. Biotechnol., 2014, 89, 73–80.
68 J. Salimon and R. Abdullah, Sains Malays., 2008, 37, 379–382.
69 J. Salimon and W. A. Ahmed, Sains Malays., 2012, 41, 313–317.
Berkeley Center for Law & Technology
2014–2015
ANNUAL BULLETIN
ESTABLISHED IN 1995, the Berkeley Center for Law & Technology (BCLT) is a multidisciplinary research center at the University of California, Berkeley, School of Law. The first of its kind, BCLT has garnered worldwide distinction for its research and instructional program exploring the most pressing technology law and policy issues.
BCLT is uniquely situated in the San Francisco Bay Area, capitalizing on its location:
- at the world’s foremost public research university;
- near Silicon Valley—the engine of the information economy; and
- at the heart of the biotech revolution.
Headed by Executive Director Robert Barr and an internationally esteemed faculty, BCLT frames and advances the law and technology discussion. Equally noteworthy is BCLT’s global community of students, alumni, practicing attorneys, policymakers, and scholars in all sectors of law and technology, who participate with BCLT in a variety of academic, practical, and law reform activities.
Now entering its twentieth year as the foremost research and policy-oriented academic center, BCLT continues to excel in pursuit of its multifaceted mission to examine the complex questions raised by new technologies.
OUR MISSION
The mission of the Berkeley Center for Law & Technology is to foster the beneficial and ethical advancement of technology by guiding the development of intellectual property law, information privacy law, and related areas of law and public policy as they interact with business, science, and technical innovation.
ROBERT BARR IS THE EXECUTIVE DIRECTOR of the Berkeley Center for Law & Technology and Lecturer-in-Residence at UC Berkeley School of Law. Prior to joining BCLT in 2005, he was the first Vice President of Intellectual Property for Cisco Systems in San Jose, California, where he was responsible for all of the company’s patent prosecution, licensing, and litigation.
Robert has been a prominent patent attorney in Silicon Valley for over 30 years. He has degrees in Electrical Engineering and Political Science from the Massachusetts Institute of Technology and a J.D. cum laude from Boston University School of Law. He has been a partner at three major law firms, where he specialized in patent strategy counseling for clients in the computer, telecommunications, and semiconductor industries. In honor of his accomplishments, Robert’s professional colleagues have created a scholarship in his name at the UC Berkeley School of Law. The scholarship is awarded each year to a 2L or 3L J.D. student who has demonstrated an interest in and commitment to the field of law and technology.
During his tenure as Executive Director, Robert has expanded the BCLT community, mentored countless students, and established a high standard of programming that uniquely combines academic and practical perspectives.
“BCLT is carrying out groundbreaking empirical research to expand the body of reliable knowledge about IP, privacy law, and cyberlaw, and we remain committed to offering the best possible education to tomorrow’s leaders in technology law. Driven by our world-class law and technology faculty and our ever-expanding community of illustrious alumni, BCLT continues to earn its place as the premier program of its kind in the country.”
—ROBERT BARR,
BCLT EXECUTIVE DIRECTOR
STARTING WITH A FOCUS on intellectual property, BCLT has expanded over the years beyond its intellectual property core to encompass crucial related subject matter fields, including privacy law, cyberlaw, electronic commerce, entertainment law, telecommunications regulation, and many other areas of constitutional, regulatory, and business law that are affected by new technologies.
Through its curricular development, conferences, symposia, judicial education programs, and other efforts, BCLT provides a forum where distinguished faculty throughout the university and beyond, law students, policymakers, and leading lawyers, entrepreneurs, and tech experts from all parts of the world can exchange and discuss ideas.
For the latest information about BCLT events and programming:
- Join the Berkeley Center for Law & Technology–BCLT group on LinkedIn: law.berkeley.edu/bclt/linkedin
- Like the Berkeley Center for Law & Technology–BCLT page on Facebook: law.berkeley.edu/bclt/facebook
- Follow @BCLTatBoalt on Twitter: law.berkeley.edu/bclt/twitter
Sign up for email updates by contacting firstname.lastname@example.org
Audio, video, photos, slides, and other resources from BCLT events are online: law.berkeley.edu/bclt/pastevents
Over the years, BCLT’s pioneering intellectual property program has provided not only unmatched educational resources but also a multitude of advanced scholarly and collaborative opportunities year-round. Students, scholars, attorneys, and policymakers meet at BCLT events to consider the complex policy and legal issues arising from technological developments and advancements in the contexts of patent law, copyright law, and other IP-related fields.
**USPTO Software Partnership Meeting**
*October 17, 2013 – Berkeley, CA*
BCLT hosted this public meeting held by the U.S. Patent & Trademark Office, bringing together stakeholders from the tech and software communities to discuss software-related patents.
**Altai @ 21: Software Copyrights Revisited**
*October 25, 2013 – Berkeley, CA*
BCLT hosted a day-long workshop gathering IP experts, both academic and in-practice, to look back to the influential 1992 Second Circuit Court of Appeals decision in *Computer Associates v. Altai* and to discuss the current and possible future state of software copyright law.
**5th Annual Patent Law and Policy Conference**
*November 1, 2013 – Washington, D.C.*
BCLT and Georgetown University Law Center together created this unique one-day program with in-practice, academic, government, and judicial experts on the role of the courts in patent law and policy.
**14th Annual Advanced Patent Law Institute**
*December 12–13, 2013 – Palo Alto, CA*
Co-organized by BCLT and Stanford Law School, a nationally recognized faculty of district judges, academics, litigation experts, patent attorneys, and senior IP counsel from major corporations looked in-depth at issues on prosecuting and litigating patents.
**Copyright Roundtable: Remixes, First Sale, and Statutory Damages**
*July 30, 2014 – Berkeley, CA*
This timely USPTO-led roundtable discussion solicited public input on three important topics touching on the scope of current copyright law and its development in the digital environment.
**18th Annual BCLT/BTLJ Symposium: The Next Great Copyright Act**
*April 3–4, 2014 – Berkeley, CA*
This two-day conference brought together noted scholars, policymakers, and representatives of numerous stakeholder groups to consider what changes could be made and how to work towards a comprehensive revision of U.S. copyright law.
**Innovation and Intellectual Property: A Tribute to Suzanne Scotchmer’s Work**
*May 1, 2014 – Berkeley, CA*
A very special day-long event paid fond tribute to Professor Suzanne Scotchmer (1950–2014), one of the most important and influential economists of her generation.
**Multistakeholder Forum on the DMCA Notice and Takedown System**
*May 8, 2014 – Berkeley, CA*
This public meeting, initiated by the Department of Commerce’s Internet Policy Task Force and led by the USPTO and NTIA, called for consensus building on improving the operation of the notice-and-takedown system under the Digital Millennium Copyright Act (DMCA).
**14th Annual Intellectual Property Scholars Conference**
*August 7–8, 2014 – Berkeley, CA*
BCLT hosted this year’s IPSC, presenting a full two-day program in which IP scholars could come together to present and discuss a wide range of in-progress works.
**PRIVACY**
BCLT has developed the premier privacy program for students and researchers working in this interdisciplinary field. The esteemed faculty includes several leading privacy experts with concentrations spanning international and comparative privacy law, online privacy law, consumer privacy law, and the law of surveillance. BCLT faculty also work with corporations and organizations and testify before legislative hearings and government commissions on privacy law topics. BCLT hosts numerous events on cutting-edge privacy issues.
**Privacy Roundtable: Pulling the Curtain Back to Reveal the New World of Web Tracking**
*October 16, 2013 – Palo Alto, CA*
A panel of experts discussed the complex and invisible data ecosystem on websites, the business and legal implications, and the data governance decisions that companies face.
**Developments in California Privacy Law: Assessing the Present and Predicting the Future**
*December 3, 2013 – San Francisco, CA*
BCLT presented, in conjunction with Paul Hastings LLP, a privacy roundtable with leading experts and top policymakers to discuss the impact of recent changes in California privacy law and its likely future direction.
**Online Tracking Workshop: Developing an International Consensus for Consumer Protection and Privacy Online**
*February 11, 2014 – Brussels, Belgium*
Following up on a previous workshop, BCLT and the University of Amsterdam Institute for Information Law (IViR) reconvened policymakers, academics, and technical experts to revisit the state of online tracking technology as well as policy implications and efforts.
**Civil Liberties, Privacy, and National Security: A Conversation with the Privacy and Civil Liberties Oversight Board**
*February 26, 2014 – Berkeley, CA*
BCLT was a co-sponsor of this special event at UC Berkeley’s School of Information, in which members of the U.S. Privacy and Civil Liberties Oversight Board discussed with commentators important current topics as well as the Board’s role going forward.
**Developments in California Health Privacy Law: Present and Future Trends**
*February 27, 2014 – San Francisco, CA*
BCLT and Paul Hastings LLP presented a health privacy roundtable to discuss California health privacy law and why it matters more than ever.
**3rd Annual BCLT Privacy Law Forum: Silicon Valley**
*March 14, 2014 – Palo Alto, CA*
Leading academics and practitioners discussed the latest developments in privacy law and explored “real world” privacy law problems. This year’s keynote speaker was Jan Albrecht, Member, European Parliament, and rapporteur for the Data Protection Regulation.
**Big Data: Values and Governance**
*April 1, 2014 – Berkeley, CA*
Co-hosted by BCLT, the White House Office of Science and Technology Policy (OSTP), and the UC Berkeley School of Information, this all-day workshop examined the policy and governance questions raised by the use of large and complex data sets and sophisticated analytics.
**7th Annual Privacy Law Scholars Conference**
*June 5–6, 2014 – Washington, DC*
Held jointly by BCLT and the George Washington University Law School, PLSC assembled a wide array of privacy law scholars and practitioners from around the world to discuss current issues and foster greater connections between academia and practice.
**INTERNATIONAL CONNECTIONS**
In today’s interconnected world, intellectual property, privacy, and other technology law issues must be addressed on a global scale. Recognizing this, BCLT collaborates with international scholars, lawyers, entrepreneurs, public officials, and students to discuss and dissect differences in the IP and regulatory regimes of various countries and to confront important legal issues surrounding the international development of technology law. BCLT has strong working relationships with a number of international universities and organizations, including the Seoul National University Center for Law and Technology; Tel-Aviv University, Israel; and the Institute for Information Law, University of Amsterdam.
**Law in the Global Marketplace: Intellectual Property and Related Issues**
*February 24, 2014 – San Francisco, CA*
BCLT was a co-sponsor, along with Hogan Lovells, of this one-day summit hosted by UC Hastings featuring prominent judges, in-house counsel, and leading scholars addressing key issues in IP law on three continents: Asia, North America, and Europe.
---
**JUDICIAL EDUCATION**
Since 1998, Professor Peter Menell and BCLT, in conjunction with the Federal Judicial Center (FJC), have organized an annual intellectual property education program for the federal judiciary. These one-week training sessions have drawn more than 500 judges from across the country. Professor Menell has also organized numerous advanced programs on patent law, copyright law, trademark law, cyberlaw, and the interplay of IP and bankruptcy law for the FJC, as well as intellectual property presentations, panels, and symposia for various circuit court and district court conferences. *The Patent Case Management Judicial Guide, Second Edition*, by Professor Menell, Lynn Pasahow, James Pooley, Matthew Powers, Steven Carlson, and Jeffrey Homrig, is an authoritative treatise on all aspects of patent case management.
---
**DIGITAL ENTERTAINMENT**
BCLT is exploring challenges raised at the crossroad where entertainment law converges with digital media and technology. In addition to significant conferences and networking opportunities, innovative courses such as a music law seminar consider this increasingly popular and vital area of law, which involves a unique set of emerging—and rapidly changing—legal issues in traditional content, sports, and entertainment fields as well as new media.
**21st Century Musician: Making a Living Making Music**
*February 8, 2014 – Berkeley, CA*
Hosted by BCLT and co-presented by California Lawyers for the Arts (CLA) and the Sports and Entertainment Law Society of Berkeley Law, CLA’s 31st Annual Music Business Seminar assembled students, musicians, music industry professionals, and entertainment attorneys to discuss the current state of the music industry.
**Legal Frontiers in Digital Media**
*May 15–16, 2014 – Mountain View, CA*
BCLT cohosted the 7th annual conference, exploring emerging legal issues surrounding digital content in today’s multiplatform world, including digital video convergence, scraping content, digital media in the age of NSA surveillance, online advertising and privacy regulations, and mobile legal issues.
BCLT’s renowned faculty produces high-quality, high-impact scholarship, research, and policy initiatives. The range and depth of their expertise and their skill as instructors combine to form the most comprehensive and advanced academic program in law and technology in the country.
BCLT faculty members also play a direct and influential role beyond the academic setting, participating in public policy debates, testifying before federal and state congressional hearings, performing advisory roles, and submitting *amicus curiae* briefs in important cases.
**IN MEMORIAM: SUZANNE SCOTCHMER**
January 23, 1950 – January 30, 2014
Suzanne Scotchmer, professor of law, economics, and public policy, earned her undergraduate degree in economics from the University of Washington and an M.A. in statistics and a Ph.D. in economics from UC Berkeley. She was at Harvard University from 1981 to 1986 before returning to Berkeley for the rest of her career, first as an assistant and associate professor of public policy in 1986, then becoming a professor of economics in 1995 and a professor of law in 2008. She joined BCLT as a faculty director in 2008. In addition, she held multiple academic appointments, served on numerous committees and boards, and notably consulted with the U.S. Court of Appeals for the Federal Circuit.
Professor Scotchmer’s main scholarly interest was examining the economics, policy, and law of innovation, making major contributions to applied economic theory, in particular club theory, evolutionary game theory, and models of innovation with an emphasis on patents and other incentives for research and development. She was celebrated in particular for her seminal 1991 article, *Standing on the Shoulders of Giants*, and her groundbreaking book, *Innovation and Incentives* (2004).
Brilliant, tireless in her enthusiasm, and an inspiration to so many, Suzanne will be missed by all in her BCLT family.
Kenneth A. Bamberger
administrative law, technology and governance, information privacy and data governance, the First Amendment
Ken Bamberger is an expert on government regulation and corporate compliance, especially with regard to issues of technology, free expression, and information privacy. Professor Bamberger’s research more generally covers risk regulation, the use of technology in regulation and compliance, and the role of private actors in regulation. *Privacy on the Ground*, his groundbreaking study of corporate privacy practices in the U.S. and Europe (conducted with Professor Deirdre Mulligan), will be published by MIT Press in 2015.
Recent Publications
*Privacy in Europe: Initial Data on Governance Choices and Corporate Practices*, 81 George Washington Law Review 1529 (2013) (with Deirdre K. Mulligan)
*Selected as a top privacy paper of 2013 by the Future of Privacy Forum*
*What Regulators Can Do to Advance Privacy Through Design*, Communications of the ACM, Vol. 56 No. 11 (2013) (with Deirdre K. Mulligan)
*PIA Requirements and Privacy Decisionmaking in U.S. Government Agencies, in Privacy Impact Assessment* (Wright & De Hert, eds.) (2012) (with Deirdre K. Mulligan)
Chris Jay Hoofnagle
privacy, computer crime, online advertising, web privacy measurement
Chris Hoofnagle is Director of Information Privacy Programs for BCLT. His research focuses on the structure of legal and economic relationships that lead to tensions between firms and individuals manifested through information privacy problems, gaps in understanding of legal protections, deficits in consumer law protections, and the problem of financial fraud. Professor Hoofnagle has written extensively in the fields of information privacy, the law of unfair and deceptive practices, consumer law, and identity theft. He has also written on payments technologies with a focus on mobile payments, consumer attitudes toward and knowledge of privacy law, identity theft, the First Amendment, and the government’s reliance on private-sector databases to investigate citizens.
Recent Publications
*Alan Westin’s Privacy Homo Economicus*, 49 Wake Forest Law Review 261 (2014) (with Jennifer M. Urban)
*The Price of ‘Free’: Accounting for the Cost of the Internet’s Most Popular Price*, 61 UCLA Law Review 606 (2014) (with Jan Whittington)
*Privacy and Advertising Mail*, Berkeley Center for Law & Technology Research Paper (2012) (with Jennifer M. Urban & Su Li)
*Behavioral Advertising: The Offer You Cannot Refuse*, 6 Harvard Law & Policy Review 273 (2012) (with Soltani, Good, Wambach, & Ayenson)
*2014 Computers, Privacy & Data Protection Multidisciplinary Privacy Research Award winner*
Peter Menell
intellectual property, computer law, entertainment law, property law, environmental law
Peter Menell is Koret Professor of Law. Reflecting his training in economics and law, Professor Menell’s research focuses principally on the role and design of intellectual property law with particular emphasis on the digital technology and content industries. His current projects explore the role of notice in developing tangible and intangible resources, copyright reform, and the adaptation of content and digital technology industries to the internet age. He also filed amicus briefs in four important cases. During 2012–13, he served as one of the inaugural Thomas Alva Edison Visiting Professionals at the United States Patent & Trademark Office.
Recent Publications
*Intellectual Property, Innovation, and the Environment* (2014) (with Sarah Tran)
*This American Copyright Life: Reflections on Re-equilibrating Copyright for the Internet Age* (42nd Brace Lecture), 61 Journal of the Copyright Society of the USA 235 (2014)
*Using Fee Shifting to Promote Fair Use and Fair Licensing*, 102 California Law Review 53 (2014) (with Ben Depoorter)
*Informal Deference: A Historical, Empirical, and Normative Analysis of Patent Claim Construction*, 108 Northwestern University Law Review 1 (2014) (with J. Jonas Anderson)
*2014: Brand Totalitarianism*, 47 UC Davis Law Review 787 (2014)
*The Mixed Heritage of Federal Intellectual Property Law and Ramifications for Statutory Interpretation, in Intellectual Property and the Common Law* (S. Balganesh, ed.) (2013)
Robert P. Merges
patents, intellectual property, economics, technology markets & valuation
Robert Merges is Wilson Sonsini Goodrich & Rosati Professor of Law. He is the author of *Justifying Intellectual Property*, published by Harvard University Press in 2011. A comprehensive statement of mature views on the ethical and economic foundations of IP law, the book reviews foundational philosophical theories of property and contemporary theories about distributive justice and applies them to IP; identifies operational high-level principles of IP law; and, with all this as background, works through several pressing problems facing IP law today. Professor Merges also has undertaken extensive revisions to two of the casebooks he coauthors, to update them in light of the America Invents Act which largely took effect in 2013.
Recent Publications
*Economics of Intellectual Property Law*, in Oxford Handbook of Law and Economics (F. Parisi, ed.) (forthcoming)
*The Path of IP Studies: Growth, Diversification, and Hope*, 92 Texas Law Review 1757 (2014) (with Golden & Samuelson)
*Foundations and Principles Redux: A Reply to Professor Blankfein-Tabachnick*, 101 California Law Review 1361 (2013)
*The Relationship Between Foundations and Principles in IP Law*, 49 San Diego Law Review 957 (2012)
*Priority and Novelty Under the AIA*, 27 Berkeley Technology Law Journal 1023 (2012)
Deirdre K. Mulligan
information technology law & policy, privacy, security, copyright
Deirdre Mulligan’s current research agenda focuses on information privacy, security, cybersecurity and fairness. *Privacy on the Ground: Governance Choices and Corporate Practice in the US and Europe*, the culmination of Professors Mulligan and Bamberger’s five-year empirical comparative research project exploring regulatory structures and corporate governance of privacy in five countries, will be published in Fall 2015 by MIT Press. Other areas of research include digital rights management technology and privacy and security issues in sensor networks and visual surveillance systems, and alternative legal strategies to advance network security.
Recent Publications
*Privacy in Europe: Initial Data on Governance Choices and Corporate Practices*, 81 George Washington Law Review 1529 (2013) (with Kenneth A. Bamberger)
*It's Not Privacy, and It's Not Fair*, 66 Stanford Law Review Online 35 (2013) (with Cynthia Dwork)
*Internet Multistakeholder Processes and Techno-Policy Standards: Initial Reflections on Privacy at the World Wide Web Consortium*, 11 Journal on Telecommunications & High Technology Law 135 (2013) (with Nick Doty)
Pamela Samuelson
copyright, patent, internet and digital media, cyberlaw
Pamela Samuelson is Richard M. Sherman Distinguished Professor of Law and Information. Her recent work has focused on updating and adapting U.S. copyright law to meet challenges of the digital age. She has proposed several ways in which the law should be reformed and has considered various modes and venues through which reform might be achieved. Other recent work has focused on mass digitization and intellectual property protection for computer programs, on which she has coauthored several *amicus curiae* briefs in pending cases.
Recent Publications
*The Path of IP Studies: Growth, Diversification, and Hope*, 92 Texas Law Review 1757 (2014) (with Golden & Merges)
*Book Review: Is Copyright Reform Possible?* 126 Harvard Law Review 740 (2013)
*Essay: A Fresh Look at Tests for Nonliteral Copyright Infringement*, 107 Northwestern University Law Review 1821 (2013)
*The Quest for a Sound Conception of Copyright's Derivative Work Right*, 101 Georgetown Law Journal 1505 (2013)
*Solving the Orphan Works Problem for the United States*, 37 Columbia Journal of Law & the Arts 1 (2013) (with Hansen, Hashimoto, Hinze, & Urban)
*Statutory Damages: A Rarity in Copyright Laws Internationally, but for How Long?* 60 Journal of the Copyright Society of the USA 529 (2013) (with Hill & Wheatland)
Paul M. Schwartz
privacy, data security, international data protection law, cyberlaw, intellectual property
Paul Schwartz is Jefferson E. Peyser Professor of Law. His scholarship focuses on how the law has sought to regulate and shape information technology. His most frequently researched topic concerns information privacy and data security. At present, Professor Schwartz is engaged in several different research projects, including research into comparative privacy developments in the U.S. and the European Union as well as the interplay between state and federal privacy law.
Recent Publications
*Reconciling Personal Information in the U.S. and EU*, 102 California Law Review 877 (2014) (with Daniel J. Solove)
*The Battle for Leadership in Education Privacy Law*, SafeGov.org (March 27, 2014) (with Daniel Solove)
*In Practice: The 'California Effect' on Privacy Law*, The Recorder (Jan. 2, 2014)
Testimony, *Balancing Privacy and Opportunity in the Internet Age*, California Assembly Informational Hearing (Dec. 12, 2013)
*The EU-US Privacy Collision: A Turn to Institutions and Procedures*, 126 Harvard Law Review 1966 (2013)
*Information Privacy in the Cloud*, 161 University of Pennsylvania Law Review 1623 (2013)
Talha Syed
pharmaceutical patents, copyright, torts, health law, regulatory law & policy, normative legal theory
Talha Syed’s research focuses on institutional and normative analysis of patents, copyright, and alternative innovation policies; and normative legal theory with special reference to torts, health, and education policy. Among Professor Syed’s recent and current projects are: comparative-institutional analysis of patent reforms, prizes, public funding and regulatory incentives for improving the social impact of innovation in health; democratic and distributive analysis of copyright; economic analysis of product differentiation models of copyright; and the development of a distributive justice approach to disability in the context of health law, torts, and education policy.
Recent Publications
*Infection: The Health Crisis in the Developing World and What We Should Do About It* (Stanford University Press, forthcoming) (with William W. Fisher III)
*Beyond Efficiency: Consequence-Sensitive Theories of Copyright*, 29 Berkeley Technology Law Journal 229 (2014) (with Oren Bracha)
*Beyond the Incentive-Access Paradigm? Product Differentiation & Copyright Revisited*, 92 Texas Law Review 1841 (2014) (with Oren Bracha)
*The Continuum of Excludability and the Limits of Patents*, 122 Yale Law Journal 1900 (2013) (with Amy Kapczynski)
*Distributive Equity for Disability in Education and Health: The Principle of Proportionate Benefit* (forthcoming)
*Amelioration, Not Compensation: Pain and Suffering as Distributive Equity* (forthcoming)
Jennifer M. Urban
copyright, intellectual property, privacy, licensing, emerging artists, patents
Jennifer M. Urban directs the Samuelson Law, Technology & Public Policy Clinic. She is presently working on a series of empirical studies of consumer understandings of and attitudes toward privacy in mobile and web applications with Chris Hoofnagle; on digitization and libraries with the Berkeley Digital Library Copyright Project; and on the Takedown Project (takedownproject.org), a broad, collaborative effort to research takedown of content by intermediaries. Her recent paper with Mark Lemley shows that judges with more experience handling patent cases are more likely to rule for defendants, and her recent paper with Chris Hoofnagle empirically questions longstanding research used to support the dominant "notice and choice" regime in privacy regulation. In the Clinic, Professor Urban is working on copyright limitations and exceptions for emerging artists and cultural repositories; privacy in the cloud, in "smart" ecosystems, and in the "Internet of Things"; and government and private surveillance.
Recent Publications
*Alan Westin's Privacy Homo Economicus*, 49 Wake Forest Law Review 261 (2014) (with Chris Jay Hoofnagle)
*Does Familiarity Breed Contempt Among Judges Deciding Patent Cases?* 66 Stanford Law Review 1121 (2014) (with Mark A. Lemley)
*Solving the Orphan Works Problem for the United States*, 37 Columbia Journal of Law & the Arts 1 (2013) (with Hansen, Hashimoto, Hinze, & Samuelson)
Molly S. Van Houweling
copyright, digital media, intellectual property, technology law
Much of Molly Van Houweling’s research focuses on copyright law’s implications for new information technologies (and vice versa). One strand of her research explores how legal rules designed to regulate sophisticated commercial actors impact unsophisticated individuals who are empowered by information technology. Another strand explores how those individuals are deploying copyright law themselves in ways that appear both to enrich and complicate the creative environment. Professor Van Houweling often explores these and other intellectual property issues using theoretical and doctrinal tools borrowed from the law of tangible property. She is currently working on a book, tentatively entitled *Property’s Intellect*, that focuses on these connections.
Recent Publications
*Land Recording and Copyright Reform*, 28 Berkeley Technology Law Journal 1497 (2013)
*Technology and Tracing Costs: Lessons from Real Property*, in *Intellectual Property and the Common Law* (S. Balganesh, ed.) (2013)
*Atomism and Automation*, 27 Berkeley Technology Law Journal 1471 (2012)
BCLT FELLOWS
**KEVIN HICKEY** is the BCLT Microsoft Research Fellow. Prior to joining BCLT, he was the Furman Academic Fellow at New York University School of Law, where he published several works on copyright law, most recently *Consent, User Reliance, and Fair Use* in the Yale Journal of Law and Technology (2014). His current research project examines the historical origins of copyright's substantial similarity doctrine. Kevin has a J.D. *magna cum laude* from NYU School of Law and a B.A. in mathematics from Brown University. He clerked for the Hon. Diana Gribbon Motz of the Fourth Circuit Court of Appeals, and he spent several years practicing intellectual property litigation at Covington & Burling LLP.
**MICHAEL WOLFE** is a Copyright Research Fellow who is developing research and commentary to promote public interest authorship in the digital age. He received his J.D. from Duke in 2013 and holds a B.A. in Social Studies from Harvard.
**KATHRYN HASHIMOTO** is a Copyright Research Fellow. While working in book publishing, she attended the University of San Francisco School of Law and received her J.D. in 2010. She also interned at the Electronic Frontier Foundation.
**PATRICK GOOLD**, 2012–14 BCLT Microsoft Fellow, is currently IP Fellow at IIT Chicago-Kent College of Law. His research interests include competition law, IP law, public international law and legal theory. He completed his undergraduate studies at Newcastle University and holds an LL.M. degree from Cornell University and a Ph.D. in Law from the International Max Planck Research School for Competition and Innovation, Germany.
**MICHAEL MATTIOLI**, 2011–12 BCLT Microsoft Fellow, is currently an Associate Professor at the Indiana University Maurer School of Law, where he teaches contracts and a variety of IP courses. His scholarship focuses on communities of innovation. He graduated from Penn Law School in 2007.
**JONAS ANDERSON**, 2009–11 BCLT Microsoft Fellow, is currently Assistant Professor of Law at American University, Washington College of Law, where he teaches courses in patent law, trade secret law, and real property. He graduated from Harvard Law School in 2006.
**AARON PERZANOWSKI**, 2008–09 BCLT Microsoft Fellow, is currently Associate Professor of Law at Case Western Reserve University School of Law. His research is in the areas of copyright, trademark, and telecommunications law. He graduated from UC Berkeley School of Law in 2006.
**STUART GRAHAM**, 2007–09 BCLT Kauffman Foundation Fellow in Social Science and the Law, served as the first Chief Economist of the USPTO. He teaches and conducts research on the economics of the patent system, IP strategies and transactions, and the relationship of IP to entrepreneurship and the commercialization of new technologies. He is currently an Assistant Professor at Georgia Tech Scheller College of Business.
**TED SICHELMAN**, 2008–09 BCLT Kauffman Foundation Legal Research Fellow, is currently Professor of Law at the University of San Diego School of Law, teaching intellectual property law. His current research includes exploring theories of patent remedies, the effects of the patent system on entrepreneurial companies, and the role of patent law in technology commercialization. He graduated from Harvard Law School in 1999.
**MIRIAM BITTON**, the 2007–08 BCLT Microsoft Fellow, is currently a Law Professor at Bar-Ilan University in Israel. She writes and teaches in the fields of intellectual property law, law and technology, and property. She earned LL.B. and LL.M. degrees, *magna cum laude*, from Bar-Ilan University in Israel, and LL.M. and S.J.D. degrees from the University of Michigan Law School.
Louise Lee
Associate Director
Year joined BCLT: 2005
Claire Trias
Assistant Director, Program Development & Student Engagement
Year joined BCLT: 2013
Erin Proudfoot
Assistant Director, Events & Communications
Year joined BCLT: 2014
BCLT PARTNERSHIPS & AFFILIATES
BCLT HAS EXPANDED ITS PRESENCE within and beyond the law school, developing relationships locally, nationally, and internationally. BCLT is involved in cross-disciplinary programs with other schools and departments, on the UC Berkeley campus and elsewhere, and with a variety of outstanding affiliated scholars with interests in technology law, including some of the top economists and technologists working on technology policy issues. BCLT also works closely with other groups, research centers, and programs, including the Samuelson Law, Technology & Public Policy Clinic at the UC Berkeley School of Law, to produce outstanding scholarly works and organize law and technology-related events.
SAMUELSON LAW, TECHNOLOGY & PUBLIC POLICY CLINIC
Founded in 2001, the Samuelson Law, Technology and Public Policy Clinic (SLTPPC) provided the first opportunity in legal academia for students to represent public interest clients, concerns, and constituencies in key debates and litigation concerning the fundamental public policies at the intersection of law and technology. Today, it is the leader in a growing movement of clinics that give law students hands-on training in advocacy in the areas of IP, technology, and civil liberties.
BCLT and SLTPPC work on cutting-edge scholarship, research, and policy initiatives that help governments, academic institutions, and private entities develop sound technology-related policies and practices in the digital age.
Beginning in Fall 2014, Catherine Crump joins Clinic Director Jennifer M. Urban at SLTPPC as Associate Director and Assistant Clinical Professor of Law. She graduated from Stanford Law School in 2004 and clerked for the Hon. M. Margaret McKeown on the Ninth Circuit Court of Appeals. Prior to joining the Clinic, she was a staff attorney at the American Civil Liberties Union’s Speech, Privacy and Technology Project. Professor Crump is a FOIA (Freedom of Information Act) expert and has led students in FOIA projects with Professor Jason Schultz at the Technology Law and Policy Clinic at NYU. Her recent research has focused on license plate readers and drone surveillance.
AFFILIATED EVENTS
The Net: Utopia vs. Dystopia
September 27, 2013 – San Francisco, CA
This informative Tech Talk, a featured event for Alumni Weekend, was co-sponsored by BCLT in cooperation with the Boalt Hall Alumni Center.
Alumni Reunion: Hot Topics in Law and Technology
September 28, 2013 – Berkeley, CA
BCLT Executive Director Robert Barr moderated an expert panel of Boalt alums discussing the latest developments and trends at the intersection of law and technology.
2nd Annual ChIPs Women in IP Global Summit 2013
October 1–2, 2013 – Washington, D.C.
BCLT was a proud co-sponsor of this summit event with ChIPs, an organization that supports and promotes the advancement of women in technology and IP.
Expert Witnesses: Maximizing Their Effectiveness
February 26, 2014 – Berkeley, CA
BCLT sponsored this ABA Regional Workshop of the ABA Section of Litigation, Expert Witnesses Committee.
Federal Judicial Center Dinner
May 20, 2014 – Berkeley, CA
BCLT hosted this special reception and dinner in honor of the judges attending BCLT’s Federal Judicial Center Intellectual Property Training.
Networking Dinner
July 23, 2014 – Berkeley, CA
BCLT hosted a summer networking dinner for lawyers from Tel Aviv University’s International Executive LL.M. program.
AFFILIATED FACULTY & SCHOLARS
Aaron Edlin (School of Law and Economics Department, UC Berkeley)
Joseph Farrell (Economics Department, UC Berkeley)
Rebecca Giblin (Faculty of Law, Monash University)
Richard J. Gilbert (Economics Department, UC Berkeley)
Bronwyn H. Hall (Economics Department, UC Berkeley)
Thomas Jorde (School of Law, UC Berkeley)
Michael L. Katz (Haas School of Business and Economics Department, UC Berkeley)
David Mowery (Haas School of Business, UC Berkeley)
David Nimmer (School of Law, UCLA)
Daniel Rubinfeld (School of Law, UC Berkeley)
AnnaLee Saxenian (I School and Department of City and Regional Planning, UC Berkeley)
Jason M. Schultz (NYU School of Law)
Carl Shapiro (Haas School of Business and Economics Department, UC Berkeley)
Howard Shelanski (Administrator of the Office of Information and Regulatory Affairs at the Office of Management and Budget)
David Teece (Haas School of Business, UC Berkeley)
Hal R. Varian (Emeritus: I School, Haas School of Business, and Economics Department, UC Berkeley)
David Winickoff (Environmental Science, Policy, & Management, UC Berkeley)
AFFILIATED PROGRAMS
Berkeley Roundtable on the International Economy (BRIE), UC Berkeley
Center for Information Technology Research in the Interest of Society (CITRIS), UC Berkeley
Center for Intellectual Property Studies (CIP), University of Gothenburg & Chalmers University of Technology
Competition Policy Center (CPC), Haas School of Business, UC Berkeley
Haifa Center of Law & Technology (HCLT), University of Haifa
Institute for Business Innovation, Haas School of Business, UC Berkeley
Institute for Information Law (IViR), University of Amsterdam
Lester Center for Entrepreneurship, Haas School of Business, UC Berkeley
Samuelson Law, Technology, & Public Policy Clinic, UC Berkeley School of Law
School of Information (I School), UC Berkeley
Seoul National University Center for Law & Technology, Seoul National University
Swiss Federal Institute of Technology – Eidgenössische Technische Hochschule (ETH) Zürich
Team for Research in Ubiquitous Secure Technology (TRUST)
Tel Aviv University Executive LL.M. Program
BCLT attracts the very best students and offers them the most comprehensive instructional program in law and technology. Featuring foundational and advanced courses taught by noted BCLT regular faculty, many of whom use their own leading casebooks, and an adjunct faculty of expert practitioners, BCLT gives UC Berkeley law students an unmatched educational experience, making them among the hiring prospects most sought after by top law firms and organizations.
As well, BCLT’s program extends beyond the classroom, providing students a number of diverse, unique opportunities to immerse themselves in the ever-expanding ecosystem of technology and the law. From informative networking functions to career fairs and more, BCLT supports a number of co-curricular and extracurricular activities to enrich the law school experience. BCLT also provides primary funding to Moot Court for its Intellectual Property Law, Technology Law, and Entertainment Law Competitions.
LAW & TECHNOLOGY CERTIFICATE PROGRAM
BCLT offers the nation’s leading program in law and technology for students interested in concentrating their studies in this area. The Law & Technology Certificate recognizes successful completion of a specialized course of study in addition to an activity component. The curricular requirements emphasize depth and breadth of coverage and afford students substantial flexibility in adapting their course of study toward a range of career paths at the growing intersection of law and technology.
**BCLT STUDENT EVENTS**
For students interested in law and technology, BCLT provides information, guidance, and support, from Day One to graduation and beyond.
**LAW & TECHNOLOGY PROGRAM ORIENTATION**
BCLT welcomes incoming students with information on its law and technology program as well as related extracurricular opportunities at the law school.
**FALL RECEPTION**
BCLT hosts this annual fall networking mixer for law students and top IP law firms in the area.
**SPRING CAREER FAIR**
BCLT introduces representatives from leading law firms to law students interested in technology law.
**SUMMER MIXER**
BCLT brings together law students and top IP law firms for an exclusive summer networking opportunity.
**BCLT/BTLJ LAW & TECH SPEAKER SERIES**
Prominent practitioners bring real-world experience and practical legal knowledge to students on a variety of law and technology–focused topics.
---
**BCLT STUDENT GROUPS**
BCLT provides administrative and financial support to eight student groups. These groups concentrate on specific legal skills or areas of the law, including public interest, allowing students to supplement their law school education with invaluable law & technology–focused activities.
**BERC@BOALT**
BERC@Boalt is the law school branch of the Berkeley Energy & Resources Collaborative, a student-led organization which aims to connect and educate the UC Berkeley energy and resources community. BERC@Boalt helps to inform law students about current legal practice and advances in the fields of energy, climate and clean technologies. It does this through the development of curriculum, the continuing expansion of an alumni and professional network, the promotion of events and discussions centered on green issues, and the creation of a Career Guide for Energy, Climate and CleanTech Law.
**BOALT.ORG**
boalt.org is the law school’s public interest and technology group. Its activities fall into three main categories. First, boalt.org works to make information technology more useful and accessible to UC Berkeley law students. Second, boalt.org advocates on behalf of the public interest in debates over law and technology. Third, boalt.org provides a community and social activities for those who might best be described as law and tech geeks.
**BERKELEY TECHNOLOGY LAW JOURNAL**
The Berkeley Technology Law Journal (BTLJ) is a student-run publication that covers emerging issues in the areas of intellectual property, privacy and cyberlaw. Since 1986, BTLJ has kept judges, policymakers, practitioners and the academic community abreast of the dynamic field of technology law. The Journal’s membership of approximately 150 students publishes three issues of scholarly work each year, plus the Annual Review of Law and Technology. The Annual Review is a distinctive issue of the Journal published in collaboration with BCLT and is dedicated to student-written casenotes discussing the most important recent developments in this sector.
Each year, BTLJ co-hosts the Annual BCLT/BTLJ Symposium. For each conference, BTLJ publishes a symposium issue, featuring articles by leading academics on the issues raised at the conference. These symposia often pioneer new areas of research by introducing topics that are far-reaching and significant, but have yet to enter the public discourse.
**CONSUMER ADVOCACY AND PROTECTION SOCIETY (CAPS)**
The Berkeley Consumer Advocacy and Protection Society (CAPS) is dedicated to fostering research, discussion and advocacy in the field of consumer protection law. It seeks to strengthen ties between consumer law groups and the UC Berkeley School of Law community. This includes creating networks between consumer law attorneys, advocacy organizations and the student body. Its activities promote the field of consumer protection law and provide training opportunities for students, foster community among student advocates whose interests intersect with consumer protection, and encourage and maintain consumer protection curriculum and clinic opportunities at the law school.
HEALTHCARE AND BIOTECH LAW SOCIETY
Members of the Healthcare and Biotech Law Society examine and analyze the intersection between law, society, policy and science. Their mission is to foster discussion on emerging health/biotech issues and to stimulate the intellectual and professional development of Boalt students interested in these issues. They do this by organizing networking events with practitioners, promoting health and biotech courses at the UC Berkeley School of Law and increasing interaction between the law school and other healthcare and biotech related institutions at UC Berkeley and beyond.
PATENT LAW SOCIETY
The Boalt Hall Patent Law Society provides a forum for students interested in practicing patent law to discuss and debate the latest developments in this specialty. The Group often engages patent law practitioners to share their experiences through student presentations. These and other events provide opportunities for members of the Patent Law Society to interact, network and exchange ideas.
SPORTS AND ENTERTAINMENT LAW SOCIETY
The mission of the Sports and Entertainment Law Society (SELS) is to educate the UC Berkeley School of Law community about legal opportunities and issues in the entertainment and sports industries. SELS also strives to facilitate opportunities for students to network not only with each other, but also with legal professionals in these industries. SELS regularly sponsors many events during the academic year, including guest lectures and social events.
UNIVERSITIES ALLIED FOR ESSENTIAL MEDICINES
Universities Allied for Essential Medicines (UAEM) links members of universities in the U.S., the U.K. and Canada who are concerned about patient access to medicines in poor countries. Its mission is to promote access to medicines and medical innovations in low and middle-income countries by changing norms and practices around academic patenting and licensing; to ensure that university medical research meets the needs of people worldwide; and to empower students to respond to the access and innovation crisis.
BCLT STUDENTS IN THE NEWS
UC Berkeley School of Law students Wilson Dunlavey and Christina Farmer took first place at the 23rd Annual Saul Lefkowitz Moot Court Competition. Presented annually by the International Trademark Association, the Lefkowitz Competition focuses on issues of U.S. trademark and unfair competition law. The triumphant pair were among a starting field of 86 teams from 61 different law schools, competing at the regional level before advancing as one of 10 teams at the national finals.
BCLT 2014–2015 SPONSORS
PARTNERS
Cooley LLP
Fenwick & West LLP
Orrick, Herrington & Sutcliffe LLP
BENEFACTORS
Covington & Burling LLP
Fish & Richardson P.C.
Kasowitz Benson Torres & Friedman LLP
Kirkland & Ellis LLP
Latham & Watkins LLP
McDermott Will & Emery
Morrison & Foerster LLP
Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates
Weil, Gotshal & Manges LLP
White & Case LLP
Wilmer Cutler Pickering Hale and Dorr LLP
Wilson Sonsini Goodrich & Rosati
Winston & Strawn LLP
MEMBERS
Baker Botts LLP
Baker & McKenzie LLP
Durie Tangri LLP
GTC Law Group LLP & Affiliates
Gunderson Dettmer Stough Villeneuve Franklin & Hachigian, LLP
Haynes and Boone, LLP
Hickman Palermo Truong Becker Bingham Wong LLP
Hogan Lovells LLP
Irell & Manella LLP
Keker & Van Nest LLP
Kilpatrick Townsend & Stockton LLP
Knobbe Martens Olson & Bear LLP
Munger, Tolles & Olson LLP
O’Melveny & Myers LLP
Paul Hastings LLP
Ropes & Gray LLP
Sidley Austin LLP
Simpson Thacher & Bartlett LLP
Turner Boyd LLP
Van Pelt, Yi & James LLP
Weaver Austin Villeneuve & Sampson LLP
CORPORATE BENEFACTORS
Apple, Inc.
Cisco Systems
Google Inc.
Microsoft Corporation
Nokia
RPX Corporation
SAP
Tessera Technologies
Warner Bros.
Yahoo!
CREDITS: Design by Nancy Austin; front cover (large) photograph by Tim Griffith; page 2 (top) photograph by Michael Lyefsky; page 13 photograph by Ethan Kaplan; pages 8 (top), 14 (top), and back cover photographs by Nancy Austin; page 18 photograph by Michael Ventura
SAVE THE DATE: 2014-2015 EVENTS
7th Annual BCLT Privacy Lecture
October 6, 2014 – Berkeley, CA
Fall Reception for Boalt Students and BCLT Sponsors
November 5, 2014 – Berkeley, CA
Troll-Proofing Patents: Protecting Open Innovation
November 7, 2014 – Berkeley, CA
The Role of the Courts in Patent Law & Policy
November 7, 2014 – Washington, DC
Conference on Chinese IP Law
November 7, 2014 – Los Angeles, CA
ChIPs Women in IP Global Summit
Fall 2014 – Washington, DC
Private Security and Regulatory Space: In Search of the Public Interest
December 11, 2014 – Berkeley, CA
15th Annual Advanced Patent Law Institute: Silicon Valley
December 11–12, 2014 – Palo Alto, CA
Seminar on Law in the Global Marketplace: IP and Related Issues
Spring 2015 – San Francisco, CA
4th Annual BCLT Privacy Law Forum: Silicon Valley
March 13, 2015 – Palo Alto, CA
19th Annual BCLT/BTLJ Symposium
April 17, 2015 – Berkeley, CA
Legal Frontiers in Digital Media: The 8th Annual Conference on Emerging Legal Issues Surrounding Digital Publishing and Content Distribution
May 15–16, 2015 – Mountain View, CA
8th Annual Privacy Law Scholars Conference
June 2015 – Berkeley, CA
15th Annual Intellectual Property Scholars Conference
August 2015 – Chicago, IL
COMING IN 2015
BCLT 20th ANNIVERSARY
Details to come!
Or visit the BCLT site for updates: law.berkeley.edu/bclt
This is Tefillin Shel Yad ("phylacteries of the hand") in a box.
This Is How Rabbi Jonah Rank Puts On Tefillin (& How You Can Too)!
I unwrapped it. Now there’s one droopy loop.
We gotta get this thing started!
There’s a little cap on top of the Tefillin Shel Yad.
(Some folks wear it. Some don’t. I don’t wear it these days.)
Fun fact: A lot of folks lose the cap to their Tefillin Shel Yad. If you have lost yours, don’t worry.
Created by Rabbi Jonah Gabriel Rank for Shaar Shalom Synagogue in Halifax, NS: July 20, 2018.
The cap fits so neatly into the Tefillin Shel Yad box!
That’s a real beauty right there!
The cap fits perfectly!
If only all things in life could work out this way...
Imagine your weak hand is a duck with a top hat and a band in the back.
(Call a doctor if it is.)
And slide the tefillin down your arm now.
Quack! And let us say: Amen.
This is actually considered entertainment for tefillin.
Your tefillin might want to loop clockwise, but you gotta make it counter-clockwise!
Make sure (mostly) just the black part of the strap is showing.
Keep it tight!
Bring it down to here!
Beyond your elbow; facing the heart!
It’s important to keep things very tight so they stay together. If you notice your skin turning purple or you start losing any oxygen at any point, you should loosen things a bit. Unfortunately, dead people—to my knowledge—don’t count as one of the ten adult Jews needed for a minyan.
It’s at this stage by the way that we’ll say the first of a few blessings:
ברוך אתה יהוה אלהינו מלך העולם, אשר קדשנו במצותיו וצונו להניח תפילין.
Ba-RUKH a-TAH ado-NAI elo-HEY-nu ME-lekh ha’o-LAM a-SHER kidde-SHA-nu bemitzvo-TAV vetzi-VA-nu leha-NI-ach tefi-LIN.
Blessed are You Adonai our God, ruler of the universe, who has made us holy with these mitzvot/commandments/connections and has brought us closer in this covenant in the laying of our tefillin.
Make at least one full counterclockwise loop above the elbow (and try not to let the strap touch the base of the tefillin)
If you can master this part, then... well... you can go to the next step.
In the Bible, counting sometimes led to bad things, so Jews often count "not 1, not 2…" Count (not) 7 loops (somewhat equally spaced from each other) around your arm below the elbow.
I always am satisfied with how easy this part feels.
After (not) 7 loops, just before the wrist, bring the strap from the front side of your ulna across your palm to the space between your index finger and thumb.
(Your ulna is the jutting bone at the back of your wrist, on the pinky side.)
What, you’ve never heard of an ulna? It’s definitely one of my top 250 favourite bones. Look it up when you get a chance, but not now. Now it’s tefillin time.
Make a straight line from between your index finger and thumb to the pseudo-equivalent place on your pinky’s side.
(Flipping your hand over is optional if you can do this without looking.)
If only we had shorter common names for all of these body parts... you’re probably still thinking about this ulna thing too?
Make a cinnamon roll of your remnant hand strap.
Did I mention tefillin are delicious!? (They probably actually aren’t…)
When you run out of strap to wrap around, tuck the remnant under your “cinnamon roll.”
I should tell this trick to a baker! Mmmm…. mmm…. good!
These are Tefillin Shel Rosh (the tefillin “of the head”), with a double-strap kind of situation. It looks like a tractor but is too small to ride in.
Tefillin don’t make great vehicles. Learn from my experience!
Two straps are hanging there, which you’ll have to place relatively tightly around the circumference of your head at the hairline—make sure your tefillin are well-fitted. And, as we put them on, we’re going to say another blessing:
ברוך אתה יהוה אלהינו מלך העולם, אשר קדשנו במצותיו וצונו על מצות תפילין.
Ba-RUKH a-TAH ado-NAI elo-HEY-nu ME-lekh ha'o-LAM a-SHER kidde-SHA-nu bemitzvo-TAV vetzi-VA-nu AL mitz-VAT tefi-LIN.
Blessed are You Adonai our God, ruler of the universe, who has made us holy with these mitzvot/commandments/connections and has brought us closer in this covenant in the mitzvah of tefillin.
Do better than me. Aim for the front base of the tefillin (below your “hairline”) to be exactly centred and above the eyes. Also make a face that says, “What’s a selfie?”
I looked it up by the way! A selfie is defined as “a thing that adults are not allowed to do except when making tefillin tutorials.” Phew!
Make sure the tefillin straps in the back go their separate ways after the occipital bone.
The occipital bone is (obviously) the bone by the back of your neck that juts out sorta where your mouth would be if your face were on backwards—which is an all too common problem nowadays. (I’m not a medical doctor, so I’m just guessing that backwards face is a widespread ailment.)
Oh, and make sure that the strap on the left side goes left and the side of the strap on the right goes to the right. No criss-crossing!
Straighten out the two straps from the back of your head to the front of you.
As you flatten out the tefillin against the front of your upper body (using both of your hands), you recite:
ברוך שמו כבוד מלכותו לעולם ועד.
Ba-RUKH SHEM ke-VOD malkhu-TO le'o-LAM va-ED.*
Blessed is the name of the honour of God's rulership forever and ever.
*No relation to Mr. Ed.
After undoing the “cinnamon roll” of the hand, put the strap from between your thumb and forefinger to the bottom third of the middle finger.
As I wrap the Tefillin on my hand more officially, I now recite from Genesis 39:3:
Va-YAR ado-NAV KI Ado-NAI i-TO ve-KHOL a-SHER HU o-SEH ado-NAI matz-LI-ach beya-DO.
His [Joseph’s] overlords saw that Adonai was with him, and, [in] all that he did, Adonai brought success to his hand.
This is exactly where things can, literally, fall apart.
Wrap from the bottom of the middle finger to the top of the middle finger in this sorta zig-zag, and end it by wrapping all the way around the top of the middle finger like you did for the bottom of it.
If you lose blood circulation at any point in this process, this wrapping—which I know you want to get right—is probably correct (except for the fact that you shouldn’t be harming yourself in doing the tefillin thing). Please loosen your straps, or talk to your local blood circulation authority.
Wrap from the top of your middle finger to the bottom of your index finger (on the side closest to your thumb).
It’s starting to look like we’re doing something serious here!
Wrap, along the back of your hand, from the bottom of your index finger to the bottom of your ring finger.
You're so close to the end!
Across your palm, bring the strap from your ring finger to the halfway point between your thumb and your index finger.
Let the strap overlap a lil!
I was going to insert a joke here, but I decided it wasn’t funny. Never mind me. Turn to the next page.
Wrap the strap from the halfway point between your thumb and index finger to the point that’s basically opposite it on the pinky side of your hand, and wrap it under all the way back to the halfway point between your thumb and index finger.
And then you’re gonna wrap over (later) that little oval, making another cinnamon roll again.
I know I said you’re almost done. You really are. Just look at the next page.
Wrap it from the point on your pinky’s side that is opposite the halfway point between your thumb and index finger to the point that actually is halfway between your thumb and index finger (and it should look a bit like a possibly upside-down Hebrew letter Shin [ש]).
Now make a cinnamon roll over the middle of the shin (ש).
God has a name spelled shin-dalet-yod (pronounced Shaddai). See if you can find the letters dalet (ד) and yod (י) on your hand now! (It probably won’t look exactly right; it’s nice to look though.)
I know, I know. The Yod doesn’t look that much like a Yod. And the Dalet doesn’t look that much like a Dalet. (Talk about illegible handwriting!) Traditions are traditions.
Just tuck under (and through) any leftovers to keep things tight.
CONGRATULATIONS! (Mazzal tov!) You made it to the end of putting on the *tefillin*. Do you think you can keep yourself in this exact position now for 45 minutes while holding a prayer book, adjusting your *tallit*, holding the handles of a Torah (if you get called up), and shaking the hands of people whose *tefillin* appear to be either always or never slippin’ and slidin’? In all seriousness, you might just have to adjust yourself here and there because, stationary as prayer in synagogues can be, you do move around.
Well, anyway, once you make it to the end of services, you can put away your *tefillin*, and just do the reverse of what you saw here, and make sure to put your head *tefillin* further away than the hand *tefillin* wherever you store them (such as a *tefillin* bag); that way, when you reach for your *tefillin* next time, you’ll begin again with the *Tefillin Shel Yad*.
Proportionality between variances in gene expression induced by noise and mutation: consequence of evolutionary robustness
Kunihiko Kaneko
**Abstract**
**Background:** Characterization of robustness and plasticity of phenotypes is a basic issue in evolutionary and developmental biology. The robustness and plasticity are concerned with changeability of a biological system against external perturbations. The perturbations are either genetic, i.e., due to mutations in genes in the population, or epigenetic, i.e., due to noise during development or environmental variations. Thus, the variances of phenotypes due to genetic and epigenetic perturbations provide quantitative measures for such changeability during evolution and development, respectively.
**Results:** Using numerical models simulating the evolutionary changes in the gene regulation network required to achieve a particular expression pattern, we first confirmed that gene expression dynamics robust to mutation evolved in the presence of a sufficient level of transcriptional noise. Under such conditions, the two types of variances in the gene expression levels, i.e. those due to mutations to the gene regulation network and those due to noise in gene expression dynamics were found to be proportional over a number of genes. The fraction of such genes with a common proportionality coefficient increased with an increase in the robustness of the evolved network. This proportionality was generally confirmed, also under the presence of environmental fluctuations and sexual recombination in diploids, and was explained from an evolutionary robustness hypothesis, in which an evolved robust system suppresses the so-called error catastrophe – the destabilization of the single-peaked distribution in gene expression levels. Experimental evidences for the proportionality of the variances over genes are also discussed.
**Conclusions:** The proportionality between the genetic and epigenetic variances of phenotypes implies the correlation between the robustness (or plasticity) against genetic changes and against noise in development, and also suggests that phenotypic traits that are more variable epigenetically have a higher evolutionary potential.
**Background**
Plasticity and robustness are basic concepts in evolutionary and developmental biology. Plasticity refers to the changeability of phenotypes in response to external environmental perturbations. Indeed, many important concepts in biology are concerned with the changeability of the system. This changeability depends on each phenotype: some phenotypes are more variable than others. How can such a degree of changeability be characterized quantitatively?
On the other hand, robustness is another basic concept in evolutionary and developmental biology. Here, phenotypic robustness is defined as the ability of the system to continue to function despite perturbations to it [1-7]. Phenotypes important for survival are expected to be robust, at least to some degree, to enable organisms to survive under such perturbations.
For both plasticity and robustness, there are epigenetic and genetic sources of perturbations to a biological system, which act in different time scales. Epigenetic perturbation works at a faster scale. Phenotypes are changed through noise in gene expression and developmental dynamics. Environmental variation gives another source for variability. Genetic variation, on the other
hand, works at a longer time scale through evolution. Now, is there any relationship between the changes of genetic and epigenetic origins? If a phenotype can easily change epigenetically, through development or the environment, can it also change more easily genetically? Similarly, if a phenotype is robust to developmental perturbations, is it also robust to genetic variation through evolution? For a general dynamical system, no such relationship would be expected. However, as a biological system is a product of evolution, some relationship between genetic and epigenetic robustness may be expected.
The relationship between evolution and robustness has long been debated since the pioneering studies by Schmalhausen and Waddington [8,9]. Waddington coined the term "genetic assimilation," in which phenotypic changes induced environmentally are later assimilated into genetic changes through evolution. Although important, these pioneering studies mostly emphasized the qualitative aspects of the relationship between robustness and evolution. However, advances in quantitative studies of cell biology have facilitated the quantitative assessment of this relationship. In particular, fluctuations of phenotypes (e.g., gene expression levels), which have been measured extensively through fluorescent-protein techniques [10-13], can provide a tool to explore such a relationship.
In quantitative terms, robustness can be considered as a measure of the insensitivity of phenotypes to external disturbances and plasticity as a measure of changeability of phenotypes. On the other hand, fluctuation is the degree of phenotypic variation induced by perturbations. Hence, the phenotypic fluctuation (variance) increases with a decrease in robustness, and vice versa. Thus, the variance can serve as an index (inverse) for robustness, and also for plasticity. Now, the question concerning robustness and evolution can also be posed in terms of phenotypic variances. How is the evolution speed correlated with the variance? Does the variance increase or decrease through evolution?
Indeed, findings of previous studies involving artificial selection experiments with bacteria suggest that the rate of evolution, i.e., the increase in the fitness per generation, is proportional to the phenotypic variance among isogenic individuals [14,15]. This relationship, originally defined on the basis of a macroscopic distribution theory, was also confirmed in in-silico experiments by using gene regulation networks (GRNs) and metabolic networks [16,17].
This observed relationship is noteworthy because evolvability is characterized by non-genetic variation of phenotypes. Even if the rate of genetic change (mutation or recombination) is identical, the rate of evolution can differ according to this variation. To elucidate this point, recall again that there are 2 sources of phenotypic variance, genetic and epigenetic. Quantitatively, the former is characterized by the phenotypic variance in a heterogenic population due to genetic modifications, denoted as $V_g$, whereas the latter, denoted here as $V_{ip}$, is the phenotypic variance in an isogenic population due to noise during the developmental process. The former reflects the structural robustness of the phenotype, i.e., the rigidity of the phenotype against changes induced by genetic mutations, whereas the latter reflects the robustness of the phenotype against the stochasticity encountered during the developmental process or that induced by environmental changes. (Phenotypic variance of non-genetic origin is traditionally discussed as the environmental variance $V_e$. Here, we are concerned with the variance due to fluctuation during the developmental process, and thus adopt this notation, but one could regard this variance as a component of $V_e$ [18,19].)
It is obvious that evolution speed, which indicates the change in phenotype due to genetic changes, is correlated with $V_g$, as was demonstrated by Fisher [18,20]. Thus, the proportionality between the evolution speed and $V_{ip}$ suggests the proportionality between $V_g$ and $V_{ip}$ throughout the course of evolution. Indeed, this relationship was confirmed by the evolutionary stability theory and numerical simulations [14,16], which imply that robustness to noise and to mutation are correlated.
So far, the proportionality between $V_{ip}$ and $V_g$ of a given fitness *through the course of evolution (over generations)* has been confirmed. Now, let us come back to the original question of a possible relationship between genetic and epigenetic robustness (or changeability) over many phenotypic traits. To discuss such a problem, we need to study the relationship of the variances $V_{ip}$ and $V_g$ over many phenotypes, or expressions of many genes, for a given individual (not over the evolutionary course). This is the focus of the present paper.
Phenotypes, or gene expression levels, are generally associated with several genes, a phenomenon known as epistasis. Even if a fitness value is directly related to a single phenotype or the expression of a single gene, many genes may modify each other's expression. These expressions are interrelated through a complex gene regulation network; therefore, the nature of their correlation may change during the course of evolution. This may impose an evolutionary constraint on the changes in expression levels of genes. Then, does a gene with a higher fluctuation in its expression level have a greater evolutionary potential than others? Is there a correlation between phenotypic changes induced by epigenetic noise in gene expression dynamics and those induced by genetic variation? We will demonstrate such proportionality over the expressions of many genes, rather than the previously studied proportionality of the variances through the evolutionary course.
Using a numerical model of GRNs and simulating their evolutionary changes required to increase a given level of fitness, we first found that the rate of evolution of the expression levels of several genes was highly correlated with (or roughly proportional to) the respective variances of these genes. Next, we presented evidence for the proportionality between the 2 types of variances of gene expression, i.e., of genetic and epigenetic origins, over many genes. This proportionality was achieved after a selection process under a given fitness condition and holds whenever the phenotype of the evolved system is robust to transcriptional noise in gene expression dynamics and to genetic mutation. Further, the generality of this relationship was confirmed by studying a variety of models and an evolutionary stability theory of multivariate distributions. We also discuss some experimental evidence for this relationship.
**Results**
**Modeling Strategy**
Here, we considered the evolution of a simple model for "development." For this purpose, we postulated the following conditions for development and evolution.
(i) The set of variables $x_i$ ($i = 1, ..., M$) represents the expression levels of $M$ genes. These variables take continuous values, which we set such that if gene $i$ is expressed, then $x_i > 0$ and if not, $x_i < 0$. (Choice of a threshold at zero is a matter of convenience. Indeed, we also carried out simulations of a model in which the expression $x_i$ takes only non-negative values with certain positive threshold values for activation. Results to be discussed were not altered by this change).
(ii) Gene expressions mutually activate or inhibit each other and are regulated by the GRN. The temporal evolution of the gene expression level $x_i$ is generally not simple because the value of $M$ is large. The phenotype of each individual is defined by the set of gene expression level $x_i$ after evolution for "developmental time," which starts from the time point with a given initial gene expression level (set as $x_i < 0$ for all $i$, i.e., none of the genes is expressed). The developmental time for the system is set to a large period to allow gene expression patterns to settle into an attractor state.
(iii) Gene expression dynamics are noisy. Owing to their dependence on chemical reactions involving molecular collisions, gene expression dynamics are also stochastic in nature. In particular, since the number of molecules (e.g., mRNA and proteins) involved in gene expression is not extremely large, a deviation from the average rate equation for the reaction is possible. This deviation is represented as the noise applied to the average gene regulation expression dynamics. The amplitude of noise is denoted as $\sigma$.
(iv) Genotype: Depending on the genotype, the structure of GRN changes, i.e., network paths that determine the genes responsible for activating (or repressing) a given gene. This interaction between genes is represented by a connection matrix $J_{ij}$, which takes a value of 1 (if activating), -1 (if repressing), or 0 (if there is no influence).
(v) Population: A population of individuals with different genotypes, i.e., with each individual having a slightly different GRN—the matrix $J_{ij}$. This gives a genotypic distribution.
(vi) Fitness: The pattern of expression in $x_i$ is determined on the basis of gene expression dynamics. Fitness is a function of the expression pattern of a subset of $x_i$, i.e., the expression of a given set of "target" genes $i = 1, ..., k$. If the pattern of $x_i$ is closer to the prescribed pattern of genes, the fitness is higher. We take the prescribed pattern as "all on" for the target genes, unless otherwise mentioned. If all components of $x_i$ ($i = 1, ..., k$) are positive, the fitness is set at 0, and it is decreased by 1 when the number of negative $x_i$ is increased by 1.
(vii) Mutation and selection: Offspring are produced according to the fitness: individuals with low fitness cannot produce offspring. The selection process according to the fitness thus progresses, while mutation introduces a change in the GRN, represented as 1,-1, or 0, in a few elements of the matrix $J_{ij}$. The fraction of the elements altered is given by the mutation rate.
We studied numerical models based on the above-mentioned characteristics. (For details see Method). Note that due to the complex nature of gene expression dynamics, GRN with a higher fitness value is generally rare. Further, since the expression dynamics are subjected to perturbations by noise, the "goal" of reaching the highest fitness may not be easily achieved. However, through the evolutionary processes, including natural selection and mutations, a GRN with the highest fitness value is generated. Individuals with the highest fitness value in a population evolve over some generations to reach the highest fitness value, i.e., $x_i > 0$, for the target genes $i = 1, 2, ..., k$.
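The developmental and evolutionary scheme in (i)–(vii) can be sketched as a toy simulation. This is a minimal illustration, not the paper's exact equations (those are given in the Method): the tanh update rule, the step size, truncation selection, the helper names `develop`, `fitness`, and `mutate`, and all parameter values below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 16, 4          # total genes, target genes (toy values)
SIGMA = 0.1           # transcriptional noise amplitude (iii)
T_DEV = 200           # developmental time steps (ii)
POP, MU = 32, 0.02    # population size (v), per-element mutation rate (vii)

def develop(J, sigma, rng):
    """Noisy gene expression dynamics: x_i relaxes toward the tanh of
    its summed regulatory input; gene i counts as 'on' when x_i > 0."""
    x = -np.ones(M)                                   # all genes off initially
    for _ in range(T_DEV):
        drive = np.tanh(J @ x / np.sqrt(M))
        x += 0.1 * (drive - x) + sigma * np.sqrt(0.1) * rng.normal(size=M)
    return x

def fitness(x):
    """0 when all K target genes are on; -1 per unexpressed target (vi)."""
    return -int(np.sum(x[:K] <= 0))

def mutate(J, mu, rng):
    """Rewrite a random fraction mu of the matrix entries to -1, 0, or 1."""
    J = J.copy()
    mask = rng.random(J.shape) < mu
    J[mask] = rng.choice([-1, 0, 1], size=int(mask.sum()))
    return J

# A few generations: develop, rank by fitness, top half reproduces.
pop = [rng.choice([-1, 0, 1], size=(M, M)) for _ in range(POP)]
for gen in range(5):
    fits = [fitness(develop(J, SIGMA, rng)) for J in pop]
    order = np.argsort(fits)[::-1]
    parents = [pop[i] for i in order[:POP // 2]]
    pop = [mutate(p, MU, rng) for p in parents for _ in range(2)]
```

In a real run the generation loop would continue until an individual reaches fitness 0, i.e., all target genes on.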
We previously reported that the fitness distribution in a heterogenic population undergoes a transition when the noise level is increased [17,21]. Before we discuss the main results of the present paper from the next subsection, we briefly summarize the earlier numerical result in this subsection.
When the noise level $\sigma$ was below a given threshold ($\sigma < \sigma_c$), some individuals within the population might take lower fitness values, thereby inducing a considerable difference in the distribution of fitness over different genotypes. Some mutants derived from individuals with genotypes having high fitness values might take much lower fitness values, even after many generations of evolution. However, when the noise level was greater than the threshold \((\sigma_c < \sigma)\), the distribution of fitness was sharp, with a concentration of mutants with high fitness values. Thus, low-fitness mutants were eliminated through the evolution. (If the noise level was too high, the expression levels were not fixed in time and increased or decreased over time. We did not examine such cases.)
Accordingly, an increase in the noise strength led to a transition in the robustness to mutation. GRNs that evolved under a low noise level were not robust to mutation, whereby some mutants could not sustain the high fitness value. In contrast, robustness to mutation was achieved only for GRNs evolved under a high noise level \(\sigma > \sigma_c\).
This counterintuitive robustness to mutation for \(\sigma > \sigma_c\) can be explained as follows. According to the dynamics of GRN that evolves under higher noise level, a large portion of the initial conditions reach target attractors that give the highest fitness values, thereby achieving robustness against noise, while for those evolved under \(\sigma < \sigma_c\), only a tiny fraction reaches target attractors. The developmental landscape for \(\sigma > \sigma_c\) gives a global, smooth attraction to the target, whereas the landscape evolved at \(\sigma < \sigma_c\) is rugged. Now, consider mutation to a network to slightly change gene expression dynamics. In the smooth landscape with global attraction, such a perturbation will cause little change to the final expression pattern, while under the dynamics with rugged developmental landscape, it often destroys the attraction to the target attractor. In other words, robustness to mutation evolves only under robustness to noise during development.
For \(\sigma > \sigma_c\), robustness to both noise and mutation evolved. We computed robustness to both factors using the 2 types of variances, \(V_{ip}\) and \(V_g\): the former was the noise-induced variance in the distribution of the fitness (i.e., the number of "on" target genes) in a population of isogenic individuals, and the latter was the variance in the distribution of fitness in a heterogenic population. The latter was obtained by first computing the mean fitness value for each genotype, and then measuring the variance of these mean values over the heterogenic population. The 2 variances of the fitness decreased proportionally throughout the course of evolution for \(\sigma > \sigma_c\), in accordance with the principle of the evolution of robustness (see [17,21]).
**\(V_g\)-\(V_{ip}\) relationship over genes**
Apart from the fitness level, the expression level \(x_i\) of each gene \(i\) was also distributed, even for an isogenic population with the same \(J_{ij}\); this was because of the stochasticity in the gene expression dynamics. Similar to the variances for the fitness, the phenotypic variance \(V_{ip}(i)\) for each gene \(i\) in an isogenic population is defined as the variance of the expression of each gene \(i\), with \(X_i = \text{Sign}(x_i)\), over an isogenic population. On the other hand, the mean expression level \(\overline{X_i}\) over the isogenic population depended on each genotype (i.e., the matrix \(J_{ij}\)). The variance computed from the distribution of \(\overline{X_i}\) in this heterogenic population then gives the genetic variance \(V_g(i)\) for each gene \(i\).
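Given expression data arranged as genotypes × isogenic clones × genes, the two variances can be estimated as a within-genotype variance and a variance of genotype means, following the definitions above. The helper name `variances` and the synthetic array shapes and noise levels below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def variances(expr):
    """Decompose per-gene expression variance.

    expr has shape (n_genotypes, n_clones, n_genes), with n_clones
    isogenic individuals per genotype.  V_ip(i): within-genotype
    (noise-induced) variance, averaged over genotypes.  V_g(i):
    variance of the genotype-mean expression levels over the
    heterogenic population.
    """
    v_ip = expr.var(axis=1).mean(axis=0)
    v_g = expr.mean(axis=1).var(axis=0)
    return v_ip, v_g

# Synthetic check: genotype means spread s_g, clone-to-clone noise s_n.
rng = np.random.default_rng(1)
s_g, s_n = 0.5, 0.2
means = rng.normal(0.0, s_g, size=(200, 1, 10))
expr = means + rng.normal(0.0, s_n, size=(200, 50, 10))
v_ip, v_g = variances(expr)   # v_ip ~ s_n**2, v_g ~ s_g**2
```

Here averaging the within-genotype variance over genotypes is one reasonable choice; restricting to a single genotype gives that genotype's \(V_{ip}(i)\) directly.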
As mentioned above, our model also accounted for many non-target genes that do not contribute to fitness. The expression level \(x_i\) of such non-target genes \(i\) could be either positive or negative because there was no selection pressure directed at fixing their expression level. However, we found that the expression levels of many non-target genes became fixed to positive or negative values over the course of evolution when \(\sigma > \sigma_c\). To achieve robust fitness, the expression levels of some of the non-target genes were also fixed to either the on or off status, with consistency across individuals in the gene expression status. Hence, the average expression level \(\langle \overline{X_i} \rangle\) in a heterogenic population increased or decreased to either positive or negative values (\(\overline{\cdots}\) denotes the mean over an isogenic population, while \(\langle \cdots \rangle\) denotes the average over a heterogenic population).
We computed the rate of evolution of each gene expression level over a given number of generations, as the rate of either increase or decrease of the average expression level \(\langle \overline{X_i} \rangle\) in a heterogenic population. It was computed after evolution for some generations to achieve an increase in fitness. We then examined the validity of the relationship between the evolution speed and the fluctuations over genes by plotting \(|\Delta X_i|\) against \(V_{ip}(i)\), where \(\Delta X_i\) is the change in the average expression level \(\langle \overline{X_i} \rangle\) of a given gene over some generations (20, in Figure 1). The proportionality was found to be valid over different genes, both target and non-target, as shown in Figure 1. Genes with higher variances would have a higher potential to evolve. (In Figure 1, the target genes had lower variances and evolution speeds over these generations. This is because the fitness had already increased by this generation, fixing their expression levels to positive values, so there was little room for further increase.)
Considering the possible generalization of Fisher's theorem [18,20], which states that the evolution speed is proportional to \(V_g\), and applying it to the expression levels over genes, we may expect that the evolution speed of each gene expression level is proportional to \(V_g(i)\). Then, the proportionality between \(V_g(i)\) and \(V_{ip}(i)\) over the genes can be expected from the proportionality between the evolution speed and \(V_{ip}(i)\). Considering this, we examined the relationship between \(V_g(i)\) and \(V_{ip}(i)\) for several values of noise levels (Figure 2). From the plots, we obtained the following findings:
(i) Proportionality between $V_{ip}(i)$ and $V_g(i)$ was satisfied over many genes, i.e., $r_i \equiv V_g(i)/V_{ip}(i)$ took a common value $\rho (< 1)$ for many genes $i$, when the evolution of robustness progressed at $\sigma > \sigma_c$.
(ii) Target genes always lay on the proportional line, with relatively low values of $V_{ip}(i)$ (and accordingly, $V_g(i)$), while the variances of many non-target genes also lay on the same proportionality line $r_i \sim \rho$. Variances of only a few non-target genes did not exhibit the above-mentioned proportional relationship, and the ratios $r_i$ for such genes were scattered between $\rho$ and 1. (See also Additional file 1, Figure S1).
(iii) The fraction of such genes that fitted on the single proportional line increased with the noise strength $\sigma$. As the noise level was lowered, an increase was noted in the fraction of genes showing expression variances that deviate from the abovementioned proportional relationship. At around the threshold noise level $\sigma_c$, most genes approached $r_i \sim 1$. (See Additional file 1, Figure S1, for noise dependence of the variances $V_{ip}(i)$ and $V_g(i)$).
We then plotted the histogram of $r_i$ over all genes $i$, sampled over a few sets of generations (see Figure 3a, plotted in log-scale for $r_i$). The figure showed that the peak at $\rho$ was more prominent with an increase in $\sigma$. On the other hand, the broader distribution ranging between $r_i \sim \rho$ and 1 became more prominent as the noise level decreased, until the distribution around $r_i \sim 1$ dominated at $\sigma \sim \sigma_c$.
The proportionality between $V_{ip}(i)$ and $V_g(i)$ was not a property of every gene expression dynamics but was evident only after the system achieved robustness through evolution. The fraction of genes showing a proportional relationship increased during the course of evolution. Indeed, for $\sigma > \sigma_c$, the peak at $r_i \sim \rho$ grew over generations until it approached the distribution shown in Figure 3a (see Figure 3b). Summing up, the evolution of robustness was characterized by the formation of the peak at $\rho < 1$ in the distribution of $r_i = V_g(i)/V_{ip}(i)$.
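The histogram analysis behind Figure 3 can be illustrated on synthetic variances. The ratio is taken here as $r_i = V_g(i)/V_{ip}(i)$, the direction for which the common value $\rho$ lies below 1 (as used in the environmental-switch analysis later); the mixture proportions and $\rho = 0.3$ are invented for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic variances mimicking Figure 3: 80% of genes share a common
# ratio rho < 1, the rest scatter between rho and 1 (all values invented).
rho, n_genes = 0.3, 500
v_ip = rng.lognormal(mean=-2.0, sigma=0.5, size=n_genes)
ratio = np.where(rng.random(n_genes) < 0.8,
                 rho, rng.uniform(rho, 1.0, size=n_genes))
v_g = ratio * v_ip

# Histogram of log10(r_i): the dominant bin marks the common ratio rho.
r = v_g / v_ip
counts, edges = np.histogram(np.log10(r), bins=30)
k = np.argmax(counts)
peak_center = 10 ** ((edges[k] + edges[k + 1]) / 2)
```

Plotting `counts` against the bin centers in log scale would reproduce the qualitative shape of Figure 3a: a sharp peak near $\rho$ plus a broad tail toward 1.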
The proportionality between the 2 variances implies the existence of a correlation between the noise- and mutation-induced changes in the gene expression statuses (see also Additional file 1, Figure S2 for the correlation in variances). Such a correlation was observed by computing the frequency of errors, i.e., changes in the on/off status of gene expression due to noise (without a change in the network) and as a result of mutation to the network (without adding the noise). The frequency of these 2 errors was highly correlated over genes for the GRN evolved at $\sigma > \sigma_c$ (see Figure 4). In other words, genes that were switched on or off more frequently by noise were also switched more frequently by mutation. This was in strong contrast with the GRN evolved at $\sigma < \sigma_c$ where no such correlation was observed (see Additional file 1, Figure S3). To sum up, the changeability of each gene expression level by noise and mutation was correlated, for a robust evolved system.
**Generality**
We confirmed that the proportionality between phenotypic variances of genetic and epigenetic origins held true for a system with evolved robustness, by simulating our model and its extended versions under several conditions. For all the conditions below, we confirmed (a) the transition to robust evolution with the increase in noise level and (b) the proportionality between $V_{ip}(i)$ and $V_g(i)$ throughout the course of evolution and over many genes $i$.
(i) Against the change in $k$ (target set) and $M$ (number of genes): With an increase in the fraction of target genes, evolution to the fittest state became increasingly difficult, and the noise level for robust evolution $\sigma_c$ was slightly increased, but the proportionality was valid for $\sigma > \sigma_c$ (see Additional file 1, Figure S4a).
(ii) Considering that the density of connection paths in the actual GRN is rather low, the validity of the results was verified by decreasing the path rates in the model. The conclusion remained unchanged as long as the network was percolated. The fraction of genes deviating from the proportionality slightly increased as the path rate in the network decreased (e.g., to 0.05 per gene) (see Additional file 1, Figure S4b).
(iii) Even if the noise level depended on each gene $i$, the conclusion was valid (see Additional file 1, Figure S5). After evolution, the variance in the expression level of each gene was not correlated with the noise level of each gene, implying that the fluctuation of gene expression was mainly controlled by (evolved) gene-to-gene regulation $J_{ij}$.
(iv) To consider the influence of extrinsic rather than intrinsic noise, we also introduced the same level of noise to all gene expressions. Again, the robustness and the proportionality between the variances persisted as long as the level of the intrinsic noise was larger than $\sigma_c$. Although the "extrinsic" (common) noise also contributed to the evolution of robustness, it played only a minor role in this respect. (See Additional file 1, Figure S6).
(v) With variation in the environmental condition, the fitness condition for the target genes varied accordingly. By switching the condition for the target genes from 'on' to 'off' after some generations, we verified whether the evolution of the GRN copes with this environmental variation. When the condition was switched, the variances of both epigenetic and genetic origins, as well as $r_i = V_g(i)/V_{ip}(i)$, increased to adapt to the novel environmental condition. Once the adaptation was achieved, the variances as well as $r_i$ decreased, to regain robustness (Figure 5).
When the target phenotype was changed periodically, we observed that the increase and decrease of the variances were consistently repeated when the noise level was near the transition value ($\sigma_c$), where rapid adaptation to a new environment and robustness of the phenotype were compatible.
(vi) We also extended our GRN model to account for diploids with sexual recombination. Here, each individual had a pair of matrices $J^1_{ij}$ and $J^2_{ij}$, and the gene expression dynamics were given by the summation of the two matrices instead of the equation in the Method. By considering recombination of the two matrices from a parent, we evolved GRNs to achieve higher fitness. The proportionality of the two variances was again confirmed; another noteworthy finding was that, in heterozygotes, robustness was further enhanced (the variances of expression were suppressed).
Further, the proportionality between the variances is not confined to the present model. Indeed, in a catalytic-reaction-network model, such a proportionality evolved between the variances of fitness [16] and of chemical abundances, whereas a robustness transition with increasing noise was confirmed in an abstract spin model of protein folding [22]. We expect the relationship to hold as long as the fitness is determined through complex developmental dynamics with noise, and high-fitness states are not easily achieved, so that an error catastrophe appears as the mutation rate increases.
**A phenomenological distribution theory**
Considering the generality of the proportionality between the phenotypic variances of genetic and epigenetic origins, we provide a distribution theory for it without going into detailed setups of the model. We adopt the evolutionary stability argument first introduced for the proportionality between $V_p$ and $V_g$ (of the fitness) through the course of evolution [16,23].
We consider a multivariate distribution function with regard to the gene expression level $x_i$ and the genotype. Although multiple genes are involved, we assume that the genotype is represented by a scalar parameter $a$ (e.g., the Hamming distance from the fittest genetic sequence). Now, we assume evolutionary stability, in which the distribution maintains a single peak through evolution. By a suitable transformation of variables, this peak position is taken to be 0; further, the form is approximated by a Gaussian distribution around the origin, giving the following equation
$$P(x_i, a) = N_0 \exp\left(-\frac{x_i^2}{2\alpha_i} + C_i x_i a - \frac{a^2}{2\mu}\right). \quad (1)$$
with $N_0$ a normalization constant such that $\int\!\!\int P(x_i, a)\, dx_i\, da = 1$. Here, $\alpha_i \equiv V_{ip}(i)$ is the variance of the gene expression level, while $\mu$ is the mutation rate that determines the variance in the genotypes. Only a linear dependence of $x_i$ on $a$ is considered, neglecting higher-order terms. Eq. (1) can be rewritten as
$$P(x_i, a) = N_0 \exp\left(-\frac{(x_i - C_i \alpha_i a)^2}{2\alpha_i} - \frac{1}{2}\left(\frac{1}{\mu} - C_i^2 \alpha_i\right)a^2\right) \quad (2)$$
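For completeness, eq. (2) follows from eq. (1) by completing the square in $x_i$:

$$-\frac{x_i^2}{2\alpha_i} + C_i x_i a - \frac{a^2}{2\mu} = -\frac{\left(x_i - C_i \alpha_i a\right)^2}{2\alpha_i} + \frac{C_i^2 \alpha_i a^2}{2} - \frac{a^2}{2\mu} = -\frac{\left(x_i - C_i \alpha_i a\right)^2}{2\alpha_i} - \frac{1}{2}\left(\frac{1}{\mu} - C_i^2 \alpha_i\right) a^2.$$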
Now, recall the stability of the distribution $P(x_i, a)$, i.e., whether it has a peak in the space with $x_i$ and $a$. This condition is given by $\frac{1}{\mu} - C_i^2 \alpha_i > 0$, i.e., $\mu < \mu_{max}^i = \frac{1}{C_i^2 \alpha_i}$. For the mutation rate larger than $\mu_{max}^i$, the distribution is flattened. In this case, the
peaked distribution concentrated at a certain gene expression level is no longer sustained. This can be interpreted as a kind of error catastrophe, originally introduced by Eigen [24], i.e., a collapse of the localized distribution of functional genes. The critical mutation rate for the error catastrophe is $\mu = \mu_{\text{max}}^i$, which can in principle take an independent value for each gene $i$. However, GRNs that have achieved robustness to noise and mutation through evolution may impose constraints among the expressions of different genes.
To achieve higher robustness, the error threshold for the fitness should be postponed to a higher mutation rate. When the system has achieved robustness to noise and mutation through evolution, the fitness level changes only slightly (i.e., remains almost neutral) against a considerable amount of mutational change in the GRN [21]. Up to this amount of mutation, the genes rarely change their expression status (on or off). This introduces a constraint on the change in gene expression against mutations.
If each non-target gene were switched on or off independently of the others, errors in the expression of the target genes, which can be influenced by each switch, would occur frequently. Indeed, the evolved robust GRN imposes some constraint on the errors caused by noise and mutation, as plotted in Figure 4. To achieve higher robustness, there must be some correlation between the changes in the expression statuses of genes. Through suitable mutual interactions ($J_{ij} \neq 0$) among genes, the error frequency in the target gene expression can be reduced. This reduction works up to the mutation rate $\mu_{\text{max}}^i$; for $\mu \sim \mu_{\text{max}}^i$, an error in the expression status of one gene can propagate to the expression of many other genes, in turn inducing further changes in expression status. Hence, for a robust network with a higher error-threshold mutation rate, many genes may be switched on or off simultaneously within a GRN once an error catastrophe in one gene expression occurs. Accordingly, many genes $i$ are expected to share a common critical mutation rate, i.e., $\mu_{\text{max}}^i$ is roughly equal for many genes when robustness has evolved (at $\sigma > \sigma_c$). In fact, we computed the expression pattern of GRNs by adding a larger number of mutations to the evolved GRN, to obtain the variances $V_{ip}(i)$ and $V_g(i)$. When the original network was evolved under high-noise conditions, the variances touched the line $V_{ip}(i) = V_g(i)$ simultaneously over many genes as the mutation rate was increased (see Figure 6). The flattening of the distribution occurred at similar mutation rates over many genes.
Following this argument, we may expect that
$$\mu_{\text{max}}^i = (C_i^2 \alpha_i)^{-1} \approx \text{const. (independent of } i \text{) over many genes}$$ \hspace{1cm} (3)
when robustness is evolved (i.e., at $\sigma > \sigma_c$). Note that $\overline{x_i}$, the mean of $x_i$ for a given genotype $a$, is given by $C_i \alpha_i a$ according to eq. (2). The variance of $\overline{x_i}$ due to this "genetic change" is given by the distribution of $a$. Thus, we get
$$V_g(i) = <(\delta \overline{x_i})^2> = C_i^2 \alpha_i^2 <(\delta a)^2>.$$ \hspace{1cm} (4)
Since $V_{ip}(i) = \alpha_i$, we get
$$r_i = V_g(i)/V_{ip}(i) = C_i^2 \alpha_i <(\delta a)^2>,$$ \hspace{1cm} (5)
which, according to eq. (3), is independent of gene $i$. Hence, the proportionality over genes is explained by a common error-catastrophe threshold shared across different gene expression levels.
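The chain from eq. (2) to eq. (5) can be checked numerically: sample genotypes $a$, draw $x_i$ around the conditional mean $C_i \alpha_i a$, and compare the variance of that mean ($V_g$) with the variance around it ($V_{ip}$). A minimal sketch; the parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative values (not from the paper) for C_i, alpha_i = V_ip(i),
# and the genotype variance <(delta a)^2> set by the mutation rate.
C, alpha, var_a = 1.5, 0.4, 0.1
rng = np.random.default_rng(0)

n = 200_000
a = rng.normal(0.0, np.sqrt(var_a), size=n)      # genotype samples
x = rng.normal(C * alpha * a, np.sqrt(alpha))    # x_i | a, per eq. (2)

V_g = np.var(C * alpha * a)       # variance of the conditional mean (genetic origin)
V_ip = np.var(x - C * alpha * a)  # variance around the mean (epigenetic origin)

r = V_g / V_ip
print(r, C**2 * alpha * var_a)    # both ~0.09: eq. (5), r_i = C_i^2 alpha_i <(da)^2>
```

The estimated ratio matches the closed form $C_i^2 \alpha_i \langle(\delta a)^2\rangle$ of eq. (5) to within sampling error.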
As stated earlier, this proportionality over genes is not a general property of (gene expression) dynamical systems, but emerges only when evolution occurs under a sufficient level of noise (see Figure 3b). The use of the distribution function and the assumption of stability imply that we are concerned with the stationary distribution attained after evolution has progressed. As the noise level is reduced and robustness decreases, the fraction of genes sharing a common value $V_g(i)/V_{ip}(i) = \rho$ decreases. This decrease is interpreted as a decrease in the number of genes that share a common error-threshold value.
**Discussion**
The relationship between the variances over genes, rather than over the evolutionary course, will be easier to confirm experimentally, because in this case the variances need not be traced across many generations. By directly measuring the isogenic phenotypic variance and the mutational variance over many genes (proteins), the correlation between the two can be examined. Although direct experimental support is not yet available, recent studies by Landry et al. [25] on such variances in yeast suggest the existence of a correlation between the two types of variances (see also [26]). They measured "expression noise" for each gene as the variance of its expression in isogenic organisms, and "mutational variance" as the variance of the change in the expression levels of genes after the occurrence of mutations. The former corresponds to $V_{ip}(i)$, while the latter correlates with $V_g(i)$, since both measure variations in phenotype (gene expression) induced by genetic changes. Although the experimental data are scattered, a positive correlation is noted between expression noise and mutational variance, consistent with the inferred proportionality between $V_{ip}(i)$ and $V_g(i)$. Note that this proportionality holds only for a set of genes whose expression levels are mutually related and directly or indirectly related to fitness, whereas the experimental data cover all genes. Indeed, by choosing a set of genes with stronger mutual relationships, the correlation between the variances increases [27].
According to the theoretical argument presented here, the phenotype variable is not restricted to gene expression level but can represent any trait. Stearns et al. carried out selection experiments on *Drosophila melanogaster* under several fitness conditions, such as age at eclosion, weight, and lifespan. They measured the variance of these phenotypic traits within lines (corresponding to $V_{ip}(i)$) and among lines (corresponding to $V_g(i)$). Interestingly, the two types of variances (even after being normalized by the mean) showed remarkable proportionality over different phenotypic traits (see Figure 2 of [28]), in agreement with our theory. Note that they used a population selected on the basis of certain phenotypic traits, as in our model and theory. This may explain why the observed proportionality between the two variances is clearer than that in [25].
**Conclusion**
The characterization of robustness, evolvability, and plasticity [1,29-31] is an important issue in evolutionary and developmental biology; however, studies on this issue are often qualitative. In the present study, we have demonstrated that phenotypic fluctuations provide quantitative measures of these properties. Consider a population of organisms evolved under a single fitness condition, where the phenotypes that directly or indirectly influence the fitness are given as a result of (gene expression) dynamics under noise (determined by transcriptional networks). Through selection and mutation, the rules for the dynamics (i.e., the transcriptional networks) evolve, leading to the achievement of a higher fitness level. Previously, we defined $V_{ip}$ as the variance of the fitness within an isogenic population, and $V_g$ as the variance of the average fitness within a heterogenic population, and obtained $V_{ip} \propto V_g \propto$ evolution speed through the course of evolution [16,17,23].
In the present paper, we defined the variance of each gene expression level $i$ due to noise as $V_{ip}(i)$, and that due to mutation as $V_g(i)$. The conclusions of the present paper are summarized as follows:
(1) For a population of organisms at a given generation, after evolution over some generations, $V_{ip}(i) \propto V_g(i)$ for most genes $i$. In other words, $r_i = V_g(i)/V_{ip}(i)$ took a common value ($\rho < 1$) over many genes, and the number of such genes increased as the robustness of fitness increased. (The total phenotypic variance is given by $V_{ip}(i) + V_g(i)$ if the two variances add independently. In this case the heritability [18,19], defined as the ratio of $V_g(i)$ to the total phenotypic variance, is given by $r_i/(1 + r_i)$. Hence, the heritability takes a common value for mutually correlated traits in a population evolved under a fixed, single fitness condition.) The previous relationship, which holds through the evolutionary course, implies that organisms with larger phenotypic variances have higher rates of evolution. Relationship (1) found here implies that genes (or phenotypic traits) with larger fluctuations have higher evolution speeds.
(2) As the fraction of genes sharing a common value $r_i = \rho < 1$ increased, the degrees of freedom to change gene expressions independently by mutation decreased. This increase in correlated change led to an increase in the robustness of fitness to mutation and noise. On the other hand, the expression of genes with larger $r_i \sim 1$ was easily switched by noise or mutation, providing plasticity against environmental as well as mutational change (see also Figure 5).
The generality of our results was confirmed by several extensions of the model, including environmental fluctuations, gene-dependent noise amplitudes, diploids with recombination, and so forth, as well as by a catalytic-reaction-network model. We note that the correspondence between changes induced by noise in development and those induced by mutation was the source of the correlation between $V_{ip}$ and $V_g$ in our study. Indeed, Waddington coined the term "genetic assimilation" for a process in which environment-induced phenotypic changes are subsequently embedded into genes [9]. The proportionality among phenotypic plasticity, $V_{ip}(i)$, and $V_g(i)$ can be regarded as a quantitative expression of this genetic assimilation.
**Method**
Simplified gene expression dynamics with sigmoidal input-output behavior [32,33] are adopted here, although simulations of several other forms of biological networks give essentially the same result. In this model, the dynamics of a given gene expression level, $x_i$, are described by the following:
$$dx_i / dt = \gamma \left\{ \tanh\left[\beta \sum_{j=k+1}^{M} J_{ij} x_j\right] - x_i \right\} + \sigma \eta_i(t), \quad (6)$$
where $J_{ij} \in \{-1, 0, 1\}$, and $\eta_i(t)$ is Gaussian white noise with $<\eta_i(t)\eta_j(t')> = \delta_{i,j}\delta(t-t')$. $M$ is the total number of genes, and $k$ is the number of target genes that determine fitness. The summation only over $j > k$ is introduced to eliminate possible influences from the target genes, which might otherwise fix other gene expressions; without this restriction, i.e., with summation over all genes, the conclusions of the present numerical results are unchanged. In general, the matrix $J_{ij}$ is asymmetric. The noise amplitude $\sigma$ determines the stochasticity in gene expression. The initial condition is $(-1,-1,\ldots,-1)$; i.e., all genes are off. The fitness $F$ is determined by how many of the "target" genes are on after a sufficient time, i.e., the number of $i$ such that $x_i > 0$ for $i = 1, 2, \ldots, k < M$. Because the model includes a noise component, the fitness fluctuates from run to run, which leads to a distribution of $F$ and $x_j$ even among individuals sharing the same gene regulatory network. For each network, we compute the average fitness $\bar{F}$ over $L$ runs.
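A minimal Euler-Maruyama integration of eq. (6) can make the setup concrete. The sketch below is an illustrative implementation: the integration time, step size, and the reduced number of runs $L$ are our choices, not values from the paper.

```python
import numpy as np

def run_network(J, k, sigma, beta=7.0, gamma=0.1, T=50.0, dt=0.01, rng=None):
    """One noisy run of eq. (6) from the all-off state.

    Returns the fitness: the number of target genes (indices 0..k-1 here,
    genes 1..k in the paper's notation) with x_i > 0 after time T."""
    rng = rng or np.random.default_rng()
    M = J.shape[0]
    x = -np.ones(M)                                # initial condition: all genes off
    sqdt = np.sqrt(dt)
    for _ in range(int(T / dt)):
        drive = np.tanh(beta * (J[:, k:] @ x[k:]))  # input summed over j > k only
        x += gamma * (drive - x) * dt + sigma * sqdt * rng.normal(size=M)
    return int(np.sum(x[:k] > 0))

rng = np.random.default_rng(1)
M, k, L = 64, 8, 20                        # M, k as in the Method; L reduced for speed
J = rng.choice([-1, 0, 1], size=(M, M))    # random initial network
F_bar = np.mean([run_network(J, k, 0.05, rng=rng) for _ in range(L)])
print(F_bar)                               # average fitness over L stochastic runs
```

Because of the noise term, repeated runs of the same network give a distribution of fitness values, which is exactly what $V_{ip}$ and $V_{ip}(i)$ quantify.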
At each generation, there are $N$ individuals with slightly different $J_{ij}$. Among the networks, we select those with higher fitness values. From the selected networks, $J_{ij}$ is "mutated": $J_{ij}$ for randomly selected pairs $i, j$, chosen with a certain fraction, is changed among $\pm 1, 0$. For example, each of the $N_s (< N)$ networks with the highest values of $\bar{F}$ produces $N/N_s$ mutants. (We also used a selection procedure in which the offspring number is proportional to the fitness, with normalization of the total population to $N$; the results presented here were the same.) We repeat this selection-mutation process over generations.
The variance of the fitness or of the gene expression $\mathrm{Sign}(x_i)$ of identical networks over $L$ runs gives $V_{ip}$ or $V_{ip}(i)$, while the variance of their means over different networks $J_{ij}$ gives $V_g$ or $V_g(i)$. We chose $N = L = 200$ and $N_s = N/4$; the conclusions do not change as long as these values are sufficiently large. We used $\beta = 7$, $\gamma = 0.1$, $M = 64$, and $k = 8$, and initially chose $J_{ij}$ randomly with equal probability for $\pm 1, 0$, unless otherwise mentioned.
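The selection-mutation loop and the two variances can be sketched as follows. For brevity, the noisy expression dynamics of eq. (6) are abstracted here into a single stochastic threshold per target gene (a placeholder, not the paper's integration), and population sizes are reduced; the structure of the loop and the definitions of $V_{ip}(i)$ and $V_g(i)$ follow the Method.

```python
import numpy as np

rng = np.random.default_rng(0)
M, k, N, L = 16, 4, 40, 20     # small illustrative sizes (paper: M=64, k=8, N=L=200)
Ns = N // 4                    # number of selected parents, as in the Method
mut_frac = 0.01                # fraction of J entries mutated per offspring

def expression(J, sigma=0.1):
    """Placeholder for one noisy run of eq. (6): returns Sign(x_i) for the k
    target genes, abstracted as a noisy threshold on row sums (illustration only)."""
    drive = np.tanh(J[:k, k:].sum(axis=1) + sigma * rng.normal(size=k))
    return np.sign(drive)

population = [rng.choice([-1, 0, 1], size=(M, M)) for _ in range(N)]
for gen in range(30):
    # average fitness of each network over L noisy runs
    F = [np.mean([np.sum(expression(J) > 0) for _ in range(L)]) for J in population]
    parents = [population[i] for i in np.argsort(F)[-Ns:]]   # keep the fittest Ns
    population = []
    for J in parents:                                        # each produces N/Ns mutants
        for _ in range(N // Ns):
            Jm = J.copy()
            idx = rng.random(Jm.shape) < mut_frac
            Jm[idx] = rng.choice([-1, 0, 1], size=idx.sum())
            population.append(Jm)

# V_ip(i): variance over noisy runs within a network; V_g(i): variance over networks
runs = np.array([[expression(J) for _ in range(L)] for J in population])  # (N, L, k)
V_ip = runs.var(axis=1).mean(axis=0)   # mean within-network variance per gene
V_g = runs.mean(axis=1).var(axis=0)    # variance of the per-network means per gene
print(V_ip, V_g)
```

A faithful implementation would replace `expression` with the full stochastic integration of eq. (6); the variance bookkeeping over runs and networks is unchanged.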
**Additional material**
*Additional file 1: Figure S1 Dependence of the variances $V_{ip}(i)$ and $V_g(i)$ on the noise strength. Figure S2 Correlation between gene expressions. Figure S3 Correlation between errors by noise and mutation over all genes. Figure S4 Relationship between $V_g(i)$ and $V_{ip}(i)$ for evolved GRNs with a larger fraction of target genes, and a smaller fraction of nonzero genes. Figure S5 Relationship between $V_g(i)$ and $V_{ip}(i)$ for the gene expression dynamics whose noise level depends on each gene. Figure S6 Relationship between $V_g(i)$ and $V_{ip}(i)$ for a model with "extrinsic noise."*
**Acknowledgements**
I would like to thank B. Lehner, T. Yomo, C. Furusawa, M. Tachikawa, and S. Ishihara for stimulating discussions. This research was supported by the people of Japan via a Grant-in-Aid for Scientific Research (No. 21120004) from MEXT, Japan.
Received: 2 October 2010 Accepted: 26 January 2011 Published: 26 January 2011
**References**
1. de Visser JA, et al: Evolution and detection of genetic robustness. *Evolution* 2003, 57:1959-1972.
2. Wagner A: Robustness against mutations in genetic networks of yeast. *Nature Genetics* 2002, 24:355-361.
3. Ciliberti S, Martin OC, Wagner A: Robustness can evolve gradually in complex regulatory gene networks with varying topology. *PLOS Comp Biology* 2007, 3:e15.
4. Wagner GP, Booth G, Bagheri-Chaichian H: A population Genetic Theory of Canalization. *Evolution* 1997, 51:329-347.
5. Siegal ML, Bergman A: Waddington's canalization revisited: Developmental stability and evolution. *Proc Nat Acad Sci USA* 2002, 99:10528-10532.
6. Barkai N, Leibler S: Robustness in simple biochemical networks. *Nature* 1997, 387:913-917.
7. Alon U, Surette MG, Barkai N, Leibler S: Robustness in bacterial chemotaxis. *Nature* 1999, 397:168-171.
8. Schmalhausen II: *Factors of Evolution: The Theory of Stabilizing Selection* (University of Chicago Press, Chicago); 1949, reprinted 1986.
9. Waddington CH: *The Strategy of the Genes* (Allen & Unwin, London); 1957.
10. Elowitz MB, Levine AJ, Siggia ED, Swain PS: Stochastic gene expression in a single cell. *Science* 2002, 297:1183-1187.
11. Furusawa C, Suzuki T, Kashiwagi A, Yomo T, Kaneko K: Ubiquity of log-normal distributions in intra-cellular reaction dynamics. *Biophysics* 2005, 1:25-31.
12. Bar-Even A, et al: Noise in protein expression scales with natural protein abundance. *Nature Genetics* 2006, 38:636-643.
13. Kaern M, Elston TC, Blake WJ, Collins JJ: Stochasticity in gene expression: From theories to phenotypes. *Nat Rev Genet* 2005, 6:451-464.
14. Kaneko K: *Life: An Introduction to Complex Systems Biology* (Springer, Heidelberg and New York); 2006.
15. Sato K, Ito Y, Yomo T, Kaneko K: On the relation between fluctuation and response in biological systems. *Proc Nat Acad Sci USA* 2003, 100:14086-14090.
16. Kaneko K, Furusawa C: An evolutionary relationship between genetic variation and phenotypic fluctuation. *J Theo Biol* 2006, 240:78-86.
17. Kaneko K: Evolution of robustness to noise and mutation in gene expression dynamics. *PLoS ONE* 2007, 2:e434.
18. Futuyma DJ: *Evolutionary Biology*. Second edition. (Sinauer Associates Inc., Sunderland); 1986.
19. Hartl DL, Clark AG: *Principles of Population Genetics*. 4 edition. (Sinauer Assoc., Inc., Sunderland); 2007.
20. Fisher RA: *The Genetical Theory of Natural Selection* (Oxford University Press); 1930.
21. Kaneko K: Shaping Robust system through evolution. *Chaos* 2008, 18:026112.
22. Sakata A, Hukushima K, Kaneko K: Funnel landscape and mutational robustness as a result of evolution under thermal noise. *Phys Rev Lett* 2009, 102:148101.
23. Kaneko K: Relationship among Phenotypic Plasticity, Genetic and Epigenetic Fluctuations, Robustness, and Evolvability. *J BioSci* 2009, 34:529-542.
24. Eigen M, Schuster P: *The Hypercycle* (Springer, Heidelberg); 1979.
25. Landry CR, Lemos B, Dickinson WJ, Hartl DL: Genetic Properties Influencing the Evolvability of Gene Expression. *Science* 2007, 317:118.
26. Lehner B: Selection to minimize noise in living systems and its implications for the evolution of gene expression. *Mol Syst Biol* 2008, 4:170.
27. Lehner B: Genes Confer Similar Robustness to Environmental, Stochastic, and Genetic Perturbations in Yeast. *PLoS ONE* 2010, 5:e9035.
28. Stearns SC, Kawecki TJ: The differential genetic and environmental canalization of fitness components in *Drosophila melanogaster*. *J Evol Biol* 1995, 8:539-557.
29. Kirschner MW, Gerhart JC: *The Plausibility of Life* (Yale University Press); 2005.
30. Ancel LW, Fontana W: Plasticity, evolvability, and modularity in RNA. *J Exp Zool* 2002, 288:242-283.
31. West-Eberhard MJ: *Developmental Plasticity and Evolution* (Oxford University Press); 2003.
32. Mjolsness E, Sharp DH, Reinitz J: A connectionist model of development. *J Theor Biol* 1991, 152:429-453.
33. Salazar-Ciudad I, Garcia-Fernandez J, Sole RV: Gene networks capable of pattern formation: from induction to reaction-diffusion. *J Theor Biol* 2000, 205:587-605.
doi:10.1186/1471-2148-11-27
Cite this article as: Kaneko: Proportionality between variances in gene expression induced by noise and mutation: consequence of evolutionary robustness. *BMC Evolutionary Biology* 2011 11:27. |
Detecting and predicting forest degradation: A comparison of ground surveys and remote sensing in Tanzanian forests
Antje Ahrends\textsuperscript{1} \textsuperscript{✉} | Mark T. Bulling\textsuperscript{2} | Philip J. Platts\textsuperscript{3} \textsuperscript{✉} | Ruth Swetnam\textsuperscript{4} |
Casey Ryan\textsuperscript{5} \textsuperscript{✉} | Nike Doggart\textsuperscript{6} | Peter M. Hollingsworth\textsuperscript{1} | Robert Marchant\textsuperscript{3} |
Andrew Balmford\textsuperscript{7} | David J. Harris\textsuperscript{1} \textsuperscript{✉} | Nicole Gross-Camp\textsuperscript{8} | Peter Sumbi\textsuperscript{9}† |
Pantaleo Munishi\textsuperscript{10} | Seif Madoffe\textsuperscript{10}† | Boniface Mhoro\textsuperscript{11}† | Charles Leonard\textsuperscript{12} |
Claire Bracebridge\textsuperscript{13,14} | Kathryn Doody\textsuperscript{14,15} | Victoria Wilkins\textsuperscript{14} | Nisha Owen\textsuperscript{14,16} |
Andrew R. Marshall\textsuperscript{3,17,18} | Marije Schaafsma\textsuperscript{19} | Kerstin Pfliegener\textsuperscript{20} | Trevor Jones\textsuperscript{21} |
James Robinson\textsuperscript{1,5} | Elmer Topp-Jørgensen\textsuperscript{14,22} | Henry Brink\textsuperscript{23} | Neil D. Burgess\textsuperscript{24}
\textsuperscript{1}Royal Botanic Garden Edinburgh, Edinburgh, UK
\textsuperscript{2}Environmental Sustainability Research Centre, University of Derby, Derby, UK
\textsuperscript{3}Department of Environment and Geography, University of York, York, UK
\textsuperscript{4}Geography, Staffordshire University, Stoke-on-Trent, UK
\textsuperscript{5}School of GeoSciences, University of Edinburgh, Edinburgh, UK
\textsuperscript{6}School of Earth and Environment, University of Leeds, Leeds, UK
\textsuperscript{7}Department of Zoology, University of Cambridge, Cambridge, UK
\textsuperscript{8}University of East Anglia, Norwich, UK
\textsuperscript{9}TanBE Ltd, Dar es Salaam, Tanzania
\textsuperscript{10}Sokoine University of Agriculture, Morogoro, Tanzania
\textsuperscript{11}Herbarium, Botany Department, University of Dar es Salaam, Dar es Salaam, Tanzania
\textsuperscript{12}Tanzania Forest Conservation Group, Dar es Salaam, Tanzania
\textsuperscript{13}North Carolina Zoo, Asheboro, NC, USA
\textsuperscript{14}The Society for Environmental Exploration (Frontier), London, UK
\textsuperscript{15}Frankfurt Zoological Society, Frankfurt am Main, Germany
\textsuperscript{16}On the EDGE Conservation, Chelsea, UK
\textsuperscript{17}Tropical Forests and People Research Centre, University of the Sunshine Coast, Sippy Downs, Qld, Australia
\textsuperscript{18}Flamingo Land Ltd., North Yorkshire, UK
\textsuperscript{19}Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
\textsuperscript{20}The Nature Conservancy, Berlin, Germany
\textsuperscript{21}Southern Tanzania Elephant Program, Iringa, Tanzania
\textsuperscript{22}Aarhus University, Aarhus C, Denmark
\textsuperscript{23}Arid Zone Research Institute, Alice Springs, NT, Australia
\textsuperscript{24}UNEP-WCMC, Cambridge, UK
Correspondence
Antje Ahrends, Royal Botanic Garden
Edinburgh, 20A Inverleith Row, EH3 5LR
Edinburgh, UK.
Email: email@example.com
Societal Impact Statement
Large areas of tropical forest are degraded. While global tree cover is being mapped with increasing accuracy from space, much less is known about the quality of that tree
cover. Here we present a field protocol for rapid assessments of forest condition. Using extensive field data from Tanzania, we show that a focus on remotely-sensed deforestation would not detect significant reductions in forest quality. Radar-based remote sensing of degradation had good agreement with the ground data, but the ground surveys provided more insights into the nature and drivers of degradation. We recommend the combined use of rapid field assessments and remote sensing to provide an early warning, and to allow timely and appropriately targeted conservation and policy responses.
Summary
- Tropical forest degradation is widely recognised as a driver of biodiversity loss and a major source of carbon emissions. However, in contrast to deforestation, more gradual changes from degradation are challenging to detect, quantify and monitor. Here, we present a field protocol for rapid, area-standardised quantifications of forest condition, which can also be implemented by non-specialists. Using the example of threatened high-biodiversity forests in Tanzania, we analyse and predict degradation based on this method. We also compare the field data to optical and radar remote-sensing datasets, thereby conducting a large-scale, independent test of the ability of these products to map degradation in East Africa from space.
- Our field data consist of 551 ‘degradation’ transects collected between 1996 and 2010, covering >600 ha across 86 forests in the Eastern Arc Mountains and coastal forests.
- Degradation was widespread, with over one-third of the study forests—mostly protected areas—having more than 10% of their trees cut. Commonly used optical remote-sensing maps of complete tree cover loss only detected severe impacts (>25% of trees cut), that is, a focus on remotely-sensed deforestation would have significantly underestimated carbon emissions and declines in forest quality. Radar-based maps detected even low impacts (<5% of trees cut) in ~90% of cases. The field data additionally differentiated types and drivers of harvesting, with spatial patterns suggesting that logging and charcoal production were mainly driven by demand from major cities.
- Rapid degradation surveys and radar remote sensing can provide an early warning and guide appropriate conservation and policy responses. This is particularly important in areas where forest degradation is more widespread than deforestation, such as in eastern and southern Africa.
KEYWORDS
biodiversity conservation, carbon emissions, community-based forest management, East Africa, global forest watch, human disturbance, synthetic aperture radar, village land forest reserves
1 | INTRODUCTION
Large areas of tropical forest are degraded through human impacts such as overexploitation, fragmentation, pollution, exotic species invasion and fire (Sloan & Sayer, 2015). While there is no globally agreed definition for forest degradation, it can be broadly defined as changes to a forest stand resulting in the long-term reduction of particular attributes and functions such as biodiversity, and the potential supply of goods and services (FAO, 2011; Ghazoul et al., 2015). Deforestation—the complete replacement of forest by another land use—is easier to define, detect and monitor, and consequently has been the focus of global policy development (Sasaki & Putz, 2009). As a result, the impacts of forest degradation on biodiversity and carbon balances are comparatively poorly understood,
but they are likely to be substantial (Alroy, 2017). For instance, recent studies have shown that carbon emissions from forest degradation may have been underestimated and could account for as much as 25%–69% of the combined gross carbon losses due to deforestation and degradation in the tropics (Baccini et al., 2017; Berenguer et al., 2014; Pearson et al., 2017).
Significant progress has been made with measuring deforestation and forest degradation from space (Woodcock et al., 2020). Changes in tree cover can now be monitored at high spatial and temporal resolution, providing policy makers and conservation planners with an unprecedented wealth of data to guide interventions (Blackman, 2013; DeVries et al., 2015; Fuller, 2006). The technology is also increasingly available to non-specialists (Asner, 2009). While there are many easily accessible datasets to assist national and global monitoring of forest cover (e.g. Hansen et al., 2013; Miettinen et al., 2011; Sexton et al., 2013), remotely-sensed forest degradation data are sparser and more challenging to obtain. At a country level, quantitative assessments of degradation are often lacking (Romijn et al., 2015). Radar data hold particular promise as they overcome the challenges presented by cloud cover and variable phenology, and they correlate with changes in biomass (McNicol et al., 2018; Mitchell et al., 2017; Ryan et al., 2012). However, using such data sources for detecting and quantifying degradation from space remains limited by the extent to which degradation is associated with a reduction in canopy cover and/or biomass (Ryan et al., 2012). Airborne radar and light detection and ranging (LiDAR; Ene et al. 2017), as well as the use of unmanned aerial vehicles (Baena et al., 2018; Ota et al., 2019) can provide higher resolution data, but these technologies require expertise, lack global coverage and historical archives, and can be prohibitively expensive. Ground-based sensing methods such as hemispherical photographs (Fournier & Hall, 2017) and terrestrial LiDAR (Decuyper et al., 2018) used to quantify stand structural attributes also hold promise, but again, their implementation requires expertise.
At the other end of the spectrum, there are detailed field assessments (Thompson et al., 2013), such as permanent sample plots for assessing changes in forest vegetation. By capturing data on species, stem diameter, height, crown cover and various biotic and abiotic parameters, such plots are an extremely important tool in biodiversity and environmental research (Baker et al., 2017), and are used to locally characterise biodiversity, growing stock, biomass, carbon, ecosystem function and the impacts of degradation. However, permanent plots are also labour intensive and time consuming to set up, and surveying them requires expertise. Consequently, few countries conduct exhaustive plot-based inventories as part of their national forest reporting, and even fewer monitor them consistently (FAO, 2011). In addition, while permanent plots are essential for understanding the impacts of degradation, they are often not the most effective method for understanding the extent and patterns of degradation itself. Unless they are systematically placed to cover an entire area at high density, they rarely capture the breadth of degrading activities that occur. On the contrary, the presence of researchers and permanent tags on trees may deter illegal activities. Plots are also often placed in a stratified-random or subjective fashion, that is, purposefully located in pre-selected areas viewed as representative of a given vegetation type and/or level of disturbance. In addition, as degradation is generally not the main focus, it is often not quantified in a robustly comparable and systematic way.
Consequently, while countries increasingly monitor wall-to-wall forest cover change using remote sensing, and they also have some inventory data, they still lack representative quantitative data on forest degradation (Romijn et al., 2015). Difficulties with monitoring forest degradation and associated gaps in policy interventions create opportunities for unregulated and/or illegal logging and corruption. There can be a tendency to shift the blame for forest loss among actors, whereby existing prejudice against already marginalised groups such as farmers practising shifting cultivation or charcoal producers may be reinforced (Hosonuma et al., 2012; Ryan et al., 2014). Knowledge of which forests are degraded, where degradation is likely to spread to next, and what the main drivers are is vital for formulating appropriately targeted policy interventions and management.
Here we present a framework protocol for rapid area-standardised assessments of forest condition. The protocol sits in the middle of the spectrum between detailed ground surveys and remote sensing, and its implementation does not require professional training. The protocol assesses human use and disturbance, which depending on their levels and the forest type may lead to a deterioration of stocks and services, and thus degradation.
Using the example of threatened and highly biodiverse forests in Tanzania, we investigate:
1. how ground data collected using this protocol compare to remotely-sensed datasets; specifically, radar-based maps of biomass change (McNicol et al., 2018) and commonly used maps of complete tree cover loss (which underpin ‘Global Forest Watch’; Hansen et al., 2013);
2. the value of ground data for understanding and predicting degradation in combination with spatially explicit models (e.g. whether data collected using this approach in 1996–2010 could have predicted human impacts in 2020).
The overall aim is to assess whether these rapid assessments are a useful addition to remote sensing and detailed vegetation assessments in (permanent) plots in informing conservation policy and practice.
## 2 METHODS
### 2.1 Protocol overview
The method presented here rapidly quantifies standing woody resources and resource extraction in forests with a view to gauging forest condition (Frontier Tanzania, 2001). While the protocol is flexible and can be adjusted to the target vegetation and area, methodological details naturally need to be standardised to facilitate comparisons. The assessment is done along transects, which typically have a width of 10 m. Their length is variable and can be adjusted to the target vegetation type and forest size. The transects are located in either a random, stratified random or systematic fashion, and should cover the forest edge as well as the interior. Within each transect, all trees, as well as stumps and other signs of human use (such as charcoal production or clearance for agriculture) are recorded. The minimum assessment threshold is typically 5 cm diameter at breast height (dbh; measured 1.3 m above ground), but this can be adjusted to the type of vegetation being surveyed. In its simplest form, the method focusses on assessing the number of cut trees versus those that are (left) standing or died naturally. Size categories can be added to distinguish cutting for different end uses. Depending on the aims of the sampling, recording can consist of simple counts within categories, or include more detailed information such as diameter (over bark), height, species identification and voucher collection. Identifying at least the commonly used timber species will indicate resource preference and hint at the likely nature of the market behind that—for example, whether trees are cut for local use or international export (Furukawa et al., 2011) (noting that timber trade names often refer to collectives of species and/or an entire genus, i.e. overharvesting of individual species can be masked when using trade names only). However, the time spent collecting, measuring and identifying trades off against the primary aim of the method—to rapidly cover many areas, often with the help of non-specialists, in order to obtain reasonably reliable estimates of degradation and to support the identification of areas in need of conservation interventions. A detailed protocol and a recommended set of core measurements are provided as part of Table S4.
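In its simplest form, the tallying described above reduces to counting stems by status and size class along each transect and converting the counts to percentages and densities. A minimal sketch of that bookkeeping (the record field names and the example thresholds of 5 and 15 cm dbh are illustrative, not prescribed by the protocol):

```python
from collections import Counter

def size_class(dbh_cm):
    # Example size classes from the text: 'pole' = 5-15 cm dbh, 'tree' = >15 cm dbh.
    return "pole" if dbh_cm <= 15 else "tree"

def summarise_transect(records, length_m, width_m=10):
    """Tally stems/stumps by (status, size class) and express cutting as a
    percentage and overall stem density per hectare."""
    area_ha = (length_m * width_m) / 10_000
    tally = Counter(
        (r["status"], size_class(r["dbh_cm"]))
        for r in records
        if r["dbh_cm"] >= 5  # minimum assessment threshold
    )
    total = sum(tally.values())
    cut = sum(n for (status, _), n in tally.items() if status == "cut")
    return {
        "tally": dict(tally),
        "percent_cut": 100 * cut / total if total else 0.0,
        "stems_per_ha": total / area_ha,
    }

# Example: a 100 m transect with three standing stems and one recent stump.
demo = [
    {"status": "standing", "dbh_cm": 8},
    {"status": "standing", "dbh_cm": 22},
    {"status": "standing", "dbh_cm": 17},
    {"status": "cut", "dbh_cm": 20},
]
print(summarise_transect(demo, length_m=100))
```

The same counts can later be aggregated across transects to forest-level means, as done in the example application below.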
### 2.2 Example application
#### 2.2.1 Study area
The study area (see also Methods S1) spans the Eastern Arc Mountains and part of the coastal forests, both of which are of global importance for biodiversity conservation due to high levels of localised endemism (Mittermeier et al., 2011; Olson & Dinerstein, 2002; Stattersfield, 1998). These forest systems also provide critical ecosystem services to local communities and the nation as a whole (Ashagre et al., 2018; Fisher et al., 2011; Schaafsma et al., 2014; Swetnam et al., 2011). In southern Africa (here defined as roughly −1° to −34° latitude), the livelihoods of an estimated 150 million people are thought to be dependent on the goods and services provided by woodlands and forests (Ryan et al., 2016). Rapid urbanisation and population growth mean that demand for wood products is substantial and increasing, with fuel wood being the main source of energy for over 90% of the population (Bailis et al., 2005). The Tanzanian forestry sector—both formal and informal—is an important source of income, GDP and employment (Doggart et al., 2020; United Republic of Tanzania, 2001). While the trade in wood products is often small-scale and livelihood driven (Cavanagh et al., 2015), wood is also exported to generate foreign revenue (Lukumbuzya & Sianga, 2017). Exact figures are difficult to obtain (Lukumbuzya & Sianga, 2017), but overharvesting is likely to be widespread (Milledge et al., 2007), even though Tanzania has a comprehensive legal framework for the conservation and management of forest resources and the forests studied here mostly occur in protected areas. An ability to monitor and to identify drivers and patterns of forest loss and degradation is vital to the conservation of these forest systems, and to ensure the long-term provision of forest resources for sustainable livelihoods.
#### 2.2.2 Field data
The data used for this example application were collected between 1996 and 2010 (median 2004–2005) by a wide range of institutions and individual collectors (see Acknowledgements). In total, there were 551 transects of 10 m width with a combined length of 609 km from 86 forests. The transects were placed in either a systematic or stratified random fashion to sample both easily accessible and remote areas (Figure 1a). All transects recorded standing, naturally dead and cut trees in two size categories: ‘poles’ (slender stems frequently used in house construction; 5 to 15 cm dbh), and ‘trees’ (>15 cm dbh). In total 430,116 stems and stumps were recorded. Stumps were classed into two age categories: recent (generally cut ≤6 months prior to observation) or old (>6 months), and records were made of all other types of extractive activities such as the presence of charcoal kilns. A small subset of transects (n = 45 covering 18.75 ha in the coastal forests; Ahrends et al., 2010) made more detailed assessments, including exact dbh measurements and species identification. For spatially explicit analyses (comparison with remotely-sensed datasets and modelling) we excluded 11 transects where the recorded length and/or locality did not match the length or locality inferred from the transects’ GPS coordinates.
#### 2.2.3 Comparison with remotely-sensed datasets
We compared the ground data against two remotely-sensed datasets:
1. widely used maps for tree cover loss produced by the initiative ‘Global Forest Watch’ (Hansen et al., 2013), hereafter GFW, which are based on Landsat data and assess complete canopy loss at an approximate resolution of 28 m on the ground;
2. a radar-based dataset (McNicol et al., 2018) (hereafter MN18), which uses a probabilistic approach to map deforestation and degradation in southern Africa between 2007 and 2010 based on L-Band radar from ALOS-PALSAR; MN18 aggregated the data from a resolution of 25 m to 100 m. We focussed on cells with a probability ≥0.5 of degradation or deforestation.
For both comparisons we looked at buffers of up to 100 m around transects. The ground data were restricted to the relevant period of satellite data acquisition (2000–2005 for comparisons to GFW, and
2007–2010 for comparisons to GFW and MN18). Only ‘recent’ stumps (i.e. stumps no older than 6 months) were included. Degradation counted as ‘detected’ if the remotely-sensed data reported a pixel as degraded or deforested anywhere within that buffer. Here we focus on true positives only. Due to widespread harvesting, it was not possible to assess the rate of false positives. Specificity (the true negative rate) however has equally important implications for the practical application of these datasets and should ideally be assessed in future studies.
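The detection rule above is simple: a transect counts as ‘detected’ if any remotely-sensed pixel flagged as degraded or deforested lies within the chosen buffer distance of it. A sketch of that rule, assuming transects and flagged pixel centres are available as projected (metre-based) coordinates and approximating the transect by its sampled vertices:

```python
import math

def detected(transect_points, flagged_pixel_centres, buffer_m=100):
    """True if any flagged pixel centre lies within `buffer_m` of any
    point along the transect (vertices used as an approximation)."""
    for px, py in flagged_pixel_centres:
        for tx, ty in transect_points:
            if math.hypot(px - tx, py - ty) <= buffer_m:
                return True
    return False

# A 200 m transect sampled every 50 m, and two flagged pixel centres
# (the first ~82 m from the transect, the second far away):
transect = [(x, 0.0) for x in range(0, 201, 50)]
pixels = [(120.0, 80.0), (500.0, 500.0)]
print(detected(transect, pixels, buffer_m=100))  # → True
print(detected(transect, pixels, buffer_m=50))   # → False
```

In practice this comparison would be done with GIS buffering of the full transect geometry; the sketch only illustrates why detection rates shrink as the buffer narrows from 100 m to 28 m (Tables 1 and 2).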
#### 2.2.4 Modelling and predicting degradation
We used a spatially explicit modelling approach to investigate which factors were most influential in explaining the spatial patterns of degradation, and whether the spread was predictable. Models were developed using Boosted Regression Trees—an ensemble method, which combines regression trees and boosting, and fits multiple simple regression trees in a forward iterative fashion. The algorithm is able to fit complex non-linear patterns and interactions, and handles different types of predictor variables (Elith et al., 2008). We focussed on three dependent variables: (a) density of charcoal kilns, (b) percentage of poles (stems 5–15 cm dbh) cut and (c) percentage of trees (>15 cm dbh) cut. A transect constituted an individual data point. For modelling the percentage of trees cut we discarded transects with an overall tree density <50 ha\(^{-1}\) and no reported logging (\(n = 25\)), assuming that in these areas there were hardly any trees to be cut in the first place. We considered 15 candidate predictors representative of physical accessibility, likely demand, availability of resources, forest management type and tenure (Tables S1 and S2). For each dependent variable we tested eight models with different (pre-selected) combinations of predictors (Table S3), including a correction for spatial autocorrelation. The final models were selected based on model performance when validated against test data (cross-validation correlations on up to 25% of randomly set aside test data) and maximum parsimony in terms of the number of predictors used (Table S4). Further details on model settings,
parameterisation and performance are summarised in Tables S3–S5, and software notes are provided in Methods S2. In order to test the predictive ability of the models we extrapolated them at 1 km resolution for all ~12,000 km$^2$ of forest reserves in the study area, using predictor values for 2020 (from scenarios developed in 2010; Swetnam et al. 2011). These scenarios (broadly correctly) predicted population to increase at a rate of 3% annually, but they are conservative in that they did not make predictions around infrastructure expansion. The predictions were then compared to actual tree cover losses recorded by GFW and local reports on degradation.
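The core of the workflow above (fit boosted regression trees, then score predictive performance as the correlation between cross-validated predictions and observed values) can be sketched as follows. This uses scikit-learn's gradient boosting as a stand-in for the Boosted Regression Tree implementation used in the study, and synthetic data in place of the transect predictors; all variable names and parameter values are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n = 500

# Synthetic stand-ins for accessibility/demand predictors (Table S1 analogues).
X = np.column_stack([
    rng.uniform(0, 200, n),   # distance to major city (km)
    rng.uniform(0, 50, n),    # distance to road (km)
    rng.uniform(0, 500, n),   # local population density
])
# Degradation declining non-linearly with distance, plus noise.
y = 60 * np.exp(-X[:, 0] / 50) + 10 * np.exp(-X[:, 1] / 10) + rng.normal(0, 3, n)

model = GradientBoostingRegressor(
    n_estimators=500, learning_rate=0.01, max_depth=3, subsample=0.75
)
# 10-fold cross-validated predictions, scored as a Pearson correlation
# between predictions and held-out observations.
pred = cross_val_predict(model, X, y, cv=10)
r = np.corrcoef(y, pred)[0, 1]
print(f"10-fold CV correlation: {r:.2f}")
```

Fitting trees in a forward, stage-wise fashion with a small learning rate and subsampling is what lets the ensemble capture the kind of non-linear, interacting accessibility effects described in the text.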
## 3 RESULTS
### 3.1 Observed rates of tree cutting
Tree cutting (here ≥5 cm dbh; see Notes S1 for trees >15 cm dbh) occurred in all but one forest between 1996 and 2010. Over one third of forests surveyed during this time had at least 10% of trees ≥5 cm dbh cut (mean among transects). Cutting levels were highly variable across forests, ranging from 0% to 81% with a mean of 10% (±15% SD) and a median of 5% (±6% MAD [median absolute deviation]). The availability of standing trees was greatly reduced in some forests, being as low as <100 stems ≥5 cm dbh per ha in some of the most degraded forests (as opposed to >1,000 in some of the least degraded forests, and a mean stem density of 849 ± 89 SE). Losses were particularly severe in the lowland coastal forests (mean across forests 20% ±28% SD; median 8% ±8% MAD), which are in direct vicinity of Dar es Salaam, a major centre of demand. Cutting levels for larger trees were similar to those of trees ≥5 cm dbh (Notes S1).
While the above statistics represent tree cutting over several years (the lifetime of a stump), the density of recent stumps can be seen as indicative of offtake rates at a given time (with a recent stump generally being 6 months or maximally 1 year old). On average (among forests) there were 3 (±0.74 SE) recent stumps >15 cm dbh per ha between 1996 and 2010. If logging rates were thus three to six trees per ha and year, then some 2.2–4.3 million trees >15 cm dbh would have been felled annually across the forest reserves in the study area (here restricted to ~7,200 km$^2$ with tree cover ≥50% according to GFW). Using a very simple above-ground tree biomass function (Chave et al., 2001; FAO, 2011) (which does not assume any knowledge of species or stand-level wood densities) this would be equivalent to a gross carbon loss of 0.41–0.82 TgC per year if the cut trees were 20 cm dbh. However, establishing above-ground carbon is extremely challenging without detailed dbh measurements and at least approximate wood density estimates. In addition, recent tree cutting was highly spatially and temporally clustered. While our data thus did not allow for a robust quantification of annual carbon losses between 1996 and 2010, they did however indicate that losses were substantial. In addition, there was evidence for an increase in cutting rates over the 14 years covered by the data—from less than one tree per ha and year (approximately) pre 2000 (0.4 ± 0.36 SE), to around three trees per ha and year between 2000 and 2005 (3.3 ± 1 SE), and around four trees per ha and year post 2005 (4 ± 1.2 SE). Out of 16 forests that have been visited twice (in ~2004 and ~2010) 13 had a greater density (and 14 a larger percentage) of recently cut trees in 2010 (Figure S1).
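The extrapolation above is straightforward arithmetic and can be checked directly. The per-tree carbon figure in the final line is back-calculated from the reported totals as a consistency check; it is not the Chave et al. (2001) biomass function itself:

```python
# Reported: 3-6 recently cut trees (>15 cm dbh) per ha per year, across
# ~7,200 km^2 of forest reserves with >=50% tree cover (GFW).
AREA_HA = 7_200 * 100          # 1 km^2 = 100 ha
low, high = 3, 6               # trees per ha per year

trees_low = low * AREA_HA      # 2.16 million trees per year
trees_high = high * AREA_HA    # 4.32 million trees per year
print(f"Trees felled per year: {trees_low/1e6:.2f}-{trees_high/1e6:.2f} million")

# The reported gross loss of 0.41-0.82 TgC per year then implies roughly
# 0.19 MgC per 20 cm dbh tree (0.41 TgC = 410,000 MgC):
print(f"Implied carbon per tree: {0.41e6 / trees_low:.2f} MgC")
```

The same ratio (0.82 TgC over 4.32 million trees) gives the identical per-tree figure, so the reported range is internally consistent.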
A subset of transects ($n = 45$ covering 18.75 ha in the coastal forests; Ahrends et al., 2010) with more detailed assessments allowed for the computation of above-ground tree biomass based on exact dbh and species or genus level wood specific gravity (extracted from Chave et al. 2009). Following equation 7 from Chave et al. (2014) and assuming a carbon fraction of dry matter of 0.5 we estimate that the area lost 8.9 MgC per ha due to cutting (over the lifetime of a stump), and 1.1 MgC in the year of the survey (2004/2005). Reducing the data to the type of information that would be available with the simpler counting methodology (and assuming that poles measure 10 cm dbh and trees 20 cm dbh) we calculate a loss of 8.1 MgC per ha using Chave et al. (2001). Figures for standing carbon are 28.4 and 35.3 MgC per ha, respectively. Thus, (a) the area lost a significant amount of standing carbon due to cutting (24% over the lifetime of a stump, and 4% in the survey year, which was characterised by a logging boom; Milledge et al., 2007); and (b) while the simple rapid counting methodology can provide rough carbon estimates, more detailed dbh measurements and the inclusion of at least stand-level averages for wood-specific gravity will considerably enhance the accuracy of these estimates.
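For reference, the allometric step used for the detailed subset can be sketched as below. Equation 7 of Chave et al. (2014) estimates above-ground biomass (kg) from wood specific gravity ρ (g cm⁻³), diameter D (cm) and height H (m), and halving converts dry biomass to carbon. The tree in the example is illustrative, not taken from the dataset:

```python
def agb_kg(rho, dbh_cm, height_m):
    """Above-ground biomass (kg) following Chave et al. (2014), eq. 7:
    AGB = 0.0673 * (rho * D^2 * H)^0.976."""
    return 0.0673 * (rho * dbh_cm**2 * height_m) ** 0.976

CARBON_FRACTION = 0.5  # carbon fraction of dry matter assumed in the study

# Illustrative tree: rho = 0.6 g/cm^3, 20 cm dbh, 15 m tall.
biomass = agb_kg(0.6, 20, 15)
carbon_mgc = biomass * CARBON_FRACTION / 1000  # kg -> Mg
print(f"AGB: {biomass:.0f} kg, carbon: {carbon_mgc:.3f} MgC")
```

Because biomass scales roughly with ρD²H, errors in assumed diameters and wood densities propagate strongly, which is why the simple count-based method (fixed 10 and 20 cm dbh assumptions) yields only rough carbon estimates.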
### 3.2 Comparison with remotely-sensed datasets
There was broad agreement between the spatial patterns of tree (cover) losses recorded in the field and by GFW. However, as one would expect, more subtle degradation was not picked up by this dataset focusing on complete tree cover loss in ~28 × 28 m cells. GFW reported tree cover losses for only 20% of the transects that recorded new tree cutting between 2000 and 2005. The larger the proportion of cut trees the more often GFW detected loss (Table 1). A very similar picture emerged when looking at a lower dbh threshold of ≥5 cm dbh (Table S6).
To illustrate this with specific examples, Figure 2 shows a comparison of ground data and remotely-sensed data for three coastal reserves visited in 2004. While GFW detected some canopy losses between 2000 and 2005 (affecting 2% of the area with ≥50% canopy cover in 2000), degradation on the ground was already severe (with a mean of $11 \pm 7$ SD recently cut trees ≥5 cm dbh, and $10 \pm 7$ SD charcoal pits per ha). GFW record large losses from these areas in the following years (26% of the area with ≥50% canopy cover in 2000), confirming the early warning signals provided by the ground data. Indeed, a field survey in 2016 estimated that, since 2004, the density of trees in these areas had halved, with timber tree densities having dropped threefold, and above-ground carbon being reduced by 40% (Ahrends et al., 2020). In one of the reserves (Vikindu) large areas of forest had entirely disappeared by 2016 (Figure 2i). The GFW data did not reflect Vikindu's severe state of degradation in 2004 (when much of the natural vegetation had been replaced by *Eucalyptus*, and widespread logging and charcoal production was
occurring), nor the disappearance of large parts of the remaining forest by 2016. Less than 1% tree cover loss was detected by GFW between 2000 and 2005, and ‘only’ another 15% loss between 2006 and 2018 (1% and 18% of tree cover ≥50%, respectively).
### Table 1
Number of transects for which GFW recorded tree cover loss (≥1 pixel, 2000–2005) within three buffer distances around transects that recorded recent cutting of trees >15 cm dbh

| Trees >15 cm dbh recently cut (2000–2005) | N transects | 100-m buffer | 50-m buffer | 28-m buffer |
|---|---|---|---|---|
| >0% | 88 | 31 (35%) | 23 (26%) | 18 (20%) |
| ≥1% | 55 | 20 (36%) | 15 (27%) | 11 (20%) |
| ≥5% | 18 | 12 (67%) | 9 (50%) | 5 (28%) |
| ≥10% | 9 | 7 (78%) | 5 (56%) | 2 (22%) |
| ≥25% | 2 | 2 (100%) | 1 (50%) | 1 (50%) |
| ≥50% | 1 | 1 (100%) | 1 (100%) | 1 (100%) |

### Figure 2
Comparison of ground data collected in 2004 and maps generated by Hansen et al. (2013; GFW) for three coastal reserves: Pugu (a–c), Ruvu South (d–g), and Vikindu (h–l). Their location is shown in the overview map (m). Left panels (a), (d) and (h) show the location of transects (colours reflect rates of new cutting). The dark green background is tree cover ≥50% in 2000 reported by GFW. Black lines are reserve outlines. Purple areas have experienced tree cover loss between 2000 and 2005 according to GFW. Much of the degradation recorded on the ground (e.g. see pictures b, e, f, i, j taken in 2004) is not reflected in the remotely-sensed deforestation maps. The GFW maps register larger tree cover losses in subsequent years (2006–2018; right panels (c), (g) and (k)), confirming the early warning signal set by the ground data (e.g. l).

The radar-based maps, on the other hand, did detect subtle changes in forest condition. MN18 classed at least one pixel as either degraded or deforested in 81% of transects that recorded losses between 2007 and 2010, whereas GFW recorded losses for less than a third of these transects (Table 2 and Table S7). As above, the larger the percentage of cut trees, the more often losses were detected from space. The field data did not allow for a robust quantification of specificity (the true negative rate) of either dataset; there were only three transects from 2007–2010 that recorded no losses at all (recent and old), and both GFW and MN18 recorded losses for one of these transects. The losses may well have occurred after the ground data were collected (mostly 2009), and/or may not have taken the form of tree cutting.
Overall, MN18 and GFW recorded similar amounts of deforestation (187 and 198 km$^2$, respectively) between 2007 and 2010 (data aggregated to 100-m resolution, and masked to 9,565 km$^2$ in forest reserves for which there were radar data). Aggregated to the scale of individual reserves ($n = 143$), the two datasets provided moderately correlated estimates of percentages of area deforested (Pearson’s $R = 0.51$). Assessing both deforestation and degradation, MN18 reported an additional 727 km$^2$ of degradation. While some reserves experienced both deforestation and degradation, the degradation data did not correlate with the deforestation data, and instead highlighted a different set of reserves as particularly impacted.
### 3.3 Modelled predictions of resource harvesting
Forest resource extraction increased steeply with accessibility and proximity to centres of demand (Figures S2–S4). Particularly in the case of charcoal production, and to some extent in the case of tree cutting, models that only considered local factors such as population density and management type performed less well than models that included predictors representative of city distance and wider population pressure (with a correlation [R] between predictions and test data under 10-fold cross validation of 0.57 as opposed to 0.75 in the case of charcoal burning, and 0.62 versus 0.68 in the case of tree cutting; Table S4). Protected area management explained some variation (Tables S4 and S5), with harvesting being highest in unreserved areas. However, it is important to note that the reserve categories conflate a range of factors, for example, all productive reserves analysed here were situated at Tanzania’s easily accessible coast. In addition, sample sizes were unequal (e.g. there were over 400 transects for 54 government forest reserves, and only 27 transects for 13 reserves on village land). Management on its own explained comparatively little variation (with cross-validation correlations of 0.39–0.56), which will in part be due to the data inadequacy mentioned above, and in part due to the overriding influence of demand and accessibility. For more details see Figure S5.
The relative importance of predictors differed for the different types of disturbances. Spatial patterns in tree cutting were almost entirely explained by urban population pressure (a distance decay function of population density; Table S1), with additional variation accounted for by distances to Dar es Salaam, roads, major cities, and steepness of terrain. Patterns in charcoal production were also mainly related to distance to Dar es Salaam and population pressure. Pole cutting, on the other hand, was best explained by a multitude of factors, including management, distances to Dar es Salaam, roads and cities, and local population density (Table S5). In interpreting the relative importance of predictors, it is important to note covariation and a degree of interchangeability between them (Table S2). For instance, dropping population pressure from the full model only had a moderate effect on model performance as long as population size and city distance were still present. However, overall there was a notable difference between tree cutting and charcoal production on the one hand (almost entirely explained by variables related to accessibility from urban centres), and pole cutting on the other hand, where local population density and management played a greater role in explaining the variation.
All final models performed reasonably well, achieving 10-fold cross validation correlations between 0.68 and 0.78 (Table S4). When setting aside 20% of the reserves as test data, it was generally possible to predict the top three most degraded forests from the rest of the data.
### Table 2
Number of transects for which GFW (tree cover loss) and MN18 (deforestation/degradation) recorded changes (≥1 pixel, 2007–2010) around transects that recorded recent cutting of trees >15 cm dbh

| Trees >15 cm dbh recently cut (2007–2010) | N transects (GFW) | Tree cover loss (GFW) | N transects (MN18) | Deforestation (MN18) | Degradation (MN18) | Deforestation or degradation (MN18) |
|---|---|---|---|---|---|---|
| >0% | 52 | 15 (29%) | 42 | 6 (14%) | 33 (79%) | 34 (81%) |
| ≥1% | 30 | 7 (23%) | 23 | 4 (17%) | 21 (91%) | 21 (91%) |
| ≥5% | 6 | 1 (17%) | 3 | 1 (33%) | 3 (100%) | 3 (100%) |
| ≥10% | 3 | 1 (33%) | 1 | 1 (100%) | 1 (100%) | 1 (100%) |
| ≥25% | 1 | 1 (100%) | 0 | NA | NA | NA |

In order to broadly investigate whether the model for tree harvesting (>15 cm dbh) was able to indicate areas under future threat, we extrapolated the model to ~2020 and compared the predictions to tree cover losses recorded by GFW between 2000 and 2018 (Figure 3) and local reports (see below). There was general agreement between the areas predicted to face high levels of cutting by ~2020 and tree cover loss detected by GFW (Figure 3). Obvious differences arose in areas managed as rotational plantations, where GFW detected large losses while the model predicted low impacts (Figure 3a). For instance, Sao Hill southwest of Iringa has lost substantial tree cover due to plantation rotation, but according to local reports the non-plantation natural forest is not impacted by degradation (BirdLife International, 2013). In several other areas the model predicted high levels of tree cutting and GFW did not report major losses; here the modelled predictions were generally confirmed by local reports suggesting that degradation has occurred, but may not (yet) have manifested as complete tree cover loss at the Landsat pixel scale. For example, Chome Nature Forest Reserve and Kwizu and Chambogo Forest Reserves in the Pare Mountains, Kisimagonja in the West Usambara Mountains, and Nianganje in the Udzungwa Mountains (Figure 3b) are all reported to have been extensively degraded (BirdLife International, 2020a, 2020b; Gereau et al., 2014; Makero & Malimbwi, 2012). Moderate levels of disturbance have also been reported for Uluguru and Mkingu Nature Forest Reserves (Gereau et al., 2014). However, it is important to note that all of these reports are qualitative, and terms such as ‘extensively degraded’ or ‘managed well’ are likely to be used in different ways across the reports. In addition, while GFW measures complete tree cover loss in 28-m pixels, the model predicts tree harvesting pressure (not clear felling). Thus, the GFW data cannot be used to validate the model predictions and vice versa.
## 4 DISCUSSION
Here, we presented a tested protocol for rapid quantitative assessments of degradation in the field, and we compared data collected with this method in Tanzanian forests with optical and radar-based remotely-sensed datasets. Covering over 600 ha, our field data allowed for one of the first large-scale independent tests of these spatial datasets in southern Africa. Radar-based maps (McNicol et al., 2018) appeared to perform well, with even low levels of tree cutting generally coinciding with the detection of biomass loss. However, our study also suggests that there is still an important role for field data, which provided valuable additional information on the types of degradation and likely drivers. For instance, patterns in the field data implied that a major driver of forest degradation is demand for woody resources emanating from larger cities—a pattern that has also been confirmed in radar-based assessments (McNicol et al., 2018). The field data additionally allowed for a finer differentiation of the underlying processes, suggesting, for example, that it is specifically the urban demand for timber and charcoal which drives a lot of harvesting (whereby charcoal production was particularly high close to Dar es Salaam, whereas timber cutting was more widespread), with important consequences for where and how to target conservation interventions.
Degradation was pervasive in the study area, meaning that a focus on deforestation would significantly underestimate losses of carbon and declines in forest quality. Indeed, the ‘Global Forest Watch’ data (GFW), which are commonly used in national forest inventories and conservation assessments, and which measure complete canopy loss at a 28-m spatial resolution, did not routinely detect even high levels of cutting associated with severe impacts on the ground in terms of loss of natural vegetation and carbon. This echoes findings from other studies which show that small-scale deforestation tends to be underestimated by GFW, particularly in areas with low and/or seasonally dry woody cover (Bos et al., 2019; McNicol et al., 2018) where time-series analyses (Verbesselt et al. 2010, 2012) may perform better (Bos et al., 2019); but also in moist forest in Tanzania (Hamunyela et al., 2020) and elsewhere (Bos et al., 2019; Milodowski et al., 2017).
### Figure 3
Comparison of tree cover losses according to Hansen et al. (2013; GFW) and modelled prediction of tree cutting by 2020. Note that the legends are not directly comparable. (a) The percent area (in forest reserves) affected by tree cover losses between 2000 and 2018 according to GFW. (b) The mean predicted percentage of trees (≥15 cm dbh) cut. The model achieved a 10-fold cross-validation correlation between actual and fitted values of 0.68 (±0.04 SE); for details on model parameterisation and performance see Tables S4 and S5 and Figure S2. The general patterns between modelled and actual tree (cover) losses appear similar. Circled areas in (a) contain reserves managed as plantations, where tree cover losses are larger than the model would suggest. Circled areas in (b) experienced less detectable tree cover losses than the model suggests but are highly degraded according to local reports.
This is not a critique of the data generated by GFW, but it serves as a reminder that in areas where smaller scale deforestation and degradation are a significant cause of carbon emission and biodiversity loss, such as southern and eastern Africa (Baccini et al., 2017; McNicol et al., 2018; Pearson et al., 2017; Sedano et al., 2020), it is necessary to go beyond easily accessible deforestation data and to use a combination of approaches to detect these changes.
While radar data correlated well with disturbance on the ground, they cannot detect activities that have little impact on vegetative biomass—such as low levels of harvesting, collection of non-timber products, hunting, or the introduction of invasive alien species (McNicol et al., 2018; Ryan et al., 2012). Using remotely-sensed data, it is also very challenging to distinguish types of disturbances, plantations versus natural forests, and primary vegetation versus the rapid secondary growth following logging (Asner et al., 2004). Here we counted degradation as ‘detected’ even if only one pixel in or around a transect, that is, an area of up to ~20 ha, was classed as degraded or deforested. It is entirely possible that the removed tree(s) were not detected, and that the reported biomass loss was due to an unrelated coincidental process or noise. Finally, given that almost all transects used in this study contained tree stumps, it was not possible to robustly establish the specificity (the true negative rate) of the radar dataset with our data. In summary, while radar data give increasingly accurate wall-to-wall quantifications of degradation, there is still an important role for field data in aiding their interpretation, and providing an ‘even earlier’ warning signal in terms of subtle changes that can be detected before there is any notable impact on canopy or biomass. Similarly, early warning signals can also be provided by ground-based sensing, for example, hemispherical photography and terrestrial LiDAR (Decuyper et al., 2018; Fournier & Hall, 2017).
Capturing the spatial patterns and types of degrading activities, particularly when they are illegal, requires surveying relatively large areas. Field-based inventories and monitoring are however frequently restricted to a small sub-sample of areas of interest (O’Connell, 2018). The framework presented here can be used for quick assessments of large areas without professional training, thereby also allowing for community participation (Danielsen et al., 2011; DeVries et al., 2016). Details can be adapted to the target system and question (but should of course be standardised to ensure comparability; for a recommended set of core measurements, please see the Supporting information: Field Protocol file). In particular, we would recommend using a higher size class resolution than used here and/or detailed dbh measurements. Our models for tree cutting performed less well than those for pole cutting and charcoal burning, which is likely due to tree harvesting >15 cm dbh serving a multitude of purposes, ranging from high-grade export timber to local construction and partly also charcoal production. Differentiating three to five size classes can still be done rapidly by eye, and even detailed dbh measurements are not too time-consuming. Particularly, if combined with the identification of main timber species, this would provide more information on likely markets and scale of operation. Such higher resolution data would also enable estimation of likely levels of sustainability of the resource extraction, whereby a decline in high-value species and/or larger trees are often indicators of unsustainability (Ahrends et al., 2010). In addition, more details, particularly on stem sizes, would also improve estimates of above-ground carbon (loss), which could only be crudely estimated using the simple counts. 
Another useful potential addition is collaborative work with social scientists in order to capture local knowledge, and to understand whether the resource extraction leads to win-lose or lose-lose scenarios locally (Smith et al., 2019). The transects can be done as a stand-alone activity or in addition to more detailed assessments in long-term vegetation plots (The SEOSAW Partnership, 2020), opportunistic botanical sampling or other types of surveys. Rapid transects cannot replace the depth of assessment possible in permanent plots, and large plots are also necessary for the calibration of radar (McNicol et al., 2018) as using narrow transects to relate radar to biomass is very challenging (Réjou-Méchain et al., 2014; Smith, 2018).
A key benefit of field data is that they can provide information on the type of biomass loss (e.g. charcoal, poles, planks or agricultural clearing) and sometimes on the type and sophistication of equipment that was used, allowing insights into the likely drivers and tailoring interventions appropriately (Doggart et al., 2020). Here we showed that while pole cutting may partly be driven by local demand, activities such as tree cutting and charcoal production correlated almost entirely with distances to major cities such as Dar es Salaam. Degradation thus appeared to be mainly driven by energy and timber demand emanating from larger cities and international markets, as opposed to mainly local demand (Ahrends et al., 2010)—a pattern that has been observed throughout southern Africa (McNicol et al., 2018; Sedano et al., 2020). Deforestation on the other hand is thought to be mainly driven by agriculture, highlighting the need for coordinated policy responses (Doggart et al., 2020; Hamunyela et al., 2020). It should also be noted that while the clear spatial patterns meant that degradation was to some extent predictable, dynamics in markets, human behaviour and policies can lead to rapid changes on the ground—such as the introduction of sesame farming in Tanzania (Brockington, 2019; Gross-Camp et al., 2019; Müller et al., 2014). Thus, although models can to some extent be used to extrapolate patterns in space and time, there is a clear need for regularly updated data (Sloan & Pelletier, 2012).
Protection on the ground has had some success in halting degradation but the type of management was less important in explaining patterns of forest condition than demand and accessibility. This suggests that any form of protection can be better than none, and putting land under the tenure and management of local communities might be a more mutually beneficial way to reserve some of the 170,000 km² of forest on general land in Tanzania (Mbwambo et al., 2012) than excluding rural populations from the resources their livelihoods rely upon. Tree cutting in village-owned reserves only slightly exceeded levels in protective forests and nature reserves, and this was to be expected as village land forest reserves often allow sustainable extraction. The effectiveness of village participation in forest management on government-owned land (co-management) could not be robustly assessed because much of the data were collected when joint forest management agreements were in very early stages (Mbwambo et al., 2012).
The early warning provided by both radar and field data compared to GFW is a key advantage, because severe degradation and deforestation often follow the early stages of degradation (FAO, 2011)—a sequence we also observed here. However, in terms of (temporal) data availability, a significant advantage of GFW is that the readily processed data are freely available on an annual basis with global coverage, explaining their widespread use. This is not yet true for radar-based maps; although raw data are now freely available, costs arise in the form of trained technician(s) and fieldwork to relate radar backscatter to biomass. In areas where there already are vegetation plots for calibration and ground-truthing, a trained spatial analyst will need around two weeks (currently ~£1.5k at UK post-graduate salary) to produce biomass maps for an area of ~1 million ha. If no field data are available, around 10 sufficiently sized (~1 ha) vegetation plots are needed at an approximate cost of £2k per plot (in East Africa). Species identification, data cleaning and analysis require approximately 2 months, that is, total costs amount to c. £26k. This is a significant initial investment, but once calibration plots are available, the costs of radar analyses are low compared to those of field surveys. To give an example, a rapid survey of 26 ha in 10 Tanzanian forests in 2016 (with detailed dbh measurements for ~15k trees; ~85% identified to species) cost ~£30k, that is, ~£1.2k per ha. This involved 40 field days with a team of five people, and 4 months herbarium work and data cleaning. If species identification is not required, the costs come down to ~£350 per ha for field work and £100 per ha for data cleaning. (This assumes that time spent in the field is approximately halved; depending on the vegetation, the transects can be done almost at walking pace if species identification is not attempted, that is, covering >1 ha per day is generally realistic.)
Thus, annually updated maps of biomass loss cost a minimum of ~£1.5k for radar versus a minimum of ~£13.5k for rapid surveys (30 transects of 1-km length to capture sufficient variation; the mapped area size depends on levels of heterogeneity and desired accuracy). In practice, a reasonable compromise may be to produce radar-based maps of biomass change at least annually, combined with rapid field surveys at 3–5-year intervals to facilitate a better understanding of the nature and drivers of biomass loss.
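The cost arithmetic in the two paragraphs above can be summarised in a short back-of-envelope sketch. All figures are the indicative GBP estimates quoted in the text (East Africa, UK post-graduate salary), not a costing model; variable names are ours.

```python
# Indicative annual costs of biomass-loss monitoring, using the rough
# per-unit figures quoted in the text (GBP). Back-of-envelope only.

RADAR_ANALYST = 1_500          # ~2 weeks trained analyst, ~1 million ha
CALIBRATION_PLOT = 2_000       # per ~1 ha vegetation plot, if none exist
N_CALIBRATION_PLOTS = 10       # needed when no field data are available

SURVEY_FIELD_PER_HA = 350      # rapid transect fieldwork, no species ID
SURVEY_CLEANING_PER_HA = 100   # data cleaning
N_TRANSECT_HA = 30             # 30 transects of 1-km length (~1 ha each)

radar_recurring = RADAR_ANALYST
# Year-1 radar cost before the ~2 months of species identification,
# data cleaning and analysis that bring the text's total to c. £26k:
radar_first_year = RADAR_ANALYST + N_CALIBRATION_PLOTS * CALIBRATION_PLOT
survey_annual = N_TRANSECT_HA * (SURVEY_FIELD_PER_HA + SURVEY_CLEANING_PER_HA)

print(f"radar, plots exist:        £{radar_recurring:,}")   # £1,500
print(f"radar, year 1 (plots):     £{radar_first_year:,}")  # £21,500
print(f"rapid field survey, year:  £{survey_annual:,}")     # £13,500
```

The sketch makes the trade-off explicit: radar monitoring carries a large one-off calibration cost but very low recurring costs, whereas rapid field surveys cost roughly the same every year.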
Strictly speaking, the method presented here only quantifies woody resource extraction and not necessarily degradation. The latter is challenging to establish—particularly in systems where little is known about regeneration and growth rates. However, while systems adapted to frequent natural disturbance may be resilient to some resource extraction, the selective extraction of larger trees in old-growth forest can negatively impact ecosystem function and biodiversity (Jew et al., 2015; Tripathi et al., 2019; Yguel et al., 2019). In addition, while there is controversy over the role of wood products in carbon storage, the damage to the surrounding vegetation in denser forests, as well as the associated transportation and processing of the timber, tend to lead to substantial emissions (Ingerson, 2009; Pearson et al., 2014). Resource extraction in old-growth forests thus requires careful regulation. The vast majority of extraction recorded here took place in protective (as opposed to productive) reserves, and was consequently mostly unregulated and illegal with no concomitant legal revenue benefits for Tanzania as a state (Milledge et al., 2007).
In conclusion, the consideration of degradation in global forest reporting is important—particularly in southern Africa, where the area affected by degradation is likely to be double the size of the area that is deforested, and overall carbon emissions from forest degradation are likely to exceed those from deforestation (McNicol et al., 2018). We recommend routinely using radar-based monitoring, combined wherever possible with rapid field assessments, to better understand the quality of forests and the reasons for their decline, to provide an early warning, and to allow for informed and timely policy interventions.
ACKNOWLEDGEMENTS
We are very grateful for the contributions of extensive data from Frontier Tanzania, the Tanzanian Forest Conservation Group, WWF Tanzania, the Forestry and Beekeeping Division of the Ministry of Natural Resources and Tourism, the Sokoine University of Agriculture and the University of Dar es Salaam. We would like to thank the many volunteers and data collectors working for these institutions between 1996 and 2010. Permissions for fieldwork were provided by the Tanzanian Commission for Science and Technology. Funding was provided inter alia by the Darwin Initiative (grant 25-019), the Global Environment Facility, Marie Curie Actions (grant MEXT-CT-2004-517098 to R.M.), the Leverhulme Trust’s Valuing the Arc grant to A.B. (P.J.P., R.S.), the Critical Ecosystem Partnership Fund, and the Governments of Finland and Denmark. The Royal Botanic Garden Edinburgh is supported by the Scottish Government’s Rural and Environment Science and Analytical Services Division.
AUTHOR CONTRIBUTIONS
A.A., N.D.B., M.T.B., R.M., P.J.P. and P.M.H. designed the study; A.A. and M.T.B. performed analyses with analytical advice from P.J.P., R.S., C.R. and N.D.; field data were collected by A.A., P.M., S.M., B.M., C.L., C.B., K.D., V.W., N.O., A.R.M., K.P., T.J., E.T.J., and H.B. All authors discussed the results and commented on the manuscript.
ORCID
Antje Ahrends https://orcid.org/0000-0002-5083-7760
Philip J. Platts https://orcid.org/0000-0002-0153-0121
Casey Ryan https://orcid.org/0000-0002-1802-0128
David J. Harris https://orcid.org/0000-0002-6801-2484
REFERENCES
Ahrends, A., Burgess, N. D., Milledge, S. A. H., Bulling, M. T., Fisher, B., Smart, J. C. R., Clarke, G. P., Mhoro, B. E., & Lewis, S. L. (2010). Predictable waves of sequential forest degradation and biodiversity loss spreading from an African city. *Proceedings of the National Academy of Sciences of the USA*, 107(33), 14556–14561. https://doi.org/10.1073/pnas.0914471107
Ahrends, A., Malugu, I., Kindeketa, W., Gross-Camp, N., Burgess, N. D., Freeman, S. ... Hollingsworth, P. M. (2020). Current status and trends in the Tanzanian coastal forests and their woody resources. Royal Botanic Garden Edinburgh, WWF Tanzania Country Office, Tanzania Commission for Science and Technology, University of East Anglia, TRAFFIC East Africa, UNEP World Conservation Monitoring Centre, Dar es Salaam.
Alroy, J. (2017). Effects of habitat disturbance on tropical forest biodiversity. *Proceedings of the National Academy of Sciences of the USA*, 114(23), 6056–6061. https://doi.org/10.1073/pnas.1611855114
Ashagre, B. B., Platts, P. J., Njana, M., Burgess, N. D., Balmford, A., Turner, R. K., & Schaafsma, M. (2018). Integrated modelling for economic valuation of the role of forests and woodlands in drinking water provision to two African cities. *Ecosystem Services*, 32, 50–61. https://doi.org/10.1016/j.ecoser.2018.05.004
Asner, G. P. (2009). Automated mapping of tropical deforestation and forest degradation: CLASlite. *Journal of Applied Remote Sensing*, 3(1), 033543. https://doi.org/10.1117/1.3223675
Asner, G. P., Keller, M., Pereira, J. R., Zweede, J. C., & Silva, J. N. M. (2004). Canopy damage and recovery after selective logging in Amazonia: Field and satellite studies. *Ecological Applications*, 14(sp4), 280–298. https://doi.org/10.1890/01-6019
Baccini, A., Walker, W., Carvalho, L., Farina, M., Sulla-Menashe, D., & Houghton, R. A. (2017). Tropical forests are a net carbon source based on aboveground measurements of gain and loss. *Science*, 358(6360), 230–234. https://doi.org/10.1126/science.aam5962
Baena, S., Boyd, D. S., & Moat, J. (2016). UAVs in pursuit of plant conservation – Real world experiences. *Ecological Informatics*, 47, 2–9. https://doi.org/10.1016/j.ecoinf.2017.01.001
Bailis, R., Ezzati, M., & Kammen, D. M. (2005). Mortality and greenhouse gas impacts of biomass and petroleum energy futures in Africa. *Science*, 308(5718), 98–103. https://doi.org/10.1126/science.1106881
Baker, T. R., Pennington, R. T., Dexter, K. G., Fine, P. V. A., Fortune-Hopkins, H., Honorio, E. N., Huamantupa-Chuquimaco, I., Klitgård, B. B., Lewis, G. P., de Lima, H. C., Ashton, P., Baraloto, C., Davies, S., Donoghue, M. J., Kaye, M., Kress, W. J., Lehmann, C. E. R., Monteagudo, A., Phillips, O. L., & Vasquez, R. (2017). Maximising Synergy among Tropical Plant Systematists, Ecologists, and Evolutionary Biologists. *Trends in Ecology & Evolution*, 32(4), 258–267. http://dx.doi.org/10.1016/j.tree.2017.01.007
Berenguer, E., Ferreira, J., Gardner, T. A., Aragao, L. E., De Camargo, P. B., Cerri, C. E., & Barlow, J. (2014). A large-scale field assessment of carbon stocks in human-modified tropical forests. *Global Change Biology*, 20(12), 3713–3726. https://doi.org/10.1111/gcb.12627
BirdLife International. (2013). Community-based management improves forest condition in an East African biodiversity hotspot… as more species continue to be discovered. https://www.birdlife.org/africa/news/community-based-management-improves-forest-condition-east-african-biodiversity-hotspot%E2%80%A6
BirdLife International. (2020a). Important bird areas factsheet: South Pare Mountains. http://datazone.birdlife.org/site/factsheet/7026
BirdLife International. (2020b). Important bird areas factsheet: West Usambara Mountains. http://datazone.birdlife.org/site/factsheet/west-usambara-mountains-iba-tanzania/text
Blackman, A. (2013). Evaluating forest conservation policies in developing countries using remote sensing data: An introduction and practical guide. *Forest Policy and Economics*, 34, 1–16. https://doi.org/10.1016/j.forpol.2013.04.006
Bos, A. B., De Sy, V., Duchelle, A. E., Herold, M., Martius, C., & Tsendbazdar, N.-E. (2019). Global data and tools for local forest cover loss and REDD+ performance assessment: Accuracy, uncertainty, complementarity and impact. *International Journal of Applied Earth Observation and Geoinformation*, 80, 295–311. https://doi.org/10.1016/j.jag.2019.04.004
Brockington, D. (2019). Persistent peasant poverty and assets. Exploring dynamics of new forms of wealth and poverty in Tanzania 1999–2018. *The Journal of Peasant Studies*, 1–20. https://doi.org/10.1080/0306150.2019.1658081
Cavanagh, C. J., Vedeld, P. O., & Traedal, L. T. (2015). Securitizing REDD+: Problematising the emerging illegal timber trade and forest carbon interface in East Africa. *Geoforum*, 60, 72–82. https://doi.org/10.1016/j.geoforum.2015.01.011
Chave, J., Coomes, D., Jansen, S., Lewis, S. L., Swenson, N. G., & Zanne, A. E. (2009). Towards a worldwide wood economics spectrum. *Ecology Letters*, 12(4), 351–366. https://doi.org/10.1111/j.1461-0248.2009.01285.x
Chave, J., Rejou-Mechain, M., Burquez, A., Chidumayo, E., Colgan, M. S., Delitti, W. B., Duque, A., Eid, T., Fearnside, P. M., Goodman, R. C., Henry, M., Martinez-Yrizar, A., Mugasha, W. A., Muller-Landau, H. C., Mencuccini, M., Nelson, B. W., Ngomanda, A., Nogueira, E. M., Ortiz-Malavassi, E., ... Vieilledent, G. (2014). Improved allometric models to estimate the aboveground biomass of tropical trees. *Global Change Biology*, 20(10), 3177–3190. https://doi.org/10.1111/gcb.12629
Chave, J., Riéra, B., & Dubois, M.-A. (2001). Estimation of biomass in a neotropical forest of French Guiana: Spatial and temporal variability. *Journal of Tropical Ecology*, 17(1), 79–96. https://doi.org/10.1017/s0266467401001055
Danielsen, F., Skutsch, M., Burgess, N. D., Jensen, P. M., Andrianandrasana, H., Karky, B., Lewis, R., Lovett, J. C., Massao, J., Ngaga, Y., Parthiyal, P., Poulsen, M. K., Singh, S. P., Solis, S., Sørensen, M., Tewari, A., Young, R., & Zahabu, E. (2011). At the heart of REDD+: a role for local people in monitoring forests?. *Conservation Letters*, 4(2), 158–167. http://dx.doi.org/10.1111/j.1755-263x.2010.00159.x
Decuyper, M., Mulatu, K. A., Brede, B., Calders, K., Armston, J., Rozendaal, D. M. A., Mora, B., Clevers, J. G. P. W., Kooistra, L., Herold, M., & Bongers, F. (2018). Assessing the structural differences between tropical forest types using terrestrial laser scanning. *Forest Ecology and Management*, 429, 327–335. https://doi.org/10.1016/j.foreco.2018.07.032
DeVries, B., Prathihast, A. K., Verbesselt, J., Kooistra, L., & Herold, M. (2016). Characterizing forest change using community-based monitoring data and Landsat time series. *PLoS One*, 11(3), e0147121. https://doi.org/10.1371/journal.pone.0147121
DeVries, B., Verbesselt, J., Kooistra, L., & Herold, M. (2015). Robust monitoring of small-scale forest disturbances in a tropical montane forest using Landsat time series. *Remote Sensing of Environment*, 161, 107–121. https://doi.org/10.1016/j.rse.2015.02.012
Doggart, N., Morgan-Brown, T., Lyimo, E., Mbilinyi, B., Meshack, C. K., Sallu, S. M., & Spracklen, D. V. (2020). Agriculture is the main driver of deforestation in Tanzania. *Environmental Research Letters*, 15(3). https://doi.org/10.1088/1748-9326/ab6b35
Elith, J., Leathwick, J. R., & Hastie, T. (2008). A working guide to boosted regression trees. *J Anim Ecol*, 77(4), 802–813. https://doi.org/10.1111/j.1365-2656.2008.01390.x
Ene, L. T., Nasset, E., Gobakken, T., Bollandsås, O. M., Mauya, E. W., & Zahabu, E. (2017). Large-scale estimation of change in aboveground biomass in miombo woodlands using airborne laser scanning and national forest inventory data. *Remote Sensing of Environment*, 188, 106–117. https://doi.org/10.1016/j.rse.2016.10.046
FAO. (2011). Assessing forest degradation. Towards the development of globally applicable guidelines. Forest Resources Assessment Working Paper 177. UN Food and Agriculture Organization, Rome.
Fisher, B., Turner, R. K., Burgess, N. D., Swetnam, R. D., Green, J., Green, R. E., & Balmford, A. (2011). Measuring, modeling and mapping ecosystem services in the Eastern Arc Mountains of Tanzania. *Progress in Physical Geography: Earth and Environment*, 35(5), 595–611. https://doi.org/10.1177/0309133311422968
Fournier, R. A., & Hall, R. J. (2017). *Hemispherical photography in forest science: Theory, methods, applications*. Springer.
Frontier Tanzania. (2001). *Udzungwa mountains biodiversity survey – Methods manual*. In K. Doody, K. Howell, & E. Fanning (Eds.), *Report for the Udzungwa mountains forest management and biodiversity conservation project MEMA* (pp. 1–55).
Fuller, D. O. (2006). Tropical forest monitoring and remote sensing: A new era of transparency in forest governance? *Singapore Journal of Tropical Geography*, 27(1), 15–29. https://doi.org/10.1111/j.1467-9493.2006.00237.x
Furukawa, T., Fujiwara, K., Kiboi, S. K., & Mutiso, P. B. C. (2011). Can stumps tell what people want: Pattern and preference of informal wood extraction in an urban forest of Nairobi, Kenya. *Biological Conservation*, 144(12), 3047–3054. https://doi.org/10.1016/j.biocon.2011.09.011
Gereau, R. E., Kariuki, M., Ndang’ang’a, P. K., Werema, C., & Muoria, P. (2014). *Biodiversity status and trends in the Eastern Arc Mountains and coastal forests of Kenya and Tanzania region, 2008–2013*. BirdLife International Africa Partnership Secretariat.
Ghazoul, J., Burivalova, Z., Garcia-Ulloa, J., & King, L. A. (2015). Conceptualizing forest degradation. *Trends in Ecology & Evolution*, 30(10), 622–632. https://doi.org/10.1016/j.tree.2015.08.001
Gross-Camp, N., Rodriguez, I., Martin, A., Inturias, M., & Massao, G. (2019). The type of land we want: Exploring the limits of community forestry in Tanzania and Bolivia. *Sustainability*, 11(6). https://doi.org/10.3390/su11061643
Hamunyela, E., Brandt, P., Shirima, D., Do, H. T. T., Herold, M., & Roman-Cuesta, R. M. (2020). Space-time detection of deforestation, forest degradation and regeneration in montane forests of Eastern Tanzania. *International Journal of Applied Earth Observation and Geoinformation*, 88. https://doi.org/10.1016/j.jag.2020.102063
Hansen, M. C., Potapov, P. V., Moore, R., Hancher, M., Turubanova, S. A., Tyukavina, A., & Townshend, J. R. G. (2013). High-resolution global maps of 21st-century forest cover change. *Science*, 342(6160), 850–853. https://doi.org/10.1126/science.1244963
Hosonuma, N., Herold, M., De Sy, V., De Fries, R. S., Brockhaus, M., Verchot, L., Angelsen, A., & Romijn, E. (2012). An assessment of deforestation and forest degradation drivers in developing countries. *Environmental Research Letters*, 7(4), 044009. https://doi.org/10.1088/1748-9326/7/4/044009
Ingerson, A. (2009). *Wood products and carbon storage: Can increased production help solve the climate crisis?* Wilderness Society.
Jew, E. K. K., Loos, J., Dougill, A. J., Sallu, S. M., & Benton, T. G. (2015). Butterfly communities in miombo woodland: Biodiversity declines with increasing woodland utilisation. *Biological Conservation*, 192, 436–444. https://doi.org/10.1016/j.biocon.2015.10.022
Lukumbuyiza, M., & Sianga, C. (2017). *Overview of the timber trade in East and Southern Africa*. TRAFFIC.
Makero, J., & Malimbi, R. (2012). Extent of illegal harvesting on availability of timber species in Nyangarje Forest Reserve, Tanzania. *International Forestry Review*, 14(2), 177–183. https://doi.org/10.1505/146554812800923372
Mbwambo, L., Eid, T., Malimbwi, R. E., Zahabu, E., Kajembe, G. C., & Luoga, E. (2012). Impact of decentralised forest management on forest resource conditions in Tanzania. *Forests, Trees and Livelihoods*, 21(2), 97–113. https://doi.org/10.1080/14728028.2012.698583
McNicol, I. M., Ryan, C. M., & Mitchard, E. T. A. (2018). Carbon losses from deforestation and widespread degradation offset by extensive growth in African woodlands. *Nature Communications*, 9(1), 3045. https://doi.org/10.1038/s41467-018-05386-z
Miettinen, J., Shi, C., & Liew, S. C. (2011). Deforestation rates in insular Southeast Asia between 2000 and 2010. *Global Change Biology*, 17(7), 2261–2270. https://doi.org/10.1111/j.1365-2486.2011.02398.x
Milledge, S., Gelvas, I., & Ahrends, A. (2007). Forestry, governance and national development: Lessons learned from a logging boom in Southern Tanzania. TRAFFIC East/Southern Africa, Tanzania Development Partners Group, Tanzania Ministry of Natural Resources and Tourism, Dar es Salaam.
Milodowski, D. T., Mitchard, E. T. A., & Williams, M. (2017). Forest loss maps from regional satellite monitoring systematically underestimate deforestation in two rapidly changing parts of the Amazon. *Environmental Research Letters*, 12(9). https://doi.org/10.1088/1748-9326/aa7e1e
Mitchell, A. L., Rosenqvist, A., & Mora, B. (2017). Current remote sensing approaches to monitoring forest degradation in support of countries measurement, reporting and verification (MRV) systems for REDD. *Carbon Balance and Management*, 12(1), 9. https://doi.org/10.1186/s13021-017-0078-9
Mittermeier, R. A., Turner, W. R., Larsen, F. W., Brooks, T. M., & Gascon, C. (2011). Global biodiversity conservation: The critical role of hotspots. In F. E. Zachos & J. C. Habel (Eds.), *Biodiversity hotspots* (pp. 3–22). Springer.
Müller, D., Sun, Z., Vongvisouk, T., Pflugmacher, D., Xu, J., & Mertz, O. (2014). Regime shifts limit the predictability of land-system change. *Global Environmental Change*, 28, 75–83. https://doi.org/10.1016/j.gloenvcha.2014.06.003
O’Connell, M. (2018). Detecting and measuring threats to biodiversity: The use of spatial ecological data to derive and evaluate conservation action. *Ecological Informatics*, 47. https://doi.org/10.1016/j.ecoinf.2018.07.001
Olson, D. M., & Dinerstein, E. (2002). The Global 200: Priority ecoregions for global conservation. *Annals of the Missouri Botanical Garden*, 89(2), 199–224. https://doi.org/10.2307/3298564
Ota, T., Ahmed, O. S., Minn, S. T., Khai, T. C., Mizoue, N., & Yoshida, S. (2019). Estimating selective logging impacts on aboveground biomass in tropical forests using digital aerial photography obtained before and after a logging event from an unmanned aerial vehicle. *Forest Ecology and Management*, 433, 162–169. https://doi.org/10.1016/j.foreco.2018.10.058
Pearson, T. R. H., Brown, S., & Casarim, F. M. (2014). Carbon emissions from tropical forest degradation caused by logging. *Environmental Research Letters*, 9(3). https://doi.org/10.1088/1748-9326/9/3/034017
Pearson, T. R. H., Brown, S., Murray, L., & Sidman, G. (2017). Greenhouse gas emissions from tropical forest degradation: An underestimated source. *Carbon Balance and Management*, 12(1), 3. https://doi.org/10.1186/s13021-017-0072-2
Réjou-Méchain, M., Muller-Landau, H. C., Detto, M., Thomas, S. C., Le Toan, T., Saatchi, S. S., & Chave, J. (2014). Local spatial structure of forest biomass and its consequences for remote sensing of carbon stocks. *Biogeosciences*, 11(23), 6827–6840. https://doi.org/10.5194/bg-11-6827-2014
Romijn, E., Lantican, C. B., Herold, M., Lindquist, E., Ochieng, R., Wijaya, A., & Verchot, L. (2015). Assessing change in national forest monitoring capacities of 99 tropical countries. *Forest Ecology and Management*, 352, 109–123. https://doi.org/10.1016/j.foreco.2015.06.003
Ryan, C. M., Berry, N. J., & Joshi, N. (2014). Quantifying the causes of deforestation and degradation and creating transparent REDD+ baselines: A method and case study from central Mozambique. *Applied Geography*, 53, 45–54. https://doi.org/10.1016/j.apgeog.2014.05.014
Ryan, C. M., Hill, T., Woollen, E., Ghee, C., Mitchard, E., Cassells, G., & Williams, M. (2012). Quantifying small-scale deforestation and forest degradation in African woodlands using radar imagery. *Global Change Biology*, 18(1), 243–257. https://doi.org/10.1111/j.1365-2486.2011.02551.x
Ryan, C. M., Pritchard, R., McNicol, I., Owen, M., Fisher, J. A., & Lehmann, C. (2016). Ecosystem services from southern African woodlands and their future under global change. *Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences*, 371(1703). https://doi.org/10.1098/rstb.2015.0312
Sasaki, N., & Putz, F. E. (2009). Critical need for new definitions of “forest” and “forest degradation” in global climate change agreements. *Conservation Letters*, 2(5), 226–232. https://doi.org/10.1111/j.1755-263X.2009.00067.x
Schaafsma M., Morse-Jones S., Posen P., Swetnam R.D., Balmford A., Bateman I.J., Burgess N.D., Chamshama S.A.O., Fisher B., Freeman T., Geofrey V., Green R.E., Hepelwa A.S., Hernández-Sirvent A., Hess S., Kajembe G.C., Kayharara G., Kilonzo M., Kulindwa K., Lund J.F., Madoffe S.S., Mbwambo L., Meilby H., Ngaga Y.M., Theilade I., Treue T., van Beukering P., Vyamana V.G., & Turner R.K. (2014). The importance of local forest benefits: Economic valuation of Non-Timber Forest Products in the Eastern Arc Mountains in Tanzania. *Global Environmental Change*, 24, 295–305. http://dx.doi.org/10.1016/j.gloenvcha.2013.08.018.
Sedano, F., Lisboa, S., Duncanson, L., Ribeiro, N., Sitoe, A., Sahajpal, R., & Tucker, C. (2020). Monitoring intra and inter annual dynamics of forest degradation from charcoal production in Southern Africa with Sentinel – 2 imagery. *International Journal of Applied Earth Observation and Geoinformation*, 92. https://doi.org/10.1016/j.jag.2020.102184
Sexton, J. O., Song, X.-P., Feng, M., Noojipady, P., Anand, A., Huang, C., & Townshend, J. R. (2013). Global, 30-m resolution continuous fields of tree cover: Landsat-based rescaling of MODIS vegetation continuous fields with lidar-based estimates of error. *International Journal of Digital Earth*, 6(5), 427–448. https://doi.org/10.1080/17538947.2013.786146
Sloan, S., & Pelletier, J. (2012). How accurately may we project tropical forest-cover change? A validation of a forward-looking baseline for REDD. *Global Environmental Change*, 22(2), 440–453. https://doi.org/10.1016/j.gloenvcha.2012.02.001
Sloan, S., & Sayer, J. A. (2015). Forest Resources Assessment of 2015 shows positive global trends but forest loss and degradation persist in poor tropical countries. *Forest Ecology and Management*, 352, 134–145. https://doi.org/10.1016/j.foreco.2015.06.013
Smith, F. (2018). Using satellite remote sensing to detect forest degradation in the coastal forests of Tanzania (dissertation for the degree of MSc in Geographical Information Science), University of Edinburgh.
Smith, H. E., Ryan, C. M., Vollmer, F., Woollen, E., Keane, A., Fisher, J. A., & Mahamane, M. (2019). Impacts of land use intensification on human wellbeing: Evidence from rural Mozambique. *Global Environmental Change*, 59. https://doi.org/10.1016/j.gloenvcha.2019.101976
Stattersfield, A. J. (1998). *Endemic bird areas of the world: Priorities for biodiversity conservation*. BirdLife International.
Swetnam, R. D., Fisher, B., Mbilinyi, B. P., Munishi, P. K., Willcock, S., Ricketts, T., ... Lewis, S. L. (2011). Mapping socio-economic scenarios of land cover change: A GIS method to enable ecosystem service modelling. *Journal of Environmental Management*, 92(3), 563–574. https://doi.org/10.1016/j.jenvman.2010.09.007
The SEOSAW Partnership. (2020). A network to understand the changing socio-ecology of the southern African woodlands (SEOSAW): Challenges, benefits, and methods. *Plants, People, Planet*. https://doi.org/10.1002/ppp3.10168
Thompson, I. D., Guariguata, M. R., Okabe, K., Bahamondez, C., Nasi, R., Heymell, V., & Sabogal, C. (2013). An operational framework for defining and monitoring forest degradation. *Ecology and Society*, 18(2). https://doi.org/10.5751/ES-05443-180220
Tripathi, H. G., Mzumara, T. I., Martin, R. O., Parr, C. L., Phiri, C., & Ryan, C. M. (2019). Dissimilar effects of human and elephant disturbance on woodland structure and functional bird diversity in the mopane woodlands of Zambia. *Landscape Ecology*, 34(2), 357–371. https://doi.org/10.1007/s10980-019-00774-2
United Republic of Tanzania. (2001). *Forestry outlook studies Africa*. Ministry of Natural Resources and Tourism.
Verbesselt, J., Hyndman, R., Newnham, G., & Culvenor, D. (2010). Detecting trend and seasonal changes in satellite image time series. *Remote Sensing of Environment*, 114(1), 106–115. https://doi.org/10.1016/j.rse.2009.08.014
Verbesselt, J., Zeileis, A., & Herold, M. (2012). Near real-time disturbance detection using satellite image time series. *Remote Sensing of Environment*, 123, 98–108. https://doi.org/10.1016/j.rse.2012.02.022
Woodcock, C. E., Loveland, T. R., Herold, M., & Bauer, M. E. (2020). Transitioning from change detection to monitoring with remote sensing: A paradigm shift. *Remote Sensing of Environment*, 238. https://doi.org/10.1016/j.rse.2019.111558
Yguel, B., Piponiot, C., Mirabel, A., Dourdain, A., Hérault, B., Gourlet-Fleury, S., ... Fontaine, C. (2019). Beyond species richness and biomass: Impact of selective logging and silvicultural treatments on the functional composition of a neotropical forest. *Forest Ecology and Management*, 433, 528–534. https://doi.org/10.1016/j.foreco.2018.11.022
**SUPPORTING INFORMATION**
Additional supporting information may be found online in the Supporting Information section.
**How to cite this article:** Ahrends A, Bulling MT, Platts PJ, et al. Detecting and predicting forest degradation: A comparison of ground surveys and remote sensing in Tanzanian forests. *Plants, People, Planet*. 2021;3:268–281. https://doi.org/10.1002/ppp3.10189 |
Interspecific Hybridization of Sturgeon Species Affects Differently Their Gonadal Development
Zuzana Linhartová¹, Miloš Havelka¹,²*, Martin Pšenička¹, Martin Flajšhans¹
¹South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Faculty of Fisheries and Protection of Waters, University of South Bohemia in České Budějovice, Vodňany, Czech Republic
²Faculty and Graduate School of Fisheries Sciences, Hokkaido University, Hakodate, Japan
*Corresponding author: firstname.lastname@example.org
ABSTRACT
Linhartová Z., Havelka M., Pšenička M., Flajšhans M. (2018): Interspecific hybridization of sturgeon species affects differently their gonadal development. Czech J. Anim. Sci., 63, 1–10.
Gonad development in fish is generally assumed to be negatively influenced by interspecific hybridization, resulting in sterility or sub-sterility. However, this is not the case in sturgeons (Acipenseridae), in which fertile hybrids are common. In the present study, we investigated gonad development in several sturgeon interspecific hybrids and purebred species. Six interspecific hybrid groups and three purebred groups were analyzed including 20 hybrid specimens with even ploidy, 40 specimens having odd ploidy levels, and 30 purebred specimens. Hybrids of species with the same ploidy (even ploidy – 2n, 4n) exhibited normally developed gonads similar to those seen in purebred specimens. In contrast, hybrids of species differing in ploidy (odd ploidy – 3n) did not display fully developed gonads. Ovaries were composed of oocytes or nests of differentiating oocytes that ceased development in early stages of meiosis (zygotene to pachytene) with a higher content of adipose and apoptotic tissue. Testes contained single spermatogonia along with Sertoli cells and spaces lacking germ cells. The obtained results showed that gonad development was influenced by genetic origin and ploidy of the sturgeon hybrids and were consistent with full fertility of hybrids with even ploidy. Sterility of females, but possibly limited fertility of males, is suggested for hybrids with odd ploidy.
Keywords: Acipenseriformes; gonad histology; hybrid fertility; hybrid gametogenesis
Hybridization is presumed to have evolutionary significance in speciation of plants (Soltis and Soltis 2009) and animals (Dowling and Secor 1997), and has been widely utilized in plant and animal breeding (Allard 1999; Rosati et al. 2007). Hybridization may generate novel phenotypes, including advantages of hybrid vigour or positive heterosis and disadvantages mediated by intrinsic or environmentally mediated incompatibilities. Among fish, interspecific hybridization is not rare. It occurs in natural populations (Fahy et al. 1988) and is used in fish breeding (Bartley et al. 2001). In general, the fertility of interspecific hybrids depends on the compatibility of karyotypes and their structure in the parent species. Hybridization between closely related species enables appropriate chromosome pairing and segregation in meiosis. As a result, meiosis is not disrupted, viable gametes are formed, and such interspecific hybrids, which have the same number of chromosomes as the parent species, are fertile (Seehausen 2004). Hybridization of species differing in chromosome number causes meiotic mismatch of parent chromosomes or karyotypes and may result in sterility of the allopolyploid hybrids (Seehausen 2004).
Sturgeons (order Acipenseriformes) are one of the oldest groups of fish, having evolved more than 200 Mya (Bemis et al. 1997). Their evolution is inherently connected to several polyploidization and hybridization events (Fontana et al. 2007) and current species are distinguished by two scales of ploidy: \((i)\) the evolutionary scale, which presumes tetraploid (4n) – octaploid (8n) – dodecaploid (12n) relationships (Birstein et al. 1997) and refers to ancient ploidy; and \((ii)\) the functional scale, which presumes diploid (2n) – tetraploid (4n) – hexaploid (6n) relationships (Fontana et al. 2007) arising from significant functional genome re-diploidization in sturgeon evolution (Havelka et al. 2013).
Sturgeons are prone to interspecific hybridization under natural conditions (e.g. Ludwig et al. 2009) as well as in artificial propagation (Zhang et al. 2013). It is generally considered that sturgeon hybrids resulting from crosses of species of the same ploidy exhibit the ploidy of the parents and are fertile (Arefyev 1997), while hybrids of species differing in ploidy levels exhibit a ploidy intermediate to those of the parents (Flajshans and Vajcova 2000) and are sterile or partially sterile (Vasilev et al. 2014). Although this assumption is widely accepted, data on gonad development in sturgeon hybrids are scarce in the literature.
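The inheritance rule above follows from simple arithmetic: each parent contributes half of its chromosome sets to the F1 hybrid through its gametes. A minimal illustrative sketch (the function name is ours, not from the study):

```python
def hybrid_functional_ploidy(dam_ploidy: int, sire_ploidy: int) -> int:
    """Functional ploidy of an F1 sturgeon hybrid: each parent
    contributes half of its chromosome sets via its gametes."""
    return dam_ploidy // 2 + sire_ploidy // 2

# Same-ploidy crosses retain the parental ploidy (fertile hybrids):
print(hybrid_functional_ploidy(2, 2))  # 2, e.g. sterlet x beluga
print(hybrid_functional_ploidy(4, 4))  # 4, e.g. Siberian x Russian sturgeon

# Crosses of parents differing in ploidy give an intermediate,
# odd ploidy (sterile or partially sterile hybrids):
print(hybrid_functional_ploidy(4, 2))  # 3, e.g. Siberian sturgeon x sterlet
```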
Determining the point at which gonad development breaks down in hybrids is important for understanding both the consequences of hybridization for fitness and the extent of hybrid fertility. This requires characterization of gonad development in sturgeon hybrids of differing origins and ploidy. The goal of this study was to investigate whether the genetic origin and ploidy of six sturgeon hybrid groups differentially affect their gonad development. The results may have significance for sturgeon aquaculture, in which crossbreeds are utilized for their hybrid vigour (Bronzi et al. 2011), as well as for conservation of wild populations, as interspecific hybridization is considered a serious genetic threat to endangered sturgeon populations (Ludwig et al. 2009), and fertile hybrids may contribute to the problem.
**MATERIAL AND METHODS**
*Ethics.* The study was conducted in the aquaculture facility of the Genetic Fisheries Center at the Faculty of Fisheries and Protection of Waters (FFPW), University of South Bohemia in České Budějovice (USB), in Vodňany, Czech Republic. All experiments were carried out in accordance with the Animal Research Committee of the FFPW. Fish were maintained according to the principles based on the EU harmonized animal welfare act of the Czech Republic and principles of laboratory animal care in compliance with the national law (Act No. 246/1992 on the protection of animals against cruelty).
*Fish and rearing conditions.* The hybrid and purebred groups under study were established by factorial mating of sterlet *Acipenser ruthenus* L. 1758 (2n), beluga *Huso huso* L. 1758 (2n), Siberian sturgeon *Acipenser baerii* Brandt 1869 (4n), and Russian sturgeon *Acipenser gueldenstaedtii* Brandt & Ratzeburg 1833 (4n), with all ploidy levels referring to the functional scale (Table 1; Supplementary Table S1 in Supplementary Online Material (SOM)). Ploidy and genome size of these species were described by Bytyutskyy et al. (2012, 2014).
Initial rearing of larvae from each mating was carried out in separate 0.3 m$^3$ indoor tanks. The larvae were fed *Artemia* nauplii and chopped sludge worms (*Tubifex tubifex*) *ad libitum*. After 100 days of initial rearing, the progeny of each mating were moved to separate 3.5 m$^3$ outdoor tanks with an average temperature of 22°C and an initial stocking density of 7 kg/m$^3$, and fed *ad libitum* a commercial diet (Coppens® Start Premium; Coppens International B.V., the Netherlands) containing 54% of protein, 15% of fat, 1% of crude fibre, and 9.4% of ash. After the first summer, each fish was marked with a Visible Implant Elastomer (VIE) tag (Northwest Marine Technologies, USA) on the ventral side of the rostrum to indicate group origin and transferred for overwintering to 4 m$^3$ indoor tanks at 4°C, without feeding. During the subsequent rearing season, fish were held in 3.5 m$^3$ outdoor tanks with an average temperature of 22°C and fed daily at 4% of fish biomass a commercial diet (Coppens®
Supreme-10 containing 49% of protein, 10% of fat, 0.8% of crude fibre, and 7.9% of ash). At the age of two years, individual Passive Integrated Transponder (PIT) tags (134.2 kHz; AEG Comp., Germany) were implanted subcutaneously. The fish were then reared in outdoor earth ponds at an initial stocking density of 25 kg/m$^3$ and fed the abovementioned commercial diet daily from April to October at 4% of fish biomass until 4–6 years old (671.7 ± 438.4 g mean body mass ($M_B$) and 565.8 ± 133.0 mm mean total length ($L_T$)), at which time histology was carried out. The parameters specified above represented extensive rearing conditions. The groups comprised four interspecific hybrid groups and one purebred control: SR$_5$ = Siberian sturgeon × Russian sturgeon (5 years), SSt$_4$ = Siberian sturgeon × sterlet (4 years), RSt$_6$ = Russian sturgeon × sterlet (6 years), StB$_5$ = sterlet × beluga (5 years), and SS$_5$ = Siberian sturgeon control group (5 years).
Separate ongrowing of a second group of sturgeon larvae and juveniles was conducted in 0.5 m$^3$ recirculating indoor tanks at a mean water temperature of 15°C, with feeding as described above. Marking with VIE tags was done as described, and all tagged fish-of-the-year were stocked together at a density of 10 kg/m$^3$ for further indoor rearing in 3.2 m$^3$ tanks at 16–18°C, at a feeding rate of 4% of the fish biomass. These parameters represented intensive rearing conditions. Histological analysis was conducted at 1 year of age (328.0 ± 143.2 g mean $M_B$ and 390.3 ± 90.6 mm mean $L_T$). Two hybrid groups (SSt$_1$ = Siberian sturgeon × sterlet, StS$_1$ = sterlet × Siberian sturgeon) and two purebred control groups (SS$_1$ = Siberian sturgeon, StSt$_1$ = sterlet) were included.
The examination of gonads was carried out in April 2014 in the 4 to 6-year-old fish and in April 2015 in the 1-year-old fish. Fish were identified according to individual PIT or VIE tags. Ten fish from each hybrid and control group were anaesthetized by immersion in 0.07 ml/l clove oil and sacrificed; gonads were dissected, and fat tissue was separated from the gonads of the 4 to 6-year-old fish (not possible for the gonads of 1-year-old fish due to their size). The gonads were washed in phosphate buffered saline (PBS, 248 mOsm/kg, pH 8) (Sigma-Aldrich, USA), cut into small pieces, fixed in Bouin’s fixative (Sigma-Aldrich) overnight, and stored at 4°C in 80% ethanol until further processing.
*Histology.* Pieces (1–3 cm$^3$) of the middle part of each gonad were cut, dehydrated in an ethanol-xylene series, embedded in paraffin blocks, cut into 5 μm sections using a rotary microtome (Diapath Galileo, Italy), and placed on glass slides using a water bath (42°C). Three preparations were made from each gonad sample. Paraffin slides were machine stained with haematoxylin and eosin (Tissue-Tek DRS 2000; Sakura, USA) according to standard procedures. The stage of gonad development was evaluated from histological sections under an optical microscope (Olympus BH2; Olympus Corp., Japan) at ×200 and ×400 magnification, photographed with a Nikon 5100 camera (Nikon, Japan), and analyzed using Olympus MicroImage software (Version 4.0 for MS Windows 95/NT/98).
### Table 1. Analyzed hybrid and purebred (control) groups including species origin, number of dams and sires used for factorial mating (in square brackets), age, functional ploidy levels (in brackets), number of specimens processed for histology ($n$), and mean weight ± standard deviation (SD)
| Dam species (ploidy) [No. of dams] | Sire species (ploidy) [No. of sires] | Code | Age (years) | Ploidy level | $n$ | Mean weight ± SD (g) |
|------------------------------------|--------------------------------------|------|-------------|--------------|-----|---------------------|
| **Hybrid groups** | | | | | | |
| Siberian sturgeon (4n) [1]| Russian sturgeon (4n) [1] | SR$_5$ | 5 | 4n | 10 | 795.00 ± 295.43 |
| Siberian sturgeon (4n) [2]| sterlet (2n) [2] | SSt$_4$ | 4 | 3n | 10 | 143.00 ± 106.26 |
| Siberian sturgeon (4n) [3]| sterlet (2n) [3] | SSt$_1$ | 1 | 3n | 10 | 457.50 ± 20.48 |
| Russian sturgeon (4n) [1]| sterlet (2n) [3] | RSt$_6$ | 6 | 3n | 10 | 982.22 ± 198.08 |
| sterlet (2n) [3] | Siberian sturgeon (4n) [3] | StS$_1$ | 1 | 3n | 10 | 390.19 ± 49.57 |
| sterlet (2n) [3] | beluga (2n) [5] | StB$_5$ | 5 | 2n | 10 | 854.44 ± 515.56 |
| **Purebred groups** | | | | | | |
| Siberian sturgeon (4n) [1]| Siberian sturgeon (4n) [3] | SS$_5$ | 5 | 4n | 10 | 632.00 ± 295.38 |
| Siberian sturgeon (4n) [3]| Siberian sturgeon (4n) [3] | SS$_1$ | 1 | 4n | 10 | 358.89 ± 79.13 |
| sterlet (2n) [3] | sterlet (2n) [3] | StSt$_1$ | 1 | 2n | 10 | 105.78 ± 30.02 |
*Evaluation of cell apoptosis.* A colorimetric non-isotopic FragEL™ DNA Fragmentation Detection Kit (QIA33; Merck Millipore, USA) was used for the quantification and morphological characterization of normal and apoptotic gonad cells in paraffin-embedded tissue sections of all fish. The kit allows recognition of apoptotic nuclei in paraffin-embedded tissue sections by Fragment End Labeling (FragEL™) of DNA. Moreover, counterstaining with methyl green aids in the morphological evaluation of normal and apoptotic cells. Apoptosis was detected as dark labelling of DNA breaks in cell nuclei according to the manufacturer’s protocol. Photographs were taken using an optical microscope (Olympus BH2; Olympus Corp.) at ×200 magnification with a Nikon 5100 camera (Nikon, Japan).
*Molecular analyses.* As suitable markers for direct identification of the sturgeon species under study were not available, an alternative approach based on the mtDNA control region and nuclear markers (microsatellites) was used to investigate the genetic makeup of the parent fish.
The genomic DNA was extracted from fin-clips of all parent fish using the NucleoSpin® tissue kit (MACHEREY-NAGEL, Germany). A standard polymerase chain reaction (PCR) protocol was followed to amplify the mtDNA fragment of the control region (Mugue et al. 2008) using forward primer AHR3 (CATACCATAATGTTTCATCTACC) and reverse primer DL651 (ATCTTAACATCTTCAGTGT). The PCR was carried out in 30 µl containing 0.25 µM of each primer, 75 mM Tris-HCl (pH 8.8), 20 mM (NH₄)₂SO₄, 0.01% Tween 20, 2.5 mM MgCl₂, 800 µM dNTP, 2.5 U Taq-Purple DNA polymerase, and 25 ng of DNA template. The following cycling conditions were used: 95°C for 120 s; 5 cycles of 95°C for 60 s, 53°C for 60 s, and 72°C for 60 s; 30 cycles of 95°C for 30 s, 53°C for 45 s, and 72°C for 60 s; and a final extension at 72°C for 12 min. The PCR products were purified and sequenced by Macrogen Europe Inc. (the Netherlands) using primer AHR3. Partial sequences of the mtDNA control region were trimmed to 550 bp and aligned in GENEIOUS 6.1.8 software (http://www.geneious.com), and the resulting haplotypes were searched against the NCBI nucleotide database using Mega-BLAST.
The multidimensional factorial correspondence analysis (FCA) was used to reveal the origin of the parent fish based on nuclear markers. For this purpose, the following DNA samples were additionally included in the analysis (number of samples in brackets): sterlet (19), beluga (19), Siberian sturgeon (15), Russian sturgeon (22), sterlet × beluga (10), Russian sturgeon × sterlet (10), Siberian sturgeon × Russian sturgeon (10), sterlet × Siberian sturgeon (10), Siberian sturgeon × sterlet (10). The microsatellite genotypes of these specimens were used to form reference clusters against which the genotypes of parental individuals were compared. Nine markers including *Afu* 19, *Afu* 68 (May et al. 1997); *Aox* 27, *Aox* 45 (King et al. 2001); and *Spl* 101, *Spl* 105, *Spl* 107, *Spl* 163, and *Spl* 173 (McQuown et al. 2000) were used for the analysis. Amplification was carried out according to the protocol described by Havelka et al. (2013). Microsatellite fragment analysis was performed on a 3500 ABI Genetic Analyzer (Applied Biosystems, USA) using a GeneScan LIZ 600 size standard (Applied Biosystems), and genotypes were scored in GENEIOUS 6.1.8 (http://www.geneious.com), using Microsatellite Plugin 1.4. Genetic relationships among individuals based on the multilocus genotypes were visualized by FCA in GENETIX software (Version 4.05, 2004) for MS Windows. This enabled visualization of data in multidimensional space, with no a priori assumptions on grouping, using each allele as an independent variable.
**RESULTS**
All parent fish had mtDNA haplotypes (Supplementary Table S2 in SOM) corresponding to their species, showing no evidence of maternal gene introgression from other sturgeon species. The nuclear markers placed the parent fish in the clusters of *A. ruthenus*, *A. baerii*, *A. gueldenstaedtii*, and *H. huso* (Figure 1). Hence, the nuclear markers, together with the results of the mtDNA analysis, showed the parent fish to be pure specimens of their respective species.
**Gonad histology.** Gonad development was assessed by external appearance and by histological examination of the inner structure and localization of germ cells. The sex ratio was approximately 1:1 (50 ± 5% males and females), and only 3 intersex individuals were identified, all in the purebred sterlet group (StSt$_1$). In all 4 to 6-year-old fish, the gonads were at maturity stage II, clearly discernible, and surrounded by visceral fat in the cranial and, mostly, median regions of the gonads. Testes had a flat structure, and colour varied from pinkish to white. Ovaries were clearly distinguished from testes by rugae (ovigerous lamellae), and colour varied from white to yellow. In all one-year-old fish, the gonadal ridges were at maturity stage I, with a narrow tube-like structure, and were poorly distinguishable because of high fat content. Sex was distinguished by the formation of folds and a lateral longitudinal fissure on the ovary and by colour, with testicular tissue milky white and ovarian tissue white or yellowish.
The 4 to 6-year-old hybrids of species with the same ploidy (sterlet × beluga – StB$_5$ and Siberian sturgeon × Russian sturgeon – SR$_5$) exhibited normally developed gonads comparable to those of the purebred Siberian sturgeon (SS$_5$). The ovaries comprised nests of previtellogenic oocytes with large nuclei containing multiple nucleoli (Figure 2A) or nests of oocytes at earlier stages of the first meiotic prophase, together with lymphocytes (Figure 2B). The cysts of the testes consisted of spermatogonia in a loose connective tissue (Figure 2C) as well as spermatocytes (Figure 2D). This level of gonad development was comparable to that of the purebred group (SS$_5$). In the purebred ovaries, spherical dictyotene-stage oocytes were surrounded by flattened follicle cells and a thick basal lamina covered by thecal cells. The ovaries also comprised nests of oocytes (= ovarian nests) arrested in the primary growth phase (chromatin-nucleolus stage; Figure 2E). In the testes, spermatogonia in connective tissue showed the beginnings of mitotic division (Figure 2F).
Hybrids of sturgeon species differing in ploidy (Siberian sturgeon × sterlet – SSt$_4$ and Russian sturgeon × sterlet – RSt$_6$) showed gonads at earlier stages of development. Ovaries exhibited folds with follicles and oocytes in the primary growth phase (prophase of the first meiotic division, chromatin-nucleolus stage); development ceased at these early stages of meiosis, and adipose tissue content was higher. These ovaries did not reach the next growth stage (perinucleolus) and did not complete the prophase of the first meiotic division (Figure 2G, H). Testes contained spermatogonia and spaces without germ cells (Figure 2I).
A similar pattern was observed in 1-year-old fish. Purebred sterlet (StSt$_1$) and Siberian sturgeon (SS$_1$) females showed developed ovaries composed
of previtellogenic oocytes with spherical, voluminous nuclei (Figure 3A). Testes of purebred fish contained single spermatogonia in a loose connective tissue beginning to form seminiferous tubules (Figure 3B). The gonads of hybrid Siberian sturgeon × sterlet (SSt$_1$) and sterlet × Siberian sturgeon (StS$_1$) showed anomalies similar to gonads of older hybrids of odd ploidy. Ovaries were composed of adipose tissue with nests of oocytes arrested in early stages of meiosis (mainly pachytene to zygotene) with many lymphocytes and spaces where germ cells would normally be localized (Figure 3C, D). Testes exhibited single spermatogonia along with Sertoli cells and spaces without germ cells (Figure 3E, F). No differences in gonad development were observed between hybrids resulting from crossing female sterlet with male Siberian sturgeon and vice versa.
**Evaluation of apoptotic cells.** The specific staining confirmed our hypothesis that hybrids of sturgeon species differing in ploidy did not fully develop ovaries, and these ovaries showed clear signs of germ cell apoptosis, as demonstrated in RSt$_6$ (Figure 4A) and SSt$_4$ (Figure 4B). No sign of apoptosis was observed in the testes of these hybrids (Figure 4C). Hybrids with even ploidy (StB$_5$) exhibited fewer apoptotic cells in ovaries (Figure 4D) and testes (Figure 4E) than hybrids with odd ploidy, similar to purebred specimens. Blue, green, and darker brown staining of cells represents counterstaining with methyl green, used for better morphological evaluation.

**Figure 2.** Histological sections of gonads of 4 to 6-year-old hybrid/purebred sturgeon. Haematoxylin and eosin stained transverse sections (5 µm) show normally developed ovaries (A, B, E) and testes (C, D, F) of hybrids of sturgeon species with the same ploidy (sterlet × beluga (A and C), Siberian sturgeon × Russian sturgeon (B and D)) and of Siberian sturgeon (E and F); white spaces without germ cells represent adipose tissue or spaces between ovarian folds. Partially developed ovaries (G and H) and testes (I) exhibiting spaces without germ cells (white ellipses) in hybrids of sturgeon species differing in ploidy: Siberian sturgeon × sterlet (G and I) and Russian sturgeon × sterlet (H).
AC = apoptotic cell, AT = adipose tissue, BD = blood vessel, BL = basal lamina, FC = follicular cell, NO = nucleoli, OOC = oocyte, SC = spermatocyte, SG = spermatogonia, T = thecal cell; white lines are used for delimitation, scale bars = 100 µm

**Figure 3.** Histological sections of gonads of 1-year-old hybrid/purebred sturgeon. Haematoxylin and eosin stained transverse 5 µm sections show normally developed ovaries (A) and testes (B) of purebred (control) sterlet and Siberian sturgeon, and partially developed ovaries (C, D) and testes (E, F) exhibiting spaces without germ cells (white ellipses in F), degenerated cells, or adipose tissue of hybrids of sturgeon with odd ploidy: Siberian sturgeon × sterlet (C, E) and sterlet × Siberian sturgeon (D, F).
AT = adipose tissue, LC = lymphocyte, OOC = oocyte, ST = seminiferous tubule, SC = spermatocyte, SG = spermatogonia, SR = Sertoli cell; white lines are used for delimitation, scale bars = 100 µm

**Figure 4.** Apoptotic germ cells (dark labelling, in white ellipses) in paraffin-embedded 5 µm sections of ovaries of hybrid sturgeon with odd ploidy: Russian sturgeon × sterlet (A) and Siberian sturgeon × sterlet (B). No apoptosis was detected in testes of hybrid sturgeon with odd ploidy (Russian sturgeon × sterlet; C) or in ovaries (D) and testes (E) of hybrid sturgeon with even ploidy (sterlet × beluga). Darker accumulations in (D) and (E) represent lipid droplets or stronger counterstaining with methyl green, not apoptotic cells; scale bars = 100 µm
**DISCUSSION**
The thermal environment is known to play an important role in sturgeon growth and gonad development (Billard and Lecointre 2001), as it does in teleosts and other ectothermic vertebrates. The growth and gonad development of the 4 to 6-year-old fish were probably negatively influenced by the high stocking density, low water temperature, and absence of feeding during overwintering. However, as these fish were reared as a pooled stock, gonad development was equally affected by the unfavourable rearing conditions in all groups. Hence, the observed differences in gonad development between hybrids with odd ploidy and both pure fish and hybrids with even ploidy were most likely caused by genetic origin and ploidy. It should be noted that gonad development in these fish may have been retarded compared to fish reared under standard conditions.
Hybrids are often inviable or, if they survive, their gonad development is significantly influenced by functional incompatibility of multiple interacting genes (Lu et al. 2010). Abnormal gonad development has been observed in several fish hybrids (Legendre et al. 1992; Kopiejewska et al. 2003), with abnormalities including inhibition of development of some cells at early meiosis, abnormal appearance of early previtellogenic oocytes, pycnosis of germ cell nuclei, gonad hermaphroditism, ovarian tumours, and gonad aplasia or atrophy.
In sturgeon aquaculture, fertile interspecific hybrids are commonly reared and reproduced (Bronzi et al. 2011). The most common hybrids are crosses of the beluga with sterlet (bester), the Adriatic sturgeon *Acipenser naccarii* Bonaparte 1836 with Siberian sturgeon, the Russian sturgeon with Siberian sturgeon, and the kaluga *Huso dauricus* (Georgi 1775) with the Amur sturgeon *Acipenser schrenckii* Brandt 1869 or their reciprocal hybrid. These hybrids originate from crossbreeding of parent species of the same ploidy level. In the present study, normally developed gonads similar to those of purebred controls were observed in hybrids of sturgeon species with the same ploidy. Development of their gonads was also in accordance with previous studies on Siberian sturgeon and Russian sturgeon (no intersex individuals in Rzepkowska et al. 2014). It clearly showed that the gonad development in hybrids with even ploidy was not affected by the hybrid origin.
Previtellogenic oocytes and both previtellogenic and vitellogenic oocytes in ovarian follicles, as seen in the present study, were already described in Russian sturgeon (Zelazowska et al. 2015) and in Siberian sturgeon (Zelazowska and Fopp-Bayat 2017).
In this study, all hybrids of parent species with different ploidy were functional triploids. Hence, their gonad development should be affected by the lack of chromosome pairing during the zygotene stage of meiotic prophase I. These hybrids displayed protracted or interrupted development of gonads, with evidence of apoptosis in ovaries. Similarly, Omoto et al. (2002) described interrupted gonad development in triploid bester.
Vasilev et al. (2014) reported fertile male hybrids of sterlet × kaluga, species with differing ploidy. The sterlet is a functional diploid with about 120 chromosomes (Rab 1986), and the kaluga is a functional tetraploid with about 250–270 chromosomes (Vasilev et al. 2010). Their hybrid is a functional triploid with 185–195 chromosomes and would be presumed sterile. Vasilev et al. (2014) stated that the males of this hybrid were able to produce sperm and confirmed its fertilization ability by producing backcrosses with sterlet and kaluga females, demonstrating possible fertility in hybrids of species differing in ploidy. In the present study, it was difficult to draw conclusions about fertility or sterility of hybrids of sturgeon species with different ploidy, because the individuals under study had not reached full maturity when analyzed. Interrupted ovary development with a high incidence of apoptosis implied sterility of females, while histology of testes suggested potential limited fertility in males. The observation of odd ploidy in sturgeon hypothetically originating from backcrossing of presumably fertile hybrids of sterlet and Siberian sturgeon or Russian sturgeon to the parental species (Flajshans and Vajcova 2000) reinforces our assumption of possible male fertility in hybrids of sterlet and Siberian sturgeon.
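The chromosome counts cited above make the intermediate karyotype of the sterlet × kaluga hybrid a matter of simple arithmetic: the hybrid receives one haploid set (half of the somatic complement) from each parent. A brief sketch using the reported counts (the function name is illustrative, not from the study):

```python
def expected_hybrid_chromosomes(dam_2n: int, sire_2n: int) -> float:
    """Expected somatic chromosome number of an F1 hybrid:
    one haploid set (half the somatic count) from each parent."""
    return dam_2n / 2 + sire_2n / 2

# Sterlet (~120 chromosomes) x kaluga (~250-270 chromosomes):
print(expected_hybrid_chromosomes(120, 250))  # 185.0
print(expected_hybrid_chromosomes(120, 270))  # 195.0
# Consistent with the 185-195 chromosomes reported for this
# functional triploid hybrid.
```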
Sturgeons are presumed to be female heterogametic (Fopp-Bayat 2010; Keyvanshokooh and Gharaei 2010). Possible fertility of male hybrids of species with different ploidy corresponds to Haldane’s rule, which assumes that, in hybrids, only the homogametic sex is fully fertile, while the heterogametic sex is sterile (Haldane 1922).
Complete fertility of both sexes in hybrids of species with the same ploidy does not comply with this presumption, suggesting that sturgeon hybrid fertility is complex and may present unique characteristics, possibly due to the allopolyploid origin of the species (Fontana et al. 2007) and different levels of genome re-diploidization among sturgeon ploidy groups (Havelka et al. 2013).
**CONCLUSION**
Gonad mass as a proportion of total body mass was not influenced by genetic origin or by ploidy in the analyzed specimens. Hybrids of species of the same ploidy showed gonad development similar to that of purebred controls. In contrast, hybrids of species with differing ploidy displayed inhibition of gonad development in some cells at early stages of meiosis. While ovarian development was interrupted and showed a high incidence of apoptosis, testes continued to develop, and limited fertility of the hybrid males could not be excluded. In light of the evidence from the present study and other studies to date, the general assumption that hybrid sturgeon with odd ploidy are sterile, and the consequent lack of concern about their escape from farms, should be seriously reconsidered. As a precaution, we suggest that all males of sturgeon hybrids be assumed potentially fertile.
Acknowledgements. Granting agencies had no participation in the design of the study or interpretation of the results. The Lucidus Consultancy is gratefully acknowledged for English correction and suggestions. The authors declare that they have no competing interests.
**REFERENCES**
Allard R.W. (ed.) (1999): Principles of Plant Breeding. John Wiley and Sons, New York, USA.
Arefyev V.A. (1997): Sturgeon hybrids: natural reality and practical prospects. Aquaculture Magazine, 23, 53–58.
Bartley D.M., Rana K., Immink A.J. (2001): The use of interspecific hybrids in aquaculture and fisheries. Reviews in Fish Biology and Fisheries, 10, 325–337.
Bemis W.E., Findeis E.K., Grande L. (1997): An overview of Acipenseriformes. Environmental Biology of Fishes, 48, 25–71.
Billard R., Lecointre G. (2001): Biology and conservation of sturgeon and paddlefish. Reviews in Fish Biology and Fisheries, 10, 355–392.
Birstein V.J., Hanner R., DeSalle R. (1997): Phylogeny of the Acipenseriformes: cytogenetic and molecular approaches. Environmental Biology of Fishes, 48, 127–155.
Bronzi P., Rosenthal H., Gessner J. (2011): Global sturgeon aquaculture production: an overview. Journal of Applied Ichthyology, 27, 169–175.
Bytyutskyy D., Srp J., Flajshans M. (2012): Use of Feulgen image analysis densitometry to study the effect of genome size on nuclear size in polyploid sturgeons. Journal of Applied Ichthyology, 28, 704–708.
Bytyutskyy D., Kholodny V., Flajshans M. (2014): 3-D structure, volume, and DNA content of erythrocyte nuclei of polyploid fish. Cell Biology International, 38, 708–715.
Dowling T.E., Secor C.L. (1997): The role of hybridization and introgression in the diversification of animals. Annual Review of Ecology and Systematics, 28, 593–619.
Fahy E., Martin S., Mulrooney M. (1988): Interactions of roach and bream in an Irish reservoir. Archiv für Hydrobiologie, 114, 291–309.
Flajshans M., Vajcova V. (2000): Odd ploidy levels in sturgeon suggest a backcross of interspecific hexaploid sturgeon hybrids to evolutionary tetraploid and/or octaploid parental species. Folia Zoologica, 49, 133–138.
Fontana F., Zane L., Pepe A., Congiu L. (2007): Polyploidy in Acipenseriformes: cytogenetic and molecular approaches. In: Pisano E., Ouzouf-Costaz C., Foresti F., Kapoor B.G. (eds): Fish Cytogenetics. Science Publisher, Enfield, USA, 385–403.
Fopp-Bayat D. (2010): Meiotic gynogenesis revealed not homogametic female sex determination system in Siberian sturgeon (Acipenser baeri Brandt). Aquaculture, 305, 174–177.
Haldane J.B.S. (1922): Sex ratio and unisexual sterility in hybrid animals. Journal of Genetics, 12, 101–109.
Havelka M., Hulak M., Bailie D.A., Prodohl P.A., Flajshans M. (2013): Extensive genome duplications in sturgeons: new evidence from microsatellite data. Journal of Applied Ichthyology, 29, 704–708.
Keyvanshokooh S., Gharaei A. (2010): A review of sex determination and searches for sex-specific markers in sturgeon. Aquaculture Research, 41, e1–e7.
King T.L., Lubinski B.A., Spidle A.P. (2001): Microsatellite DNA variation in Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) and cross-species amplification in the Acipenseridae. Conservation Genetics, 2, 103–119.
Kopiejewska W., Terlecki J., Chybowski L. (2003): Varied somatic growth and sex cell development in reciprocal hybrids of roach Rutilus rutilus (L.) and ide Leuciscus idus (L.). Archives of Polish Fisheries, 11, 33–44.
Legendre M., Teugels G.G., Cauty C., Jalabert B. (1992): A comparative study on morphology, growth rate and reproduction of *Clarias gariepinus* (Burchell, 1822), *Heterobranchus longifilis* Valenciennes, 1840, and their reciprocal hybrids (*Pisces*, Claridae). Journal of Fish Biology, 40, 59–79.
Lu X., Shapiro J.A., Ting C.T., Li Y., Li C., Xu J., Huang H., Cheng Y.J., Greenberg A.J., Li S.H., Wu M.L., Shen Y., Wu C.I. (2010): Genome-wide misexpression of X-linked versus autosomal genes associated with hybrid male sterility. Genome Research, 20, 1097–1102.
Ludwig A., Lippold S., Debus L., Reinartz R. (2009): First evidence of hybridization between endangered sterlets (*Acipenser ruthenus*) and exotic Siberian sturgeons (*Acipenser baerii*) in the Danube River. Biological Invasions, 11, 753–760.
May B., Krueger C.C., Kincaid H.L. (1997): Genetic variation at microsatellite loci in sturgeon: primer sequence homology in *Acipenser* and *Scaphirhynchus*. Canadian Journal of Fisheries and Aquatic Sciences, 54, 1542–1547.
McQuown E.C., Sloss B.L., Sheehan R.J., Rodzen J., Tranah G.J., May B. (2000): Microsatellite analysis of genetic variation in sturgeon (Acipenseridae): new primer sequences for *Scaphirhynchus* and *Acipenser*. Transactions of the American Fisheries Society, 129, 1380–1388.
Mugue N.S., Barmintseva A.E., Rastorguev S.M., Mugue V.N., Barmintsev V.A. (2008): Polymorphism of the mitochondrial DNA control region in eight sturgeon species and development of a system for DNA-based species identification. Russian Journal of Genetics, 44, 793–798.
Omoto N., Maebayashi M., Adachi S., Arai K., Yamauchi K. (2002): Histological observations of gonadal development in gynogenetic diploids and triploids of a hybrid sturgeon, bester. Fisheries Science, 68 (Suppl. 2), 1271–1272.
Rab P. (1986): A note on the karyotype of the sterlet, *Acipenser ruthenus* (*Pisces*, Acipenseriidae), Folia Zoologica, 35, 73–78.
Rosati A., Tewolde A., Mosconi C. (eds) (2007): Animal Production and Animal Science Worldwide. Wageningen Academic Publishers, Wageningen, the Netherlands.
Rzepkowska M., Ostaszewska T., Gibala M., Roszko M.L. (2014): Intersex gonad differentiation in cultured Russian (*Acipenser gueldenstaedtii*) and Siberian (*Acipenser baerii*) sturgeon. Biology of Reproduction, 90, 1–10.
Seehausen O. (2004): Hybridization and adaptive radiation. Trends in Ecology and Evolution, 19, 198–207.
Soltis P.S., Soltis D.E. (2009): The role of hybridization in plant speciation. Annual Review in Plant Biology, 60, 561–588.
Vasilev V.P., Vasileva E.D., Shedko S.V., Novomodny G.V. (2010): How many times has polyploidization occurred during Acipenserid evolution? New data on the karyotypes of sturgeons (*Acipenseridae, Actinopterygii*) from the Russian Far East. Journal of Ichthyology, 50, 950–959.
Vasilev V.P., Rachek E.I., Lebedeva E.B., Vasileva E.D. (2014): Karyological study in backcross hybrids between the sterlet, *Acipenser ruthenus*, and kaluga, *A. dauricus* (*Actinopterygii; Acipenseriformes: Acipenseridae*): *A. ruthenus* × (*A. ruthenus × A. dauricus*) and *A. dauricus* × (*A. ruthenus × A. dauricus*). Acta Ichthyologica et Piscatoria, 44, 301–308.
Zelazowska M., Fopp-Bayat D. (2017): Previtellogenic and vitellogenic oocytes in ovarian follicles of cultured Siberian sturgeon *Acipenser baerii* (Chondrostei, Acipenseriformes). Journal of Morphology, 278, 50–61.
Zelazowska M., Jankowska W., Plewniak E., Rajek U. (2015): Ovarian nests in cultured Russian sturgeon *Acipenser gueldenstaedtii* and North American paddlefish *Polyodon spathula* comprised of previtellogenic oocytes. Journal of Fish Biology, 86, 1669–1679.
Zhang X., Wu W., Li L., Ma X., Chen J. (2013): Genetic variation and relationships of seven sturgeon species and ten interspecific hybrids. Genetics Selection Evolution, 45, 21.
Received: 2016–03–23
Accepted after corrections: 2017–10–06 |
United Utilities Group PLC
Notice of Annual General Meeting 2021
Dear Shareholder
2021 Annual General Meeting
I am pleased to provide details of the annual general meeting of United Utilities Group PLC (the ‘company’) (the ‘AGM’ or ‘annual general meeting’, or the ‘meeting’) and enclose our notice of meeting and form of proxy. The meeting will be held in the Exchange Rooms at Manchester Central Convention Complex, Windmill St, Manchester M2 3GX (the ‘venue’), on Friday 23 July 2021 at 11.00am.
The notice of annual general meeting is set out on pages 10 to 13, together with explanatory notes on pages 14 to 23. The 31 March 2021 annual report and financial statements are available on our website, along with an electronic copy of this notice of meeting, at unitedutilities.com/corporate.
Important information about the Annual General Meeting this year
At the time of writing, whilst some of the restrictions have been lifted, we are still living with the uncertainties associated with the COVID-19 pandemic.
The health and wellbeing of the company’s shareholders, customers and employees is of paramount importance. With this in mind, although we are arranging a physical annual general meeting (as required by the company’s articles of association), we are broadcasting the meeting live on the day via the internet, enabling shareholders to observe the meeting, and submit questions in writing. Please refer to pages 28 to 29 for further details and a step-by-step guide on how to access the broadcast of the AGM. The guide also contains details of how to access the broadcast if you hold your shares through a nominee or custodian account. Please note the broadcast is provided for information purposes only and shareholders will not form part of the meeting for legal purposes, nor will shareholders be able to vote virtually via the website during the proceedings, as this is not permitted by the company’s existing articles of association.
We will continue to review our AGM arrangements in light of changes to Government guidance and will implement whatever measures are required on the day. In the event of changes being made to the arrangements for the meeting, shareholders are encouraged to monitor the AGM page on the company’s website for any updates.
**Voting**
Whatever the restrictions, you are strongly encouraged to exercise your right to vote. You can do this by:
- going online at sharevote.co.uk and voting electronically. To do this you will need the three numbers (voting ID, task ID and shareholder reference number) that are printed on your proxy form; or
- completing your proxy voting form, appointing the chairman of the meeting to act in accordance with your instructions, and posting it to the pre-printed address, or taking a photograph of your completed proxy form and emailing it to email@example.com; or
- appointing the chairman as your proxy at shareview.co.uk, if you have registered with Equiniti’s online portfolio service; or
- voting in person by attending the meeting.
Proxy votes must be received by 11.00am on Wednesday 21 July 2021. Further information can be found on page 24. The results of the poll will be announced to the London Stock Exchange and will be published on our website as soon as reasonably practicable after the meeting.
**Final dividend**
Subject to approval at the annual general meeting, the final dividend for the financial year ended 31 March 2021 of 28.83 pence per ordinary share will be paid on 2 August 2021 to those members whose names appear on the register at the close of business on 25 June 2021.
**Recommendation**
The directors are of the opinion that all resolutions to be proposed at the annual general meeting are in the best interests of the shareholders as a whole. Accordingly, the board unanimously recommends that you vote in favour of all the proposed resolutions.
Yours faithfully
Sir David Higgins
Chairman
Chairman and Chief Executive Officer’s review
We have responded well to the challenges of a year dominated by the impact of COVID-19, maintaining the service and support that are so critical to customers in the North West. Our operational performance has been strong, building on the improvements we delivered in the previous regulatory period and providing us with a great start to achieving our targets for the new 2020–25 price review period (AMP7).
This has been an unprecedented year in which we have had to adapt our operations to protect customers, employees and supply chain partners from the impact of COVID-19.
We responded well to the challenges and delivered our best ever year of operational performance for customers and the environment. Customer satisfaction remains high and we have made a strong start against our customer outcome delivery incentives (ODIs). This year has seen us reduce leakage to its lowest ever level and supply interruptions to customers have been halved. We are on track to achieve the maximum 4 star rating in the Environment Agency’s assessment for 2020, and have reduced environmental pollution incidents by around a third.
Our operational performance has been strong against key metrics and we are pleased to have met or exceeded over 80 per cent of our performance commitments for year 1 of AMP7. In those areas where we have fallen short of our target – such as sewer flooding – we are innovating and investing in new technology in order to improve performance and service to customers over the longer term.
We witnessed further variability in weather conditions now characteristic of climate change. Our region experienced a hot, dry spring that, coupled with people spending more time at home, resulted in a high level of demand for water. We continued to encourage customers to save water through water efficiency programmes, helping them to preserve this precious resource and save money on their bills. Throughout this period we maintained supplies to customers, demonstrating the benefits of our Systems Thinking approach and supported by the investment we made in previous regulatory periods to enhance the resilience of our services.
We have a deep and strong relationship with the environment and communities of the North West. Our plans ensure we protect and improve the natural environment and for many years we have been at the forefront of addressing climate change. We are proud to be a signatory to the UN’s Race to Zero campaign and we are delivering against all of our six carbon pledges. Our purpose drives us to make a real, positive contribution to the communities we serve through everything we do, and our investment programme plays a significant role in supporting the North West economy.
This excellent start to the delivery of our AMP7 plans provides a strong platform for us to play our full part in the economic recovery of the communities we serve as the country emerges from the COVID-19 pandemic.
**Maintaining excellent service to customers whilst supporting our employees**
Our continued focus on delivering the best service to customers has never been more important. We delivered significant and sustainable improvements over AMP6 and we ended the period as a leading water and wastewater company. The way Ofwat measures customer satisfaction in AMP7 has changed, with C-MeX measuring household customer satisfaction and D-MeX measuring developer satisfaction. Despite a challenging operating environment, customer satisfaction remains high, earning us an outperformance payment for both C-MeX and D-MeX and positioning us in the sector upper quartile for all-round customer satisfaction.
The impact of COVID-19 has led to many customers facing increasing financial hardship. At the start of the pandemic we saw an increase in the number of customers needing affordability support and the initiatives we put in place in AMP6 enabled us to respond swiftly and effectively. We were the first water company to secure support and regulatory approval for an extension to the scale and scope of our social tariff, providing an additional £15 million to help a further 45,000 customers. We had to consider the appropriateness of continuing our normal billing and collection activities and the most suitable means of engagement. As part of our COVID-19 response, we proactively encouraged customers to contact us if they had been impacted financially by the pandemic. We carried out targeted activities aligned to specific customer segments and changes in customer behaviour to engage with customers, ensuring they knew they could talk to us about their water bill, and highlighting alternative ways to pay.
We could not have delivered such great service to customers during this time without highly engaged and motivated colleagues right across the organisation who demonstrate tremendous resilience and adaptability to deliver for a region hard hit by the pandemic. To keep employees safe, early on in the year we moved 60 per cent of our workforce to home working and the remainder continued working at our COVID-19 secure facilities. We have continued to work in this way in line with the government roadmap out of lockdown, whilst defining and shaping the way for future working. Our employee engagement score this year positioned us above the norm for UK high performing companies – a remarkable score given the past year and testimony to the cohesiveness of the United Utilities team.
Transforming into a digital utility
Through our Systems Thinking approach we make use of technology, automation and machine intelligence to deliver better performance for customers and the environment.
Through implementation of Dynamic Network Management – an example of the most advanced form of Systems Thinking in the water sector – we are shifting from reactive management of our wastewater network to using a web of sensors that will provide near real-time performance information. This new digital capability will optimise performance in a predictive and preventative way, delivering greater efficiency, improved service to customers and helping to enhance the environment.
We recognise the benefits to be gained through building digital skills among our workforce, and our purpose-built technical training academy, established in 2014, has provided skills development and certification to over 2,800 colleagues. This focus on digital skills means that we have the in-house ability to develop and deploy breakthrough technologies quickly and efficiently.
We make extensive use of apps, many of which are developed in-house, to create digital capability for our field and customer service facing teams. Our new voids app, aimed at unbilled but occupied properties, has helped us to earn the maximum customer ODI outperformance payment on voids this year as well as securing future year benefits of a further £24 million over AMP7.
Delivering a robust financial performance
We have delivered another year of robust financial performance, supported by the strength of our balance sheet.
Underlying earnings per share is 56.2 pence, a decrease of 21 per cent, but still more than covering the dividend for the year. The anticipated decrease reflects lower allowed regulatory revenue in the first year of the new regulatory period; an increase in infrastructure renewals expenditure due to planned work to optimise the performance of our network; higher depreciation reflecting continued investment in the asset base; and a slight increase in the remaining cost base. This is partly offset by a decrease in the underlying net finance expense, reflecting lower inflation applied to our index-linked debt. We have simplified our approach to alternative performance measures (APMs) this year and are no longer, as a matter of course, adjusting our underlying earnings for restructuring costs, net pension interest, capitalised borrowing costs and prior years’ tax matters. This brings our approach more into line with peers and therefore makes cross-company comparisons easier.
Reported earnings per share is 66.5 pence per share, which is higher than the underlying figure, mainly due to fair value movements. Adjusting items are outlined in the reconciliation table on pages 82 and 83 of the annual report and financial statements 2021, and reflect our change in approach to APMs with prior year numbers re-presented for comparability.
The board has proposed a final dividend of 28.83 pence per ordinary share, taking the total dividend for 2020/21 to 43.24 pence. This is an increase of 1.5 per cent, in line with our policy in this regulatory period of targeting an annual growth rate of CPIH inflation through to 2025.
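The dividend and earnings figures above fit together as follows; the short check below (an illustrative sketch, not part of the notice) derives the implied interim dividend, the implied prior-year total and the dividend cover, none of which are stated explicitly in the text.

```python
# Figures stated in the review: final dividend 28.83p, total dividend
# 43.24p for 2020/21 (a 1.5 per cent increase), underlying EPS 56.2p.
final_dividend = 28.83   # pence per ordinary share
total_dividend = 43.24   # pence per ordinary share
eps_underlying = 56.2    # pence per ordinary share

# Implied interim dividend (derived, not stated in the notice)
print(round(total_dividend - final_dividend, 2))  # 14.41

# Implied prior-year total, given the stated 1.5 per cent increase
print(round(total_dividend / 1.015, 2))           # 42.6 (i.e. 42.60p)

# Dividend cover: underlying EPS "more than covering the dividend"
print(round(eps_underlying / total_dividend, 2))  # 1.3
```

The cover of roughly 1.3 times is what supports the statement that underlying earnings more than cover the dividend despite the 21 per cent decrease.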
Our balance sheet continues to be one of the strongest in the sector, with low customer debtor risk, net debt to regulatory capital value within our target range and a pension scheme that is fully funded on a low dependency basis.
Given the uncertainty created by the COVID-19 pandemic, the recoverability of household debtors has been a key area of focus. It has been an area of focus for us for most of the last decade, during which we have managed the position robustly. This manifests itself in the balance reducing from £115 million in 2016 to £78 million in 2021. Our net debtor balance as at 31 March 2021 is the lowest it has been for five years and is one of the best managed positions in the sector. Knowing this gives us added confidence as we emerge from the pandemic.
We have retained our policy of targeting gearing of 55–65 per cent, measured as net debt to regulatory capital value, for this new regulatory period and at 62 per cent, our gearing remains within this target range. During the year, we changed our definition of net debt to exclude the impact of derivatives that are not hedging specific debt instruments. This provides a better reflection of the debt balances we are contractually obliged to repay and is more consistent with the approach taken by credit rating agencies and the regulatory economics. Our gearing policy is supportive of United Utilities Water Limited’s A3 credit rating with Moody’s and we have liquidity extending out to August 2023. This provides us with resilience and financial flexibility as we progress through AMP7 and demonstrates our prudent and responsible approach to financial risk management.
We have eliminated our pension funding deficit on a low-dependency basis and our pension position is in surplus on an IAS 19 basis. Having no pension funding deficit puts us at an underlying advantage versus most other companies in the sector, as well as against many companies in the Financial Times Stock Exchange (FTSE), that continue to make cash contributions into their pension schemes to achieve a fully funded position. We are proud to have already achieved this, protecting employees past and present and shareholders from the risk of a large pension deficit.
In November 2020, we published our new sustainable finance framework, which allows us to raise financing based on our strong environmental, social and governance (ESG) credentials. This replaces the green funding we have previously secured through the European Investment Bank (EIB), which is no longer available post-Brexit. We issued our debut sustainable bond in January 2021 and were extremely pleased by the high level of interest. As a result, we secured not only our lowest ever coupon at that particular maturity, but also the lowest ever coupon for any UK corporate at that maturity, locking in financing outperformance.
**Good start to the new regulatory period (AMP7)**
We are performing well against the principal areas of our regulatory contract for AMP7 despite many targets getting tougher.
Our accelerated investment strategy and digital transformation is delivering value across the breadth of our customer outcome delivery incentives (ODIs). The £21 million outperformance payment earned this year is ten times the performance we delivered in the first year of AMP6. The net reward earned this year will be reflected in an increase to revenues earned in 2022/23. This provides a great platform for continued delivery against our customer ODIs for the remainder of the AMP and gives us the confidence to target a cumulative outperformance payment of around £150 million for the 2020–25 period.
Thanks to our good performance in AMP6, we started AMP7 at a totex run rate which supports delivery of our AMP7 scope within our Final Determination totex allowance. Since accepting our Final Determination, our investment plan has been extended by a further £300 million, which we expect to be fully remunerated through regulatory mechanisms, with this expenditure extending our environmental programme, accelerating our digital transformation and exploiting spend to save opportunities.
While we continue to seek efficiencies in the delivery of totex, as we have demonstrated through the £300 million extension to our totex plans, we will invest totex where we are confident we can deliver improved customer or environmental outcomes and better customer ODI performance.
On financing performance, we have consistently issued debt at efficient rates that compare favourably with the industry average, thanks to our leading treasury management, clear and transparent financial risk management policies, and ability to act swiftly to access pockets of opportunity as they arise. This delivered significant financing outperformance during AMP6 and the rates we have already locked in for AMP7 compare favourably with the price review assumptions.
ESG at our heart
Our purpose drives us to deliver our services in an environmentally sustainable, economically beneficial and socially responsible manner and what we do creates a deep connection with the stakeholders we serve. We have a long-standing commitment to deliver against our ESG objectives and we have a strong track record of doing so. We are also looking to our supply chain partners to adopt these values and objectives via the United Supply Chain (USC) initiative, a fundamental step change as to how we engage with them in AMP7 and into AMP8.
Having achieved our climate change objectives up to 2020, reducing greenhouse gas emissions by 73 per cent, we made six carbon pledges and have made good progress against them all. From October of this year, 100 per cent of our electricity will be sourced from renewable technologies and we have set ambitious science-based scope 3 emissions targets that have been submitted for endorsement by the Science Based Targets initiative (SBTi).
Our Catchment Systems Thinking (CaST) approach continues to mature. We have been working with the Environment Agency (EA) and other stakeholders to develop a North West natural capital baseline and once this process is complete, we will engage with other partners across the region to drive a consistent approach to delivering greater natural capital value. This year, we pledged a £300,000 CaST Fund, for which charities and community groups are able to bid, to boost the idea of working collaboratively to address the challenges facing the environment.
We are in a unique position to make a real, positive contribution to society and have an ambitious and innovative approach to addressing affordability and vulnerability. We have an extensive range of schemes available to help customers and around 200,000 are currently benefiting from that help. We are providing more customers than ever with access to Priority Services in times of need, with over 133,000 now on our register. We have committed to providing £71 million in financial support over AMP7, and have accelerated payments this year to provide much needed assistance to households struggling as a result of the economic impact of the pandemic. During the early stages of the pandemic, recognising the importance of cash flow to businesses, we took swift action to accelerate payment terms with suppliers, paying them within seven days where possible.
We want fantastic people from a range of different backgrounds and life experiences to enable us to deliver a great public service, and we are committed to creating a diverse and inclusive workforce, reaching and recruiting from every part of our community. We were delighted to be one of the top one per cent of 15,000 companies across Europe in the Financial Times’ Statista Survey for Diversity and Inclusion Leadership and to achieve inclusion in the Bloomberg Gender Equality Index.
We operate in a manner that aims to maintain high ethical standards of business conduct and corporate governance. We have attained World Class status on the Dow Jones Sustainability Index for the 14th consecutive year. We were delighted to retain the Fair Tax Mark independent certification, which recognises our commitment to paying our fair share of tax and acting in an open and transparent manner in relation to our tax affairs. We continue to focus on our long-term financial resilience, supported by our strong balance sheet and prudent approach to financial risk management, having maintained a responsible level of gearing and a well-controlled pension position for many years.
**Outlook**
We started the new regulatory period as one of the sector’s best performers and have delivered further improvements this year, giving us the confidence that we will continue to be able to meet our targets across AMP7. Our transformation to a digital utility is helping us operate more efficiently and deliver better service to customers whilst protecting and improving the natural environment. Although it remains uncertain how the country will emerge from the COVID-19 pandemic, we have proven to be resilient over this period and will continue to rise to the challenges that lie ahead, playing our part in the recovery of the North West economy.
**Grateful to our stakeholders for their support**
We would like to express our gratitude to our highly engaged and motivated employees and supply chain partners who have shown great resilience and adaptability in continuing to deliver excellent performance over such a challenging period, and we extend our thanks to customers, shareholders and other stakeholders for their continued support.
---
**Sir David Higgins**
Chairman
**Steve Mogford**
Chief Executive Officer
---
**Annual report and financial statements**
Our 2021 annual report and financial statements can be accessed at unitedutilities.com/corporate
Notice of Annual General Meeting
This document is important and requires your immediate attention
If you are in doubt as to the action you should take, you are recommended to seek your own financial advice from your stockbroker, bank manager, solicitor, accountant or other independent adviser who, if you are taking advice in the United Kingdom, is duly authorised under the Financial Services and Markets Act 2000 or an appropriately authorised independent financial adviser if you are in a territory outside the United Kingdom. If you have sold or otherwise transferred all your shares in United Utilities Group PLC, you should pass this document, together with all accompanying documents, to the bank, stockbroker or other agent through whom the sale or transfer was effected for transmission to the purchaser or transferee.
Notice of 2021 annual general meeting (AGM)
Notice is given that the AGM of United Utilities Group PLC (the company) will be held at 11.00am on Friday 23 July 2021 in the Exchange Rooms at Manchester Central Convention Complex, Windmill St, Manchester M2 3GX to transact the business set out below.
Resolutions 1 to 15 and 21 will be proposed as ordinary resolutions, and resolutions 16 to 20 will be proposed as special resolutions.
The board considers that each resolution to be proposed at the AGM would promote the success of the company for the benefit of its members as a whole, and unanimously recommends that shareholders vote in favour of all resolutions, as the directors intend to do in respect of their own shareholdings. The formal resolutions are set out on the following pages, along with explanatory notes given in respect of each resolution.
Resolution 1: annual report and financial statements
That the audited annual report and financial statements for the year ended 31 March 2021 be received.
Resolution 2: declaration of dividend
That the final dividend of 28.83 pence per ordinary share be declared.
Resolution 3: to approve the directors’ remuneration report
That the directors’ remuneration report (other than the part containing the directors’ remuneration policy) for the year ended 31 March 2021 be approved.
Resolution 4: reappointment of a director
That Sir David Higgins be reappointed as a director.
Resolution 5: reappointment of a director
That Steve Mogford be reappointed as a director.
Resolution 6: election of a director
That Phil Aspin be elected as a director.
Resolution 7: reappointment of a director
That Mark Clare be reappointed as a director.
Resolution 8: reappointment of a director
That Stephen Carter be reappointed as a director.
Resolution 9: election of a director
That Kath Cates be elected as a director.
Resolution 10: reappointment of a director
That Alison Goligher be reappointed as a director.
Resolution 11: reappointment of a director
That Paulette Rowe be reappointed as a director.
Resolution 12: election of a director
That Doug Webb be elected as a director.
Resolution 13: reappointment of auditor
That KPMG LLP be reappointed as auditor of the company.
Resolution 14: remuneration of auditor
That the audit committee of the board be authorised to set the auditor’s remuneration.
Resolution 15: authorising the directors to allot shares
That the board be generally and unconditionally authorised to allot ordinary shares pursuant to section 551 of the Companies Act 2006 (the Act) in the company and to grant rights to subscribe for or convert any security into ordinary shares in the company:
(A) up to a nominal amount of £11,364,806 (such amount to be reduced by any allotments or grants made under paragraph (B) below in excess of such sum); and
(B) comprising equity securities (as defined in section 560(1) of the Act) up to a nominal amount of £22,729,613 (such amount to be reduced by any allotments or grants made under paragraph (A) above) in connection with an offer by way of a rights issue:
(i) to ordinary shareholders in proportion (as nearly as may be practicable) to their existing holdings; and
(ii) to holders of other equity securities as required by the rights of those securities or as the board otherwise considers necessary,
and so that the board may impose any limits or restrictions and make any arrangements which it considers necessary or appropriate to deal with treasury shares, fractional entitlements, record dates, legal, regulatory or practical problems in, or under the laws of, any territory or any other matter, such power to apply until the end of the 2022 annual general meeting of the company but, in each case, during this period the company may make offers and enter into agreements which would, or might, require shares to be allotted or rights to subscribe for or convert securities into shares to be granted after the authority ends and the board may allot shares or grant rights to subscribe for or convert securities into shares under any such offer or agreement as if the authority had not ended. All authorities vested in the board on the date of the notice of this meeting to allot shares or grant rights that remain unexercised at the commencement of this meeting are revoked.
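The two nominal amounts in resolution 15 can be translated into share numbers using the five pence nominal value stated elsewhere in the notice. The sketch below is illustrative only; the one-third / two-thirds split it checks is the standard UK Investment Association convention for allotment authorities, which this notice appears to follow, not something the resolution itself states.

```python
# Headroom implied by the nominal amounts in resolution 15.
NOMINAL_PER_SHARE = 0.05              # ordinary shares of five pence each
general_authority = 11_364_806        # pounds, paragraph (A)
rights_issue_authority = 22_729_613   # pounds, paragraph (B)

# Number of ordinary shares each authority covers
print(round(general_authority / NOMINAL_PER_SHARE))       # 227296120
print(round(rights_issue_authority / NOMINAL_PER_SHARE))  # 454592260

# Paragraph (B) is twice paragraph (A) to within one pound, consistent
# with the usual pattern of one-third of issued capital for general
# allotments and two-thirds overall for rights issues only.
print(rights_issue_authority - 2 * general_authority)     # 1
```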
Resolution 16: general power to disapply statutory pre-emption rights
That, if resolution 15 is passed, the board be given the power to allot equity securities (as defined in the Companies Act 2006 (the Act)) for cash under the authority given by that resolution and/or to sell ordinary shares of five pence each held by the company as treasury shares for cash as if section 561 of the Act did not apply to any such allotment or sale, such power to be limited:
(A) to the allotment of equity securities and sale of treasury shares for cash in connection with an offer of, or invitation to apply for, equity securities (but in the case of the authority granted under paragraph (B) of resolution 15, by way of a rights issue only):
(i) to ordinary shareholders in proportion (as nearly as may be practicable) to their existing holdings; and
(ii) to holders of other equity securities, as required by the rights of those securities or as the board otherwise considers necessary,
and so that the board may impose any limits or restrictions and make any arrangements which it considers necessary or appropriate to deal with treasury shares, fractional entitlements, record dates, legal, regulatory or practical problems in, or under the laws of, any territory or any other matter; and
(B) in the case of the authority granted under paragraph (A) of resolution 15 and/or in the case of any sale of treasury shares for cash, to the allotment (otherwise than under paragraph (A) above) of equity securities or sale of treasury shares up to a nominal amount of £1,704,721,
such power to apply until the end of the 2022 annual general meeting of the company but, in each case, during this period the company may make offers and enter into agreements which would, or might, require equity securities to be allotted (and treasury shares to be sold) after the power ends and the board may allot equity securities (and sell treasury shares) under any such offer or agreement as if the power had not ended.
Resolution 17: specific power to disapply pre-emption rights in connection with an acquisition or specified capital investment
That, if resolution 15 is passed, the board be given the power, in addition to any power granted under resolution 16, to allot equity securities (as defined in the Companies Act 2006 (the Act)) for cash under the authority granted under paragraph (A) of resolution 15 and/or to sell ordinary shares held by the company as treasury shares for cash as if section 561 of the Act did not apply to any such allotment or sale, such power to be:
(A) limited to the allotment of equity securities or sale of treasury shares up to a nominal amount of £1,704,721; and
(B) used only for the purposes of financing a transaction which the board of the company determines to be an acquisition or other capital investment of a kind contemplated by the Statement of Principles on Disapplying Pre-Emption Rights most recently published by the Pre-Emption Group prior to the date of this notice or for the purposes of refinancing such a transaction within six months of its taking place,
such power to apply until the end of the 2022 annual general meeting but, in each case, during this period the company may make offers, and enter into agreements, which would, or might, require equity securities to be allotted (and treasury shares to be sold) after the power ends and the board may allot equity securities (and sell treasury shares) under any such offer or agreement as if the power had not ended.
Resolution 18: authorising the company to make market purchases of its own shares
That the company be generally and unconditionally authorised for the purposes of section 701 of the Companies Act 2006 (the Act) to make one or more market purchases (as defined in section 693(4) of the Act) of its ordinary shares of five pence each, such power to be limited:
(A) to a maximum aggregate number of 68,188,841 ordinary shares of five pence each; and
(B) by the condition that the minimum price which may be paid for an ordinary share is the nominal amount of that share and the maximum price which may be paid for an ordinary share is the higher of:
(i) an amount equal to 5 per cent above the middle market value of an ordinary share (as derived from the London Stock Exchange plc’s Daily Official List) for the five business days immediately preceding the day on which that ordinary share is contracted to be purchased; and
(ii) the higher of (a) the price of the last independent trade of an ordinary share; and (b) the highest current independent bid for an ordinary share on the trading venues where the purchase is carried out,
in each case, exclusive of expenses.
Such power to apply until the end of the 2022 annual general meeting of the company. The company may enter into a contract to purchase ordinary shares which will or may be completed or executed wholly or partly after the power ends and the company may purchase ordinary shares pursuant to any such contract as if the power had not ended.
**Resolution 19: articles of association**
That with effect from the conclusion of the AGM the articles of association produced to the meeting and initialled by the Chairman of the meeting (for the purposes of identification) be adopted as the company’s articles of association in substitution for, and to the exclusion of, the existing articles of association of the company.
**Resolution 20: notice of general meeting**
That a general meeting other than an annual general meeting may be called on not less than 14 clear days’ notice.
**Resolution 21: authorising political donations and political expenditure**
That, in accordance with Part 14 of the Companies Act 2006 (the Act), the company and each company which is or becomes a subsidiary of the company at any time during the period for which this resolution has effect, be and are hereby authorised:
(A) to make political donations to political parties and/or independent election candidates;
(B) to make political donations to political organisations other than political parties; and
(C) to incur political expenditure;
in each case during the period beginning with the date of the passing of this resolution and ending on the conclusion of the 2022 annual general meeting of the company. In any event, the aggregate amount of political donations and political expenditure made or incurred by the company and its subsidiaries pursuant to this resolution shall not exceed £50,000. For the purposes of this resolution the terms ‘political donations’, ‘independent election candidates’, ‘political organisations’, ‘political expenditure’ and ‘political parties’ have the meanings set out in sections 363 to 365 of the Act.
**By order of the board:**
Simon Gardiner, Company Secretary
26 May 2021
**Registered office:**
Haweswater House, Lingley Mere Business Park, Lingley Green Avenue, Great Sankey, Warrington WA5 3LP
Explanatory notes of resolutions
Resolution 1: annual report and financial statements
The directors are required to lay before the meeting the annual report and financial statements of the company for the year ended 31 March 2021, the strategic report, the directors’ report, the remuneration report and the audited parts thereof, and the auditor’s report on the financial statements.
Resolution 2: declaration of dividend
The board is recommending a final dividend of 28.83 pence per ordinary share. If approved, it will be paid on 2 August 2021 to the shareholders on the register at the close of business on 25 June 2021.
Resolution 3: directors’ remuneration report
In accordance with the Companies Act 2006, the company proposes an ordinary resolution to approve the directors’ remuneration report for the financial year ended 31 March 2021. The directors’ remuneration report can be found on pages 160 to 189 of the annual report and financial statements 2021 and for the purposes of this resolution, does not include the parts of the directors’ remuneration report containing the directors’ remuneration policy which is set out on pages 182 to 188. The vote on resolution 3 is advisory only and the directors’ entitlement to remuneration is not conditional on it being passed.
Resolutions 4 to 12: reappointment/election of directors
The board is mindful of the recommendation contained within the Financial Reporting Council’s 2018 UK Corporate Governance Code (the code) that all directors of FTSE 350 companies should be subject to annual appointment by shareholders. All directors retire at the AGM; the biographies of those offering themselves for reappointment/election are set out on the following pages.
With the exception of the Chairman, who met the independence criteria as set out in provision 10 of the code when he was appointed, all our non-executive directors are determined to be independent in accordance with provision 10 of the code and free from any business or other relationship which could compromise their independent judgement. Should they need it, the non-executive directors are supported in their role by the ability to seek independent specialist advice.
As confirmed by the board evaluation exercise, conducted by external provider Independent Audit Limited, the board fully endorses the reappointment/election of the directors offering themselves for the same at the AGM, all of whom are considered to be making a valuable and effective contribution to the board. All the non-executive directors were considered to be independent and to be demonstrating the expected level of commitment to their roles. The board recommends that shareholders vote all the directors offering themselves for reappointment/election back into office at the 2021 AGM. Biographical details of the directors can be found on pages 15 to 19 of this document, along with the specific reasons why each director’s contribution is, and continues to be, important to the company’s long-term sustainable success.
Executive and non-executive directors offering themselves for reappointment/election
Sir David Higgins
Chairman
Responsibilities: Responsible for the leadership of the board, setting its agenda and ensuring its effectiveness on all aspects of its role.
Qualifications: BEng Civil Engineering, Diploma Securities Institute of Australia, Fellow of the Institute of Civil Engineers and the Royal Academy of Engineering.
Appointment to the board: May 2019; appointed as Chairman in January 2020.
Skills and experience: Sir David has spent his career overseeing high profile infrastructure projects, including: the delivery of the Sydney Olympic Village and Aquatics centre; Bluewater Shopping Centre, Kent; and the delivery of the 2012 London Olympic Infrastructure Project.
Career experience: Sir David was previously chief executive of: Network Rail Limited; The Olympic Delivery Authority; and English Partnerships. He has held non-executive roles as chairman of both High Speed Two Limited and Sirius Minerals plc. In December 2019 he stepped down as non-executive director and chair of the remuneration committee at Commonwealth Bank of Australia.
Current directorships/business interests: Chairman of Gatwick Airport Limited and a member of the Council at the London School of Economics. He is Chairman of United Utilities Water Limited.
Independence: Sir David met the 2018 UK Corporate Governance Code’s independence criteria (provision 10) on his appointment as a non-executive director and chairman designate.
Specific contribution to the company’s long-term success: Sir David’s experience of major infrastructure projects and his knowledge and understanding of the role of regulators will be invaluable in meeting the challenges of the current regulatory period and beyond. As chairman of the nomination committee, he is responsible for ensuring the succession plans for the board and senior management identify the right skillsets to face the challenges of the business.
Steve Mogford
Chief Executive Officer
Responsibilities: To manage the group’s business and to implement the strategy and policies approved by the board.
Qualifications: BSc (Hons) Astrophysics/Maths/Physics.
Appointment to the board: January 2011.
Skills and experience: Steve’s experience of the highly competitive defence market and of complex design, manufacturing and support programmes has driven forwards the board’s strategy of improving customer service and operational performance at United Utilities. His perspective of the construction and infrastructure sector provides valuable experience and insight to support United Utilities’ capital investment programme.
Career experience: Steve was previously chief executive of SELEX Galileo, the defence electronics company owned by Italian aerospace and defence organisation Finmeccanica, chief operating officer of BAE Systems PLC and a member of its PLC board. His early career was spent with British Aerospace PLC. Steve ceased to be a non-executive director of G4S plc following its takeover in April 2021.
Current directorships/business interests: He is Chief Executive Officer of United Utilities Water Limited and a non-executive director of Water Plus, a joint venture with Severn Trent serving business customers.
Specific contribution to the company’s long-term success: As the Chief Executive Officer, Steve has driven a step change in the company’s operational performance, and has implemented a Systems Thinking approach to underpin future operational activities and improved performance.
**Phil Aspin**
Chief Financial Officer
**Responsibilities:** To manage the group’s financial affairs, to contribute to the management of the group’s business and to the implementation of the strategy and policies approved by the board.
**Qualifications:** BSc (Hons) Mathematics, Chartered Accountant (ACA), Fellow of the Association of Corporate Treasurers (FCT).
**Appointment to the board:** July 2020.
**Skills and experience:** Phil has extensive experience of financial and corporate reporting, having qualified as a chartered accountant with KPMG and more latterly through his role as group controller. He has a comprehensive knowledge of capital markets and corporate finance underpinned through his previous role as group treasurer and his FCT qualification. Having been actively engaged in the last four regulatory price reviews he has a strong understanding of the economic regulatory environment.
**Career experience:** Phil has over 25 years’ experience working for United Utilities. Prior to his appointment as CFO in July 2020, he was group controller with responsibility for the group’s financial reporting and prior to that he was group treasurer with responsibility for funding and financial risk management. He has been a member of EFRAG TEG and chaired the EFRAG Rate Regulated Activities Working Group.
**Current directorships/business interests:** Phil was appointed as a member of the UK Accounting Standards Endorsement Board in March 2021. He is chair of the 100 Group pensions committee and a member of both the 100 Group main committee and the stakeholder communications and reporting committee. He is Chief Financial Officer of United Utilities Water Limited and a non-executive director of Water Plus, a joint venture with Severn Trent serving business customers.
**Specific contribution to the company’s long-term success:** Phil has driven forward the financial performance of the group and delivered the group’s competitive advantage in financial risk management and excellence in corporate reporting.
---
**Mark Clare**
Senior independent non-executive director
**Responsibilities:** Responsible, in addition to his role as an independent non-executive director, for discussing any concerns with shareholders that cannot be resolved through the normal channels of communication with the Chairman or Chief Executive Officer.
**Qualifications:** Chartered Management Accountant (FCMA).
**Appointment to the board:** November 2013.
**Skills and experience:** Through his previous roles at British Gas and BAA, Mark has a strong background operating within regulated environments. His extensive knowledge of customer-facing businesses is particularly valuable for United Utilities in the pursuit of our strategy to improve customer service.
**Career experience:** Mark was previously chief executive of Barratt Developments plc. He is a former trustee of the Building Research Establishment and the UK Green Building Council. Mark held senior executive roles in Centrica plc and British Gas. He is a former non-executive director at BAA plc and Ladbrokes Coral PLC.
**Current directorships/business interests:** Mark was appointed as a non-executive director at Aggreko plc in October 2020. He was appointed as senior independent non-executive director at Wickes Group plc and as chair of the remuneration committee in April 2021. He is non-executive chairman at Grainger plc and a non-executive director at Premier Marinas Holdings Limited. He is an independent non-executive director of United Utilities Water Limited.
**Specific contribution to the company’s long-term success:** As senior independent non-executive director, Mark applies his own considerable board experience gained during his career to United Utilities and provides a sounding board to the executive in many areas.
Stephen Carter CBE
Independent non-executive director
Responsibilities: To challenge constructively the executive directors and monitor the delivery of the strategy within the risk and control framework set by the board and to lead the board’s agenda on acting responsibly as a business.
Qualifications: Bachelor of Laws (Hons).
Appointment to the board: September 2014.
Skills and experience: As the chief executive of a FTSE 100 listed company, Stephen brings current operational experience to the board. His public sector experience provides additional insight in regulation and government relations. His day-to-day experience in the information and technology industries ensures that the board is kept abreast of these areas of the company’s operating environment.
Career experience: Stephen previously held senior executive roles at Alcatel Lucent Inc. and a number of public sector/service roles, including serving a term as the founding chief executive of Ofcom. He stepped down as a non-executive director at the Department for Business, Energy and Industrial Strategy in December 2020. He is a former chairman of Ashridge Business School and has been a Life Peer since 2008.
Current directorships/business interests: Group chief executive Informa plc. He is an independent non-executive director of United Utilities Water Limited.
Specific contribution to the company’s long-term success: Stephen’s experience as a current chief executive and his previous work in the public sector and government provides valuable insight for board discussions on regulatory matters.
Kath Cates
Independent non-executive director
Responsibilities: To challenge constructively the executive directors and monitor the delivery of the strategy within the risk and control framework set by the board.
Qualifications: Solicitor of England and Wales.
Appointment to the board: September 2020.
Skills and experience: Kath has spent most of her career working in a regulated environment in the financial services industry. Since 2014, she has focused on her non-executive roles, chairing all the main board committees and undertaking the role of senior independent director.
Career experience: Kath was previously chief operating officer at Standard Chartered plc, before which she held a number of roles at UBS Limited over a 22-year period; before that, she qualified as a solicitor. She stepped down as a non-executive director at Brewin Dolphin Holdings plc in February 2021.
Current directorships/business interests: Kath is a non-executive director at RSA Insurance Group plc and chair of the remuneration committee. She is a non-executive director at Columbia Threadneedle Investments where she chairs the TPEN audit committee and a non-executive director of TP ICAP Group Plc. She is an independent non-executive director of United Utilities Water Limited.
Specific contribution to the company’s long-term success: Kath’s broad board experience enables her to contribute to board governance and risk management at United Utilities.
**Alison Goligher**
Independent non-executive director
**Responsibilities:** To challenge constructively the executive directors and monitor the delivery of the strategy within the risk and control framework set by the board and to lead the board’s activities concerning directors’ remuneration.
**Qualifications:** BSc (Hons) Mathematical Physics, MEng Petroleum Engineering.
**Appointment to the board:** August 2016.
**Skills and experience:** Alison has strong technical and capital project management skills, having been involved in large projects and the production side of Royal Dutch Shell’s business. This experience of engineering and industrial sectors provides the board with additional insight into delivering United Utilities’ capital investment programme.
**Career experience:** Royal Dutch Shell (2006 to 2015), where Alison’s most recent executive role was Executive Vice President Upstream International Unconventionals. Prior to that she spent 17 years with Schlumberger, an international supplier of technology, integrated project management and information solutions to the oil and gas industry.
**Current directorships/business interests:** Alison is a non-executive director and chair of the remuneration committee at Meggitt PLC and a part-time executive chair at Silixa Ltd. She was appointed as a non-executive director of Technip Energies NV in February 2021. She is an independent non-executive director of United Utilities Water Limited.
**Specific contribution to the company’s long-term success:** Alison’s understanding of the operational challenges of large capital projects and the benefits of deploying technology provides valuable insight into addressing the longer-term strategic risks faced by the business. Her role as the designated non-executive director for workforce engagement will provide the board with a better understanding of the views of employees and greater clarity on the culture of the company.
---
**Paulette Rowe**
Independent non-executive director
**Responsibilities:** To challenge constructively the executive directors and monitor the delivery of the strategy within the risk and control framework set by the board.
**Qualifications:** MEng + Man (Hons), MBA.
**Appointment to the board:** July 2017.
**Skills and experience:** Paulette has spent most of her career in the regulated finance industry and so provides the board with additional perspective and first-hand regulatory experience. Her experience of technology-driven transformation will contribute to United Utilities’ customer experience programme and its Systems Thinking approach.
**Career experience:** Previously held senior executive roles in banking and technology at Facebook, Barclays and the Royal Bank of Scotland/NatWest. Former trustee and chair of children’s charity The Mayor’s Fund for London.
**Current directorships/business interests:** CEO of Integrated and Ecommerce Solutions and member of the Paysafe Group executive since January 2020. Paysafe, a former FTSE 250 company, is now privately owned by PE firms CVC and Blackstone. She is an independent non-executive director of United Utilities Water Limited.
**Specific contribution to the company’s long-term success:** Paulette’s wide-ranging experience in regulated sectors, profit and loss management, technology and innovation enables her to provide a first-hand contribution to many board topics of discussion. In her current executive role she often faces many of the same issues, and has been able to provide support to senior management at United Utilities.
**Doug Webb**
Independent non-executive director
**Responsibilities:** To challenge constructively the executive directors and monitor the delivery of the strategy within the risk and control framework set by the board.
**Qualifications:** MA Geography and Management Science, Chartered Accountant (FCA).
**Appointment to the board:** September 2020.
**Skills and experience:** Doug has extensive career experience in finance from qualifying as a chartered accountant with Price Waterhouse, his executive roles as CFO of major listed companies and more recently through his non-executive positions and focus on audit committee activities.
**Career experience:** Doug was previously chief financial officer at Meggitt PLC from 2013 to 2018 and prior to that, he was chief financial officer at both the London Stock Exchange Group plc and QinetiQ Group plc. He is a former non-executive director and audit committee chair at SEGRO plc, having stepped down in 2019.
**Current directorships/business interests:** Doug currently serves as a non-executive director and audit committee chair at Johnson Matthey plc, BMT Group Ltd and the Manufacturing Technology Centre Ltd. He is an independent non-executive director of United Utilities Water Limited.
**Specific contribution to the company’s long-term success:** Doug’s financial capabilities and his experience as an audit committee chair strengthen the board’s financial expertise.
Resolutions 13 and 14: reappointment and remuneration of auditor
The board is recommending the reappointment of KPMG LLP as external auditor to the company. There are no contractual obligations that restrict the committee’s choice of external auditor; the recommendation is free from third party influence and no auditor liability agreement has been entered into. An authority for the audit committee of the board to set the remuneration of the auditor will also be sought.
Resolution 15: authorising the directors to allot shares
Paragraph (A) of this resolution 15 would give the directors the authority to allot ordinary shares or grant rights to subscribe for or convert any securities into ordinary shares up to an aggregate nominal amount equal to £11,364,806 (representing 227,296,120 ordinary shares of five pence each). This amount represents approximately one-third of the issued ordinary share capital of the company as at 26 May 2021, the latest practicable date prior to publication of this notice.
In line with the Share Capital Management Guidelines issued by the Investment Association, paragraph (B) of this resolution would give the directors authority to allot ordinary shares or grant rights to subscribe for or convert any securities into ordinary shares in connection with a rights issue in favour of ordinary shareholders up to an aggregate nominal amount equal to £22,729,613 (representing 454,592,260 ordinary shares of five pence each), as reduced by the nominal amount of any shares issued under paragraph (A) of this resolution. This amount (before any reduction) represents approximately two-thirds of the issued ordinary share capital of the company as at 26 May 2021, the latest practicable date prior to publication of this notice.
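The nominal amounts quoted above follow directly from the five pence nominal value per share. As a quick arithmetic check (an illustrative sketch, not part of the notice; the helper name is hypothetical):

```python
NOMINAL_PENCE = 5  # nominal value of one ordinary share, in pence

def nominal_amount_pounds(shares: int) -> int:
    # Nominal amount = number of shares x 5p, expressed in whole pounds.
    return shares * NOMINAL_PENCE // 100

# Figures quoted in the explanatory note for resolution 15:
assert nominal_amount_pounds(227_296_120) == 11_364_806  # paragraph (A), ~one-third
assert nominal_amount_pounds(454_592_260) == 22_729_613  # paragraph (B), ~two-thirds
assert nominal_amount_pounds(34_094_420) == 1_704_721    # resolutions 16 and 17, ~5%
```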
The authorities sought under paragraphs (A) and (B) of this resolution will expire at the conclusion of the annual general meeting of the company held in 2022.
The directors have no present intention to exercise the authorities sought under paragraph (B) of this resolution. As at the date of this notice, no ordinary shares are held by the company in treasury.
Resolutions 16 and 17: disapplying statutory pre-emption rights will be proposed as special resolutions
Resolutions 16 and 17 seek to give the directors the authority to allot ordinary shares (or sell any ordinary shares which the company elects to hold in treasury) for cash without first offering them to existing shareholders in proportion to their existing shareholdings.
The power set out in resolution 16 would be limited to: (i) allotments or sales in connection with pre-emptive offers and offers to holders of other equity securities if required by the rights of those shares, or as the board otherwise considers necessary, or (ii) otherwise up to an aggregate nominal amount of £1,704,721 (representing 34,094,420 ordinary shares of five pence each). This aggregate nominal amount represents approximately 5 per cent of the issued ordinary share capital of the company as at 26 May 2021, the latest practicable date prior to publication of this notice.
In respect of the power under resolution 16(B), the directors confirm their intention to follow the provisions of the Pre-Emption Group’s Statement of Principles (the Principles) regarding cumulative usage of authorities within a rolling three-year period where the Principles provide that usage in excess of 7.5 per cent of the issued ordinary share capital of the company should not take place without prior consultation with shareholders.
Resolution 17 is intended to give the company flexibility to make non-pre-emptive issues of ordinary shares in connection with acquisitions and other capital investments as contemplated by the Principles. The power under resolution 17 is in addition to that proposed by resolution 16 and would be limited to allotments or sales of up to an aggregate nominal amount of £1,704,721 (representing 34,094,420 ordinary shares of five pence each). This aggregate nominal amount represents an additional 5 per cent of the issued ordinary share capital of the company as at 26 May 2021, the latest practicable date prior to publication of this notice.
The powers under resolutions 16 and 17 will expire at the conclusion of the annual general meeting of the company held in 2022.
**Resolution 18: authorising the company to make market purchases of its own shares will be proposed as a special resolution**
Authority is sought for the company to purchase up to 10 per cent of its issued ordinary shares (excluding any treasury shares), renewing the authority granted by the shareholders at previous annual general meetings. The directors have no present intention of exercising the authority to make market purchases, but the authority provides the flexibility to allow them to do so in the future. The directors will exercise this authority only when to do so would be in the best interests of the company, and of its shareholders generally, and could be expected to result in an increase in the earnings per share of the company. The authority will expire at the conclusion of the annual general meeting of the company held in 2022.
Ordinary shares purchased by the company pursuant to this authority may be held in treasury or may be cancelled. The directors would consider holding any ordinary shares the company may purchase as treasury shares. The company currently has no ordinary shares in treasury. The minimum price, exclusive of expenses, which may be paid for an ordinary share is its nominal value. The maximum price, exclusive of expenses, which may be paid for an ordinary share is the higher of:
(i) an amount equal to 105 per cent of the middle market value for an ordinary share for the five business days immediately preceding the date of the purchase; and
(ii) the higher of the price of the last independent trade and the highest current independent bid for an ordinary share on the trading venues where the purchase is carried out.
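The maximum-price limit described above reduces to a simple formula: the higher of 105 per cent of the five-day middle market average and the higher of the last independent trade and the highest current independent bid. The sketch below is illustrative only, using hypothetical prices:

```python
def max_purchase_price(middle_market_avg: float, last_trade: float, highest_bid: float) -> float:
    # Higher of (i) 105% of the five-day middle market average and
    # (ii) the higher of the last independent trade and the highest current bid.
    return max(1.05 * middle_market_avg, last_trade, highest_bid)

# Hypothetical prices in pounds: a strong current bid can lift the cap above 105%.
assert max_purchase_price(10.00, 10.40, 10.80) == 10.80
assert max_purchase_price(10.00, 9.90, 9.95) == 10.50
```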
There are share awards outstanding over 1,828,379 ordinary shares, representing 0.27 per cent of the company’s ordinary issued share capital as at 26 May 2021. If the authority to purchase ordinary shares was exercised in full and those shares were subsequently cancelled, these share awards would represent 0.30 per cent of the company’s ordinary issued share capital.
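The two percentages quoted can be reproduced from the figures in the note. In this sketch the issued share capital is inferred as ten times the buyback authority, which is an assumption rather than a figure stated in the notice:

```python
share_awards = 1_828_379        # ordinary shares under outstanding awards
buyback_authority = 68_188_841  # maximum shares purchasable (10% of issued capital)
issued = buyback_authority * 10 # implied issued ordinary shares (assumption)

before = round(100 * share_awards / issued, 2)                        # as issued today
after = round(100 * share_awards / (issued - buyback_authority), 2)   # after full buyback and cancellation
assert (before, after) == (0.27, 0.30)
```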
Resolution 19: articles of association will be proposed as a special resolution
It is proposed that the company adopt new articles of association (the ‘new articles’) in order to update the company’s current articles of association (the ‘current articles’).
As a result of the COVID-19 pandemic, the company recognises that some shareholders may wish to attend general meetings virtually in the future. The new articles will permit the company to hold general meetings which allow for attendance and participation both in person and through electronic means. The company believes that this will encourage the greatest degree of shareholder engagement, and is in line with current market practice.
The principal proposed changes relate to the provisions dealing with the definitions of a meeting under the articles, the manner in which votes are taken in a partly virtual meeting, and practicalities relating to the security, adjournment or postponement of general meetings. The amendments to these provisions are designed to facilitate shareholder engagement by providing a choice to shareholders as to their means of attendance and participation at a general meeting. Under the new articles, the company is still required to hold a physical general meeting which any shareholder retains the right to attend. An exclusively electronic meeting is expressly prohibited. Any documents that are made available to the general meeting in person are also to be made available to shareholders attending by electronic means.
Due to the practicalities of voting at a meeting held partly by electronic means, under the new articles any resolution put to the vote at such a general meeting will be decided on a poll vote. Under the current articles, whilst this is already the case for Substantive Resolutions, the default position is that Other Resolutions are voted on by a show of hands unless a poll is demanded by any parties eligible to request one. Other Resolutions are of a strictly procedural nature (for example, to amend patent errors in Substantive Resolutions, elect the chairman of the meeting, or to adjourn a meeting) and the company does not believe this to be a significant change. We have also updated the articles relating to security and order to account for virtual attendance. Finally, a provision has been added, in line with current market practice, to provide the board with the flexibility to postpone or move the location of the general meeting, or alter any electronic facilities, to account for any difficulties arising from technological or other unexpected issues.
As the company is proposing to make the changes described above, the opportunity has been taken generally to incorporate amendments of a more minor nature to reflect changes in applicable law or current market best practice including the adoption of gender-neutral language throughout, and to include some clearer language in other parts of the new articles.
The company is also proposing to increase the borrowing limit multiple in article 90(B) of the current articles from two and a half times share capital and reserves to three times share capital and reserves. The company’s external borrowings are already limited by existing internal controls, the need to maintain an acceptable credit rating and the principles of sound corporate governance. The adoption of the higher limit will not materially change the company’s borrowing policy and the board believes it to be timely and in the best commercial interests of the group to refresh its borrowing limits which will bring these in line with the utilities sector and peer group companies.
The new articles (and the current articles) are available for inspection as noted on page 24 of this notice.
Resolution 20: notice of general meeting will be proposed as a special resolution
The Companies Act 2006 requires the notice period for general meetings of the company to be at least 21 days. Under its articles of association, the company is currently able to call general meetings (other than an annual general meeting) on not less than 14 days’ notice and would like to preserve this ability. In order to do so, shareholders must first approve the calling of meetings on 14 days’ notice. The shorter notice period would not be used as a matter of routine, but only when the flexibility was merited by the business of the meeting and the circumstances requiring the business. The approval will be effective until the end of the 2022 annual general meeting of the company, when it is intended that a similar resolution will be proposed.
Resolution 21: authorising political donations and political expenditure
Shareholder approval is required for donations to political parties, independent election candidates and other political organisations, and for other political expenditure. The company does not make, and does not intend to make, donations to political parties. However, the definition of political donations is very broad and includes expenses incurred as part of the process of having dialogue with members of parliament and opinion formers to ensure that the issues and concerns of United Utilities are considered and addressed. The resolution seeks to ensure that the company and its subsidiaries remain within the law in carrying out these activities.
General information
Questions
Shareholders have a statutory right in accordance with section 319A of the Companies Act 2006 to ask, and to receive an answer to, a question relating to the business of the meeting, although an answer need not be given if, among other things, doing so would be undesirable in the interests of the company or the good order of the meeting, or would involve the disclosure of confidential information.
Website
A copy of this notice of meeting and details of the company’s share capital in accordance with section 311A of the Companies Act 2006 are available on the company’s website at unitedutilities.com/corporate
Security
Security personnel will be on hand at the meeting and we reserve the right to search the bags of any person seeking to access the venue. No recording equipment must be used. The company also reserves the right to take appropriate measures in response to the COVID-19 pandemic and any Government guidance provided in respect of it.
Admission card
You should bring your admission card to the meeting if you are attending the venue, as it will speed up the registration process; it also serves as your poll card. If you do not have your admission card, you will need proof of identity before you can be admitted. The doors will open at 10.00am and the meeting will start at 11.00am.
Documents
Copies of executive directors’ service contracts and non-executive directors’ letters of appointment will be available for inspection at the venue for at least 15 minutes prior to, and until the close of, the meeting. Similarly, copies of both the company’s current articles of association and the new articles of association proposed under resolution 19 are available for inspection at the company’s registered office and at the offices of Slaughter and May, One Bunhill Row, London EC1Y 8YY during normal business hours until the date of the AGM and will be available at the venue for at least 15 minutes prior to the start of, and until the close of, the meeting. Some of these documents are ordinarily available on the company’s website.
Voting
The record date for entry on the register of members for a member to have the right to attend and vote at the meeting is 6.30pm on Wednesday 21 July 2021 (or, if the meeting is adjourned, 6.30pm on the day two days before the date fixed for the reconvened meeting). A poll vote will be held on each resolution and scrutinised by Equiniti, ensuring the votes cast are correctly recorded, including any proxy votes. One vote can be cast for each ordinary share held. Members have the right to request information to enable them to determine that their vote was validly recorded and counted. If you wish to receive this information, please contact Equiniti (see page 31).
Proxy appointment
Every shareholder who is entitled to attend and vote has the right to appoint one or more persons as their proxy. A proxy need not be a shareholder. Shareholders can appoint the chairman of the meeting as their proxy, or another person. More than one proxy may be appointed provided each proxy is appointed to exercise rights in respect of a different share or shares held by the shareholder. Where a member appoints multiple proxies but the proxy forms submitted by that member would give the appointed proxies the apparent right to exercise a number of votes on behalf of that member in a general meeting in excess of the number of shares
actually held by that member, then each of those proxy forms will be invalid and none of the proxies appointed under those proxy forms will be entitled to attend, speak, or vote at the AGM.
You may appoint your proxy or proxies electronically (see page 24) or by completing, detaching and returning the proxy form enclosed with this notice. The deadline for receipt by email by Equiniti is no later than 11.00am on Wednesday 21 July 2021.
To be valid, completed proxy forms must be received by the company’s registrar, Equiniti, at Aspect House, Spencer Road, Lancing, West Sussex, United Kingdom, BN99 6DA by no later than 11.00am on Wednesday 21 July 2021. If a proxy form is lodged with the registrar and a shareholder subsequently attends and wishes to vote, the original proxy vote will be disregarded. To appoint more than one proxy, you may photocopy the form of proxy or request additional forms from the company’s registrar, Equiniti, by telephone on 0371 384 2041. Lines are open 8.30am to 5.30pm, Monday to Friday, excluding public holidays in England and Wales; overseas shareholders may call +44 121 415 7048. Alternatively, you may write to them at the above address. Multiple proxy appointments should be returned together in the same envelope.
The company is not under any obligation to investigate whether the exercise of any vote by any proxy accords with any instruction given by his appointor.
**Persons nominated to enjoy information rights**
If you are not a shareholder, but enjoy information rights under the Companies Act 2006, you are not entitled to appoint a proxy. However, there may be an agreement between you and your nominating shareholder which entitles you to be appointed, or to have someone else appointed, as their proxy. If you don’t have this right, or don’t wish to exercise it, you may still be entitled under such an agreement to give instructions to that shareholder as to how you would like them to vote.
**Electronic proxy voting**
Shareholders can register the appointment of a proxy for this meeting at sharevote.co.uk which is run by Equiniti. To do this, you’ll need the three numbers (voting ID, task ID and shareholder reference number) that are quoted on your proxy form. Then follow the instructions on the website. If you have already registered with the company’s registrar’s online portfolio service, Shareview, you can submit your proxy by logging on to your portfolio at shareview.co.uk using your usual user ID and password. Once logged in, simply click ‘View’ on the ‘My Investments’ page, click on the link to vote and then follow the on-screen instructions. The appointment of a proxy must be received by Equiniti no later than 11.00am on Wednesday 21 July 2021.
Please read the terms and conditions relating to the use of this facility before appointing a proxy. These terms and conditions may be viewed on the website. You may not use any electronic address provided in this notice to communicate with the company for any purpose other than those stated. Any electronic communication sent by a shareholder that is found to contain a virus will not be accepted.
**CREST electronic proxy appointment service**
CREST members who wish to appoint a proxy or proxies through the CREST electronic proxy appointment service may do so by using the procedures described in the CREST manual. CREST personal members or other CREST sponsored members, and those CREST members
who have appointed a voting service provider(s), should refer to their CREST sponsor or voting service provider(s), who will be able to act on their behalf.
In order for a proxy appointment or instruction made using the CREST service to be valid, the appropriate CREST message (a CREST Proxy Instruction) must be properly authenticated in accordance with Euroclear UK & Ireland Limited’s specifications and must contain the information required for such instructions, as described in the CREST manual (available via www.euroclear.com). The message, regardless of whether it constitutes the appointment of a proxy or is an amendment to an instruction given to a previously appointed proxy, must, in order to be valid, be transmitted so as to be received by Equiniti (ID RA19) no later than 11.00am on Wednesday 21 July 2021 (or not less than 48 hours before any adjourned meeting).
For this purpose, the time of receipt will be taken to be the time (as determined by the time stamp applied to the message by the CREST Application Host) from which the issuer’s agent is able to retrieve the message by enquiry to CREST in the manner prescribed by CREST. After this time, any change of instructions to proxies appointed through CREST should be communicated to the appointee through other means.
CREST members and, where applicable, their CREST sponsors or voting service providers should note that Euroclear UK & Ireland Limited does not make available special procedures in CREST for any particular message. Normal system timings and limitations will, therefore, apply in relation to the input of CREST Proxy Instructions. It is the responsibility of the CREST member concerned to take (or, if the CREST member is a CREST personal member, or sponsored member, or has appointed a voting service provider(s) to procure that his CREST sponsor or voting service provider(s) take(s)) such action as shall be necessary to ensure that a message is transmitted by means of the CREST system by any particular time. In this connection, CREST members and, where applicable, their CREST sponsors or voting system providers are referred, in particular, to those sections of the CREST manual concerning practical limitations of the CREST system and timings.
The company may treat as invalid a CREST Proxy Instruction in the circumstances set out in Regulation 35(5)(a) of the Uncertificated Securities Regulations 2001.
Corporate representative
Any corporation which is a member can appoint one or more corporate representatives who may exercise on its behalf all of its powers as a member provided that they do not do so in relation to the same shares. Where a member appoints more than one corporate representative in respect of its shareholding, but in respect of different shares, those corporate representatives can act independently of each other and validly vote in different ways. The company is not under any obligation to investigate whether the exercise of any vote by any corporate representative accords with any instruction given by his appointor.
Issued share capital
As at 26 May 2021 (being the latest practicable date prior to the publication of this document):
(i) the company’s issued share capital consisted of 681,888,418 ordinary shares of five pence each and 273,956,180 deferred shares of 170 pence each; and
(ii) the total voting rights in the company were 681,888,418.
Shareholder requests
Under section 527 of the Companies Act 2006 (the Act), members meeting the threshold requirements set out in that section have the right to require the company to publish on a website a statement setting out any matter relating to:
(i) the audit of the company’s accounts (including the auditor’s report and the conduct of the audit) that are to be laid before the annual general meeting; or
(ii) any circumstance connected with an auditor of the company ceasing to hold office since the previous meeting at which annual accounts and reports were laid in accordance with section 437 of the Act.

The company may not require the shareholders requesting any such website publication to pay its expenses in complying with sections 527 or 528 of the Act. Where the company is required to place a statement on a website under section 527 of the Act, it must forward the statement to the company’s auditor not later than the time when it makes the statement available on the website. The business which may be dealt with at the annual general meeting includes any statement that the company has been required under section 527 of the Act to publish on a website.
Under sections 338 and 338A of the Act, shareholders may request the company to give notice of a resolution which is intended to be moved at an annual general meeting, or to include in the business of an annual general meeting other business which may properly be so included, provided that the resolution or other business would not be defamatory, frivolous or vexatious, and in the case of a proposed resolution, provided that the resolution would not be ineffective. The company will give notice of such a resolution or of such other business if sufficient requests have been received in accordance with sections 338(3) and 338A(3) of the Act.
Privacy
Visit unitedutilities.com/privacy for details of how we handle your personal details.
Electronic link to the United Utilities Group PLC 2021 Annual General Meeting
United Utilities Group PLC will be enabling shareholders to observe the 2021 AGM electronically, and submit questions in writing via the website. This can be done by accessing the following AGM website, hosted by Lumi - https://web.lumiagm.com (the Lumi AGM website).
Accessing the AGM website
The Lumi AGM website can be accessed online using most well-known internet browsers such as Internet Explorer (it is not compatible with versions 10 and below), Edge, Chrome, Firefox and Safari on a PC, laptop or internet-enabled device such as a tablet or smartphone. An active internet connection will be required at all times in order to allow you to submit questions and listen to the audiocast. It is the user’s responsibility to ensure they remain connected for the duration of the meeting.
Logging In
On accessing the Lumi AGM website, you will be asked to enter a Meeting ID which is 125-883-286.
You will then be prompted to enter your unique shareholder reference number (SRN), which can be found printed on your proxy card. Your PIN is the first two and last two digits of your SRN. The link via the website will be available from 10.00am on 23 July 2021.
Broadcast
The meeting will be broadcast with presentation slides. Once logged in, and at the commencement of the meeting, you will be able to listen to the proceedings of the meeting on your device. You will also be able to see the slides presented at the meeting; these slides will change automatically as the meeting progresses.
Questions
Shareholders may submit questions in writing via the website. Please select the messaging icon from within the navigation bar and type your question at the bottom of the screen. Once finished, press the ‘send’ icon to the right of the message box to submit your question.
Duly appointed proxies and corporate representatives
Please contact the company’s registrar before 11.00am on 22 July 2021 on 0371 384 2041, or +44 (0) 121 415 7048 if you are calling from outside the UK, for your SRN and PIN. Lines are open 8.30am to 5.30pm Monday to Friday (excluding public holidays in England & Wales).
User Guide to accessing the United Utilities Group PLC 2021 Annual General Meeting
Meeting ID: 125-883-286
To login you must have your SRN and PIN
1. Open the Lumi AGM website and you will be prompted to enter the Meeting ID. If a shareholder attempts to login to the website before the meeting is live*, a pop-up dialogue box will appear.
*10:00 am on 23 July 2021.
2. After entering the Meeting ID, you will be prompted to enter your unique SRN and PIN.
3. When successfully authenticated, you will be taken to the Home Screen.
4. To view the meeting presentation, expand the ‘Broadcast Panel’, located at the bottom of your device. If viewing through a browser, it will appear automatically. This can be minimised by pressing the same button.
5. If you would like to submit a question, select the messaging icon. Type your message within the chat box at the bottom of the messaging screen. Click the send button to submit.
Shareholder information
**Key dates**
- **24 June 2021**
- Ex-dividend date for 2020/21 final dividend
- **25 June 2021**
- Record date for 2020/21 final dividend
- **23 July 2021**
- Annual general meeting
- **2 August 2021**
- Payment of 2020/21 final dividend to shareholders
- **24 November 2021**
- Announcement of half-year results for the six months ending 30 September 2021
- **16 December 2021**
- Ex-dividend date for 2021/22 interim dividend
- **17 December 2021**
- Record date for 2021/22 interim dividend
- **1 February 2022**
- Payment of 2021/22 interim dividend to shareholders
- **May 2022**
- Announce the final results for the 2021/22 financial year
- **June 2022**
- Publish the annual report and financial statements for the 2021/22 financial year
**Electronic communications**
We are encouraging our shareholders to receive their shareholder information by email and via our website. Not only is this a quicker way for you to receive information, it helps us to be more sustainable by reducing paper and printing materials and lowering postage costs.
Registering for electronic shareholder communications is very straightforward, and is done online via shareview.co.uk which is a website provided by our registrar, Equiniti.
Log on to shareview.co.uk and you can:
- set up electronic shareholder communication;
- view your shareholdings;
- update your address details if you change your address; and
- get your dividends paid directly into your bank account.
Please do not use any electronic address provided in this notice or in any related document to communicate with the company for any purposes other than those expressly stated.
**Dividends paid direct to your bank account**
Make life easier and have your dividends paid straight into your bank account:
- The dividend goes directly into your bank account and is available immediately;
- No need to pay dividend cheques into your bank account;
- No risk of losing cheques in the post;
- No risk of having to replace spoiled or out-of-date cheques; and
- It’s cost-effective for your company.
To take advantage of this, please contact Equiniti via shareview.co.uk or complete the dividend mandate form that you receive with your next dividend cheque.
If you choose to have your dividend paid directly into your bank account you’ll receive one tax voucher each year. This will be issued with the interim dividend normally paid in February and will contain details of all the dividends paid in that tax year. If you would like to receive a tax voucher with each dividend payment, please contact Equiniti.
Registrar’s details
The group’s registrar, Equiniti, can be contacted on:
0371 384 2041 or textphone for those with hearing difficulties:
0371 384 2255. Lines are open 8.30am to 5.30pm, Monday to Friday excluding public holidays in England and Wales.
The address is:
Equiniti, Aspect House, Spencer Road,
Lancing, West Sussex, BN99 6DA.
Overseas shareholders may contact them on:
+44 (0)121 415 7048
Equiniti offers a share dealing service by telephone:
0345 603 7037 and online: shareview.co.uk/dealing
Equiniti also offers a stocks and shares ISA for United Utilities shares: call 0345 300 0430 or go to: shareview.co.uk/dealing
Keeping you in the picture
You can find information about United Utilities quickly and easily on our website: unitedutilities.com/corporate including: the annual report and financial statements, company announcements, the half-year and final results and the accompanying presentations.
Warning to shareholders
Please be very wary of any unsolicited contact about your investments or offers of free company reports. It may be from an overseas ‘broker’ who could sell you worthless or high risk shares. If you deal with an unauthorised firm, you would not be eligible to receive payment under the Financial Services Compensation Scheme. Further information and a list of unauthorised firms that have targeted UK investors is available from the Financial Conduct Authority at: fca.org.uk/consumers/unauthorised-firms-individuals
Designed and produced by Jones and Palmer Ltd.
Printed by Park Communications on FSC® certified paper. Park is an EMAS certified company and its Environmental Management System is certified to ISO 14001. 100% of the inks used are vegetable oil based, 95% of press chemicals are recycled for further use and, on average, 99% of any waste associated with this production will be recycled. This document is printed on Galarie. The paper contains material sourced from responsibly managed forests, certified in accordance with the Forest Stewardship Council®. The pulp used is bleached using an elemental chlorine free (ECF) process.
Registered office: United Utilities Group PLC
Haweswater House, Lingley Mere Business Park,
Lingley Green Avenue, Great Sankey, Warrington, WA5 3LP
Registered in England and Wales. Registered number 6559020
unitedutilities.com
Telephone +44 (0)1925 237000
Stock Code: UU. |
MAGICALIA
Race of Wonders
JENNIFER BELL
“Fun, fast and ferociously readable.”
Elle McNicoll
“A super-exciting and unputdownable story.”
Thomas Taylor
“A proper rip-roaring adventure!”
Dominique Valente
“A spellbinding read.”
Rachel Chivers Khoo
“An absolute tour-de-force.”
Justin Somper
“A monstrously fun adventure.”
Andy Sagar
“A 2024 must read.”
Swapna Haddow
“True magic.”
Sarwat Chadda
“A gloriously imaginative and thrilling adventure.”
Lizzie Huxley-Jones
“Wonderful … bursts with technicolour.”
Jasbinder Bilan
“A bold and exciting new world of magic.”
Amy Sparkes
“Spectacular! A world-beating feat of imagination.”
Sinéad O’Hart
“Jennifer Bell makes the impossible seem possible … an absolute triumph!”
Mel Taylor-Bessent
For Beth
This is a work of fiction. Names, characters, places and incidents are either the product of the author’s imagination or, if real, used fictitiously. All statements, activities, stunts, descriptions, information and material of any other kind contained herein are included for entertainment purposes only and should not be relied on for accuracy or replicated as they may result in injury.
First published in Great Britain 2024 by Walker Books Ltd
87 Vauxhall Walk, London SE11 5HJ
2 4 6 8 10 9 7 5 3 1
Text © 2024 Jennifer Bell
Cover and interior illustrations © 2024 David Wyatt
The right of Jennifer Bell to be identified as author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988
This book has been typeset in Sabon MT Pro
Printed and bound by CPI Group (UK) Ltd, Croydon CR0 4YY
All rights reserved. No part of this book may be reproduced, transmitted or stored in an information retrieval system in any form or by any means, graphic, electronic or mechanical, including photocopying, taping and recording, without prior written permission from the publisher.
British Library Cataloguing in Publication Data:
a catalogue record for this book is available from the British Library
ISBN 978-1-5295-0614-3
www.walker.co.uk
FSC® MIX Paper | Supporting responsible forestry
FSC® C171272
Until a monster swallowed her PE kit, Bitsy’s evening had been going to plan.
She’d finished all her homework, tidied her room and defeated her best friend, Kosh, at *Mario Kart*. Twice. After dinner, the two of them had blown up an inflatable mattress so he could stay over this weekend while his parents were away and join Bitsy and her dad on a holiday to Paris on Monday. There was only one thing left to do before they could call it a night.
“Recording in three, two…” Bitsy tapped a button on her laptop, adjusted her headphones, and leaned closer to the wireless microphone on her desk. “Hello and welcome to *Poddingham*, the local news podcast for Oddingham village. It’s Friday the twenty-ninth of March. I’m your
host, Bitsy Wilder, and this week I’m joined by our sports correspondent—”
“—Koshan Ranasinghe!” Kosh declared his Sri Lankan surname like a football commentator announcing a goal. Sitting beside her, he had a tatty Oddingham FC beanie pulled over his floppy black hair and was wearing his usual slouchy T-shirt-and-tracksuit-bottoms combo. “Some of you might also know me as the boy who delivers your newspapers and accidentally rides a bike through your flowerbeds. Shout out to Mrs Harris on Bridge Lane for always being so chill about it!”
Bitsy covered the microphone with the sleeve of her cardigan. “Mrs Harris is not chill about it, by the way. I saw her yelling at your mum yesterday.”
“You did?” Kosh paused. “Maybe edit that bit out.”
Shaking her frizzy blonde curls, Bitsy ploughed on. “Coming up, Kosh has the lowdown on last night’s football match between Oddingham and Bletchy Town. First, though, the headlines.” She flipped open her trusty reporter’s notebook and tried to ignore a heavy feeling of disappointment as she read, “Tarmac trouble: residents concerned as potholes worsen on Church Street. Wood you believe it?: sighting of rare woodpecker thrills local birdwatchers. And Gotta ketchup-all: Oddingham gardener grows tomato shaped like Pikachu.”
“Gotta ketchup-all?” Kosh laughed. “That’s got to be one of your best puns yet.”
Bitsy gave him a weak smile. As much as she enjoyed devising witty headlines, she wished there was more interesting news in Oddingham. She’d never understood why her dad had relocated from London to such a boring village in the middle of nowhere, but if she was ever going to become a professional journalist, she had to start by reporting the experiences of the community she lived in. Even if that meant talking about lookalike vegetables.
She gazed at the pinboard above her desk, cluttered with newspaper clippings of the articles her mum had written. Matilda Wilder had passed away in a car accident when Bitsy was five, but Bitsy’s dad, Eric, talked about her all the time – how she’d been an investigative reporter for the BBC and had adventured around the globe, sniffing out important stories that exposed corruption and fought injustice. Matilda had recorded her investigations in reporter’s notebooks too. Someday, Bitsy was determined to follow in her footsteps.
Glancing back at her notebook, Bitsy was about to begin her report on potholes when a rumbling *boom* reverberated around the house.
“What was that?” Kosh asked. “It sounded like thunder … but inside the house.”
Bitsy slid off her headphones. She could hear voices
talking downstairs – her dad and someone Bitsy couldn’t place. Something about her dad’s tone made Bitsy’s heart race.
Stuffing her notebook into her jeans pocket, she rushed to open her bedroom door. A strange shadow was climbing the stairs. It looked like the outline of a large animal with long whiskers and a bulbous head. “Dad?” she called uncertainly. He had a goofy sense of humour; perhaps he was playing a practical joke. “Dad, are you—?”
But her question wedged in her throat as a hamster the size of a bathtub heaved itself to the top of the stairs, wheezing heavily. Amethyst-purple fur covered the beast’s entire body, except for a bald patch above its nose where a jagged black rhinoceros’ horn protruded. The beast’s violet eyes glittered as it spotted the wicker laundry basket on Bitsy’s landing. Scurrying forward, it snared the basket in its claws, opened its mouth – revealing four overgrown incisors – and tossed the contents, PE kit and all, to the back of its throat.
“What in the world is that?!” Kosh choked, jumping out of his chair.
Bitsy stumbled back. For a split second, she thought she might be hallucinating – after all, a purple hamstoceros couldn’t possibly be real – but that didn’t explain how Kosh could see the monster too. “I don’t know!” she spluttered, diving behind her bedroom door. “Hide!”
Kosh dashed across the floor and flattened himself against the wall beside Bitsy. “Do you think it’s friendly? What if it wants to eat us?!”
The wobbly pitch of his voice matched the jumpy feeling in Bitsy’s stomach. She peeked through a gap in the door. The hamstoceros was sitting on its hind legs, gobbling the contents of her dad’s bookcase. Its diet seemed to consist of absolutely everything… “We need to sneak downstairs and find my dad – he could be in trouble,” she whispered, desperately hoping he was OK. As the hamstoceros tramped into her dad’s bedroom at the other end of the landing, she steadied her nerves and snuck out from behind the door. “Come on, this is our chance.”
They tiptoed towards the stairs. Like most houses in Oddingham, Bitsy’s was old and the floorboards were notoriously creaky. Her knees trembled as she crept forward, trying to remember the quiet parts of the landing. Kosh trod carefully in her footsteps, holding his arms out for balance. At the top of the stairs, Bitsy grabbed the banister and lowered her slipper onto the uppermost step…
But as she shifted her weight forward, the door to her dad’s bedroom clattered and the hamstoceros waddled out, chewing on one of her dad’s work ties. Its cheeks had swollen to the size of beach balls and were now stuffed
with so many oddly shaped lumps, the hamstoceros could barely fit its head through the doorframe.
Bitsy froze as the monster caught sight of them. It hastily slurped down the rest of her dad’s tie and lowered its horn like it was taking a bow.
Kosh hesitated. “What is—?”
“Yeeee!” With a high-pitched squeal, the hamstoceros charged.
“Not friendly!” Kosh wailed, pushing Bitsy forward. “Go!”
They scrambled down the stairs two at a time as the hamstoceros rammed into the wall behind them. As if struck by an earthquake, the staircase shook in all directions. Plaster crumbled from the ceiling, and a couple of pictures fell off the wall and smashed onto the steps. Coughing dust out of her lungs, Bitsy landed on the ground floor and raced along the hallway. Voices were coming from the lounge.
“Give the book to me!” a woman snarled.
“You can’t have it,” Bitsy’s dad said fiercely. “It doesn’t belong to you.”
With a burst of speed, Bitsy bolted through the door ahead of Kosh and skidded to a stop in the middle of the carpet.
A tall, raven-haired woman with pale skin was pacing by the TV. Bitsy had never seen her before, but with her
shaved undercut, dark eyeliner, combat trousers and heavy biker boots, she cut a striking figure.
“Bitsy!” Eric Wilder blinked at her from behind his steel-framed spectacles. There were tea stains on his jumper and an empty mug rolling back and forth by his feet. “I’m, uh, just dealing with a surprise visitor. Take Kosh back upstairs and—”
But before he could finish, the hamstoceros barrelled through the door behind them, roaring furiously. Clumps of shredded wallpaper dangled from its horn and dust caked its whiskers like it had faceplanted in icing sugar. It surveyed the room and fixed Bitsy and Kosh with a malevolent glare as if to say, *Prepare to join your dirty laundry.*
Eric stiffened. “On second thoughts, both of you get behind me. Now!”
Bitsy grabbed Kosh’s arm and they dropped behind the closest sofa. “What’s going on, Dad?” she asked breathlessly. “What is that thing?”
“It’s called a magicore,” Eric said, backing steadily away from the hamstoceros. “They’re powerful beasts conjured from emotional energy. That particular species is conjured from greed.”
A beast conjured from greed? The concept pinballed around Bitsy’s head, making her dizzy. “I don’t understand. What’s it doing here? And who’s she?”
The raven-haired woman studied Bitsy with a wry smile. She wore studded leather gloves and a dagger-shaped bronze earring in one ear. Eric glowered at her, pain flickering across his face like it sometimes did when he spoke about Bitsy’s mum. “I’ll explain later. Just stay down, both of you.”
A cold feeling spread through Bitsy’s chest like she’d just been stabbed with an icicle. How did her dad know all this? Had he been keeping secrets from her? It didn’t make sense.
The raven-haired woman stomped over to the hamstoceros. “Well?” she asked sharply, surveying the monster’s bloated cheeks. “Did you find the book?”
As if it had understood the woman’s question, the hamstoceros snorted. It wiggled its cheeks like it was gargling with mouthwash and, with a loud clatter, vomited up an assortment of her dad’s possessions, including two pairs of shoes, a dozen astronomy textbooks, a long black telescope and a fleecy tartan dressing gown with a hole in the sleeve. Finally, it spewed up a week’s worth of Wilder dirty laundry.
The raven-haired woman scrunched her nose as she kicked through the drool-covered pile. “It’s not here. Keep hunting.”
The hamstoceros huffed and, with its cheeks now shrunk to the size of watermelons, plodded towards a
glass cabinet that stood against one wall. Bitsy tensed. The cabinet contained a collection of her mum’s journalism awards, plus several souvenirs from her mum’s travels.
She sprang to her feet as the hamstoceros smashed through the front of the cabinet, reached inside and began devouring trinkets. “Dad, do something!”
Eric’s expression tightened. He looked back and forth between Bitsy and the hamstoceros like he was wrestling with a decision. Finally, he pulled a fountain pen from his trouser pocket and aimed it threateningly at the raven-haired woman. “You have until the count of three to take your magicore and leave. One…”
“What’s he going to do with that?” Kosh whispered as Bitsy crouched back down. “Squirt ink in her face?”
Bitsy shook her head. She’d never seen the fountain pen before.
“Two…”
The woman flared her nostrils. “I don’t have time for this. If you won’t give me the book, I’ll have to take the next best thing.” She signalled to the hamstoceros. “Prepare for extraction.”
The hamstoceros’ fur bristled. It promptly abandoned the statuette it had been about to eat and bared its teeth at Eric.
Eric’s fingers tightened around his pen. Bitsy noticed the barrel glowing blue under his touch.
“Three!”
A cloud of twinkling copper particles burst from the pen with a soft crackle. They whirled through the air like a murmuration of starlings and formed a wavy sausage the width of Bitsy’s thigh. The sausage wriggled, and the particles blew away…
…to reveal a flying, silver caterpillar. Beneath its transparent skin, its body appeared to be made of dense fog that flickered with electrical sparks.
Kosh’s mouth fell open. “Tell me you see…”
“I see it,” Bitsy said, squeezing his arm. Her pulse was racing. Had her dad just conjured a – what had he called it? – magicore?
The caterpillar had a round face with a tiny black mouth, neon-blue eyes and a pair of squidgy antennae. As it whipped through the air, it kept changing direction like it wasn’t sure which way to go.
Bitsy’s dad smiled at the caterpillar like it was an old friend. “Quasar, over here. I need your help.”
The caterpillar zoomed to Eric’s side and nuzzled against his ribs, causing a fine layer of Eric’s sandy-blond hair to stand on end from a static build-up. Was Quasar the magicore’s name? Eric was an astrophysicist and had once told Bitsy that a quasar was a brightly shining nucleus in space…
“Protect Bitsy and Kosh at all costs,” Eric told Quasar
firmly. He jabbed a finger at the hamstoceros. “And extinguish that magicore!”
On command, Quasar whirled around to face the hamstoceros. It wiggled its bottom and shot towards its opponent like a giant silver bullet. The hamstoceros growled and lowered its horn. Just as it prepared to charge, Quasar hurled a bolt of electricity at its feet.
A loud clap pierced the air, making Bitsy flinch. The hamstoceros squealed and rocketed to the ceiling in a cloud of smoke. Shrieking in outrage, it rushed at Quasar, slashing with its claws. Broken furniture went flying as the two magicores grappled with each other, tearing around the room in a purple and silver blur.
In the tussle, the hamstoceros got its foot tangled in the electrical cord of a table lamp. The lamp went flinging through the air and struck Eric hard on the side of the head.
“Dad!” Bitsy cried, jumping up.
“Bitsy…?” he slurred, wobbling forward. “Stay—”
But then his eyes rolled back in his head and he collapsed onto the floor like a sack of potatoes. Although Bitsy could see his chest moving, the rest of his body was motionless.
“Look out!” Kosh yanked on Bitsy’s leg and she ducked just in time as a flaming table leg came frisbeeing over their heads and smashed into the wall behind.
She protected her face with her arms as fiery debris rained over them. “We have to help my dad!”
But her voice was drowned out by another rumble of thunder. Lightning flared across the ceiling. The floor vibrated.
Then all at once, the room fell quiet.
Bitsy listened carefully for sounds of movement but there was nothing.
“Is it over?” Kosh asked, lifting his head out from under his arms.
Gripping the sofa tightly, Bitsy pulled herself to her feet.
The room looked like a bomb had hit it. Scorch marks peppered the walls, ripped cushions and broken furniture lay strewn across the floor, and sparks jumped from a crack in the TV. A splintered heap of wood rested in the middle of the carpet where a coffee table had once been.
But the damage wasn’t what troubled her. As Bitsy surveyed the room, a bubble of panic rose to the back of her throat.
The raven-haired lady, the hamstoceros, Quasar and her dad…
They had all vanished.
“This is impossible,” Kosh said, emerging shakily from behind the sofa. “They can’t have just disappeared into thin air.”
Adrenaline was still coursing through Bitsy’s veins as she staggered into the centre of the room. “Then where did they go? They didn’t escape into the garden because the patio doors were locked, and the only other exit was via the hallway behind us.” She paused as she pictured her dad sprawled on the carpet. “Also, my dad was unconscious. He couldn’t have moved anywhere.” Her insides churned with worry. She had to find him.
“In that case, the goth lady must have taken him somewhere,” Kosh concluded. “It’s the only explanation. The question is: why?”
A feeling of dread crept over Bitsy as she remembered something the raven-haired woman had said. “She was looking for a book. She warned my dad that if he didn’t give it to her, she’d be forced to take the next best thing. I think… I think she meant *my* dad. When she told her hamstoceros to prepare for extraction, she must have been referring to him! Kosh, he’s been kidnapped!”
It suddenly felt like the room was spinning. Bitsy had no idea where her dad was or what was happening to him. But that woman looked dangerous. She grabbed Kosh’s arm to steady herself, feeling woozy.
“It’s going to be all right,” he said, squeezing her shoulders. “Listen to me. Wherever she’s taken him, we’ll find him together.”
Bitsy nodded, but her head was swimming. *Conjuring… Magicores…* How was she going to rescue her dad when she didn’t even understand what was going on?
Kosh pulled his mobile phone out of his pocket and tapped the screen.
“We can’t call emergency services,” Bitsy said, shaking her head. “If we tell them Dad’s been snatched by a lady with a hamstoceros, they’ll just think we’re pranking them. No one in the village is going to believe us, either. And your parents are away.”
“I’m not dialling 999 or my parents,” Kosh said,
holding his phone to his ear. “I’m calling your dad. If he’s got his mobile with him, we can track his location.”
Hope blossomed in Bitsy’s chest, although it wilted a moment later when the voice of Buzz Lightyear rang out from the sofa.
“To infinity … and beyond!”
Her dad’s ringtone. She rummaged under the cushions and found his Samsung lodged in a crevice at the back. Her spirits plummeted further as she swiped at the screen and saw that the device had a biometric lock. She couldn’t even search it for information. “Any other ideas?”
Before Kosh could offer a suggestion, something buzzed under the broken coffee table. It sounded like an enormous bee.
Bitsy stepped closer. The rubble was vibrating. She poked her foot inside the heap and glimpsed a patch of silver. “It’s Quasar!” she realized.
Together, they tossed away the splintered wreckage, freeing Quasar from beneath. The caterpillar’s antennae had been flattened and there was a dazed look in its blue eyes. It hummed on and off like a defective generator as it levitated unsteadily into the air.
“Whoa…” Kosh murmured, his eyes wide.
Goosebumps rippled along the backs of Bitsy’s arms as she watched electricity light up Quasar’s foggy innards.
The air around Quasar smelled fresh and metallic like rainwater. She still didn’t understand how Quasar could exist; everything about it seemed impossible.
Kosh pointed to a dark gash in Quasar’s side. “Looks like it was injured in the fight. How are we going to help it? We can’t exactly call a vet.”
“I don’t know,” Bitsy admitted worriedly. She gently lifted a hand towards Quasar’s glassy skin. Her scalp tingled as her fingertips made contact, and she felt her hair go static like her dad’s had earlier.
Quasar turned to look directly at her. With no obvious nose, all of the magicore’s expression came from its eyes, mouth and antennae. Its lips parted and its cheeks twitched, almost as if it was trying to smile…
And then it spat in Bitsy’s face.
A small projectile hit her on the bridge of her nose and bounced to the floor. “Ouch!” She rubbed the spot where it had struck. “What was that for?”
“Bitsy, look!” Kosh reached down and picked up her dad’s fountain pen. “Quasar must have been keeping this in its mouth.”
Bitsy didn’t understand how Quasar had got hold of the pen. The last time she’d seen it, it had been clutched in her dad’s hand. She took the pen from Kosh and wiped it clean on the bottom of her cardigan. The barrel was made from smooth brown stone, marbled with lightning-bolt
seams of copper. As she turned it over in her hand, the stone glowed where her skin had touched it: red, yellow, purple, green, white and blue. She remembered it had reacted to her dad’s touch, too, except it had only changed to blue. “Perhaps it’s heat sensitive?” she guessed, passing it back to Kosh.
The barrel shimmered the same six colours when he held it. “It looked like your dad used it to conjure Quasar. Sort of like Quasar came out of the pen.”
He returned the pen to Bitsy and she tried aiming it in front of her like she’d seen her dad do. She tightened her grip around it, but no flecks of twinkling dust appeared. She experimented by twisting the top of the pen and pressing the nib against the back of her hand, but nothing happened.
“Something’s wrong,” Kosh said, pointing at Quasar. The magicore was shaking. The fog inside its body had darkened and its electrical sparks sputtered like a dying car engine.
Bitsy stuffed her dad’s pen into her pocket and tried to cradle Quasar in her hands. “What should we do?!”
“It’s made of electricity, so maybe we should connect it to a live wire?” Kosh flapped his arms. “Or, I don’t know, feed it batteries?”
Quasar bobbed forward and wobbled to a stop in front of the glass cabinet containing Matilda Wilder’s awards and souvenirs. Its antennae strained as it attempted to point to something on the bottom shelf.
Bitsy glanced between the cabinet and Quasar, realizing the magicore was trying to tell them something. “What is it, Quasar?”
But she was too late. The final sparks inside Quasar fizzled out and the magicore burst into copper dust. The particles twinkled as they fell through the air, disappearing before they reached the floor.
Kosh’s jaw slackened. “How did…? What even…?”
A lump rose to the back of Bitsy’s throat as she realized Quasar was gone. She might have only known the caterpillar for a few minutes, but she had seen how friendly it was with her dad, like a family pet.
She dropped to her knees in front of the cabinet, determined to understand what Quasar had been trying to communicate. “Quasar used the last of its energy to direct us over here. There’s got to be something important it wanted us to see.”
She examined the bottom shelf of the cabinet. Behind a couple of toppled photo frames, a wooden flute was mounted on a silver tripod. Her dad had told her that her mum had purchased the instrument on a trip to Austria, although Bitsy had never been sure why. Her mum couldn’t play the flute. She stretched her hand towards it, but when she went to pull it out, it wouldn’t budge.
“Found anything?” Kosh asked.
“There’s a flute down here, only it’s stuck.” Bitsy tried wiggling the tripod and the flute in different directions, but it felt like they were glued together to the base of the cabinet. As she repositioned her fingers for a better grip, she pressed several of the flute’s keys … and heard a soft click.
The floor vibrated. Bitsy scrabbled back as a crack appeared down the cabinet’s centre, splitting it in half. The two sides slid soundlessly away to reveal a small, brick-lined space no bigger than a cloakroom. Inside was an ornate chest of drawers, inlaid with ebony and mother-of-pearl.
“A secret room…” Kosh gawped as he stepped inside. “It’s like something out of James Bond.”
Bitsy pushed herself to her feet, struggling to understand how she didn’t already know about this. She pictured the layout of the ground floor of her house. Behind this wall were the kitchen and the hallway, only … somehow this hidden space fitted in between.
As she shuffled over the threshold, the cabinet closed behind her and a ceiling light flickered on. Bitsy spotted a lever on the back of the cabinet and gave it a tug. The cabinet silently rolled apart again. “Well, we know how to get out,” she muttered. “But what is this place? Why would my dad have it here?”
“I don’t know, but Quasar wanted us to find it for a reason.” Kosh opened the drawers in the chest. The first was empty, but the second contained three puzzling objects.
“Are those leaves?” Bitsy lifted out a small, toothed comb made of silver birch wood. It was covered in flaky white bark and had new leaves sprouting along its spine, as if the wood was still alive. “How can this be growing? There’s no light or water in here.”
Kosh inspected another item – an intricately twisted wooden key attached to a long gold chain. It was made of rough, lumpy plant roots covered in emerging shoots and was shaped into a capital letter E. “This is still growing, too. E for Eric – this must belong to your dad.”
Bitsy wondered what the key opened. It was too large for any regular keyhole.
The final item was a brown, teardrop-shaped pendant hanging from a length of black cord. Bitsy’s heart fluttered when she saw it. “This was my mum’s! I’ve seen her wearing it in old photos.” She picked it up and noticed the pendant glowing in different colours under her touch. “It must be made from the same stone as my dad’s pen. Maybe my mum used it to conjure magicores, too…” Her chest stung at the realization that her mum had kept secrets from her as well.
Tucking the pendant into her pocket, Bitsy returned the wooden items to the drawer and continued searching.
The remaining drawers were empty, apart from one at the bottom. Stored inside was a large, old, leather-bound book with discoloured pages. Bitsy removed it from the drawer and placed it on top of the chest. Its brown cover was damaged with scorch marks and water stains, and there were three slashes down the spine that looked worryingly like claw marks. Similar to the comb and key, tiny green shoots poked out of the book’s headband, as if a living plant had trussed together its pages. Embossed in gold letters on the front cover was a single word: MAGICALIA.
“The woman who kidnapped Dad was looking for a book,” Bitsy said, glancing nervously at Kosh. “This could be it.”
Eager to learn more, she hooked her fingers under the cover and heard the pages crackle as she lifted it up. The endpapers were printed with a detailed world map drawn in muted shades of green and blue. Written in ornate script at the top were the words CARTA MAGICORA and a date, 1676. Bitsy had seen antique maps in museums before, but this one was different. Scattered across the oceans and lands were paintings of strange beasts labelled with tiny red text.
“Magicores,” Kosh said, marvelling. He squinted to examine a key in the top left corner of the map. “It has their species name and source emotion written below
them. Didn’t your dad say that magicores are conjured from emotional energy? That might be what a source emotion is – the emotion a species is conjured from.”
There were so many different species; Bitsy didn’t know which to study first. In one scan she saw an enormous, flaming-hooved grudgernaut conjured from anger; a ghostly flabberghast conjured from surprise; and an impish proxiwig conjured from impatience. She pointed to a silver caterpillar floating above Brazil. “This looks just like Quasar – a waywurm conjured from confusion.”
A line appeared between Kosh’s eyebrows. “Quasar did give off really confused vibes. It was zipping around erratically like it was permanently disorientated, and its body was made of fog, which is exactly what your brain feels like when you get confused.”
“You’re right,” Bitsy agreed. “Maybe magicores are similar to their source emotions in some ways? The hamstoceros was a bit like greed – grasping and powerful with an uncontrollable desire to take whatever it wanted.”
Kosh tapped the date at the top of the map. “If this was drawn in 1676, then magicores have been around for about three hundred and fifty years. So, how do we not know about them?”
“My dad might be able to answer that,” Bitsy said hollowly. Knowing he had hidden something this momentous from her was a bitter pill to swallow.
Although she was itching to ask him about everything, she couldn’t help but feel deflated that he’d never revealed any of it before. What else hadn’t he told her?
But the questions would have to come later. First, she had to get him back.
She flicked past the map, to the very beginning of *Magicalia*. A paragraph of printed text filled the first page:
**Note to Reader**
*Magicalia is the name for the kingdom of organisms known as magicores. Although these extraordinary creatures share some of the same powers, each species has its own unique gift. These are grouped into six types and indicated by the magicore’s eye colour.*
- Armourer magicores are red-eyed and have a remarkable physical gift
- Clairvoyant magicores are white-eyed and can influence the minds of others
- Elemental magicores are blue-eyed and have the ability to control a particular force, energy or element
- Metamorph magicores are yellow-eyed and are talented at transformations
- Weaver magicores are green-eyed and can craft remarkable objects
- Hunter magicores are purple-eyed and skilled in seeking particular things
*Readers are cautioned to conjure magicores at their own risk. The publisher shall not be liable for any injury, loss of limb or death arising from any information contained in this book.*
Bitsy glanced worriedly at Kosh before turning the page. A shining, gold-leafed capital A sat at the top of the next sheet. Written below was a list of magicores, organized alphabetically by their source emotion:
agitation
HUFFFLUFF
[Armourer, gamma-level]
The hufffluff is an extremely fidgety magicore with a flat, rectangular body that goes limp when the hufffluff is frightened. Its eyes, ears, nose and mouth are located on its smooth, rose-pink underside, while its back is covered in a layer of wiry grey hair. The hufffluff is a graceful flyer, even whilst carrying extraordinary weight on its back. Due to its restless nature, it never stays in one place for too long.
amazement
Lorple
[Hunter, beta-level]
The nocturnal lorple is a furry beast weighing between thirteen and twenty pounds. It is quiet and slow-moving, with long arms and legs. The lorple has the largest eyes of any species of magicore, and its vision can penetrate materials as dense as lead. Like all hunter species, it is excellent at tracking and has a particular gift for hunting knowledge. Wild lorples have been known to gather on hilltops with beautiful vistas.
amusement
Hix
[Clairvoyant, alpha-level]
The hix is a mischievous and fun-loving magicore, known for its remarkably ticklish hair which can grow up to two feet long. It can weigh anywhere between six and thirteen pounds and is around a handspan wide. It has a spherical body and moves by rolling around at high speed. Once a hix has made someone laugh, it has the power to temporarily persuade them of anything. Its fur varies in colour from sunset orange to shades of red and gold.
“It’s like an encyclopaedia of magicore species,” Bitsy realized. “I wonder what the different levels represent?”
Kosh scratched under his beanie. “Maybe they’ve got to do with how powerful each species is or how difficult they are to conjure? We should look up ‘greed’ to find out more about the hamstoceros. It might tell us something about your dad’s kidnapper.”
Right at that moment, *Magicalia* rustled. A wedge of pages flipped over as if a breeze had lifted them, although Bitsy didn’t feel any shift in the air.
“OK…” Kosh murmured. “Am I imagining it or did *Magicalia* just move by itself?”
Bitsy’s skin prickled as she looked down and saw that the encyclopaedia now lay open on the entry for “greed”. “I think you might be right,” she admitted nervously. “Look what’s written here – it’s as if the book heard what we were saying.”
Feeling equal parts alarmed and amazed, she turned her attention to the text.
**greed**
**GROBBLE**
*[Hunter, gamma-level]*
*Weighing anywhere between sixteen and thirty stone, grobbles resemble giant rodents with stout bodies, round ears and long whiskers. Their thick fur is highly insulating and the horn above their nose is strong enough to pierce steel. They have a special gift for hunting gold and can detect deposits of the element from up to one mile away. Unique amongst hunter-type magicores, grobbles gather information by eating the objects around them. They have the strongest constitution of any species of magicore and have been known to store twice their own body weight in their extraordinarily stretchy cheek pouches.*
Bitsy couldn’t believe what she’d just read. The grobble, née hamstoceros, had been eating objects in her house in order to gather information. It was certainly one way of learning new things, although she didn’t much fancy munching her way around her chemistry classroom in order to get a better understanding of the structure of an atom. She ran her finger across the page, rereading the entry. “If grobbles have a special gift for hunting gold, then maybe this is the book Dad’s kidnapper wanted. There’s gold leaf on some of the pages; she might have been using the grobble to detect it.”
She fetched her notebook from her pocket. If she was going to rescue her dad, she needed to know more about the woman who had kidnapped him: who she was, what she wanted and where she had taken him.
As she started scribbling ideas, Kosh reached into the bottom drawer. “Hey, look. There’s something else in here.”
He pulled out a small brown envelope addressed to Eric Wilder, which had already been torn open along the top. Bitsy put her notebook down, took the envelope from Kosh and slid out a sheet of thick paper from inside. Typed upon it was a short letter with a design at the top showing a galleon within a ring of silver stars:
The European Conservatoire of Conjuring
Chancellor’s Desk
3 January 2024
Dear Mr Wilder,
It is my duty to inform you, as per the terms of the 1889 Statute of Conjuring, that any person aged twelve years or over, with at least one conjuring parent, is required to undergo a cosmodynamics test at their nearest conservatoire of conjuring.
My records indicate that your daughter, Miss Elizabeth Wilder, will turn twelve years old on the
26th of July. Therefore, with your permission, I would like to invite her to attend a cosmodynamics test on the 27th of July. This test will decide whether Elizabeth has an aptitude for conjuring magicores. If the test is positive, she will be invited to enrol at the conservatoire to study conjuring in the summer term.
I have written to Elizabeth’s cosmodian, Miss G. Greynettle of 7 Andromeda Mews, to inform her of our invitation.
Please feel free to contact me should you wish to discuss this matter further.
Yours sincerely,
Chancellor Edith Hershel
Bitsy’s hands trembled as she finished reading. “Kosh, this is about me. I’ve been invited to be tested at some sort of school to see if I can conjure magicores like my dad.”
“Is that what conservatoire means?” he asked, reading over her shoulder. “School?”
She nodded, her gaze fixed on the letter. The last thing she’d been expecting was for any of this to connect to her. Why hadn’t her dad said anything? “The letter’s dated January. My dad received this nearly three months ago…”
“Maybe he wanted to tell you, but something happened and he couldn’t?” Kosh suggested. His eyebrows knitted as he scanned the final paragraph. “Do you know who this other person is? Miss G. Greynettle?”
Bitsy shook her head. *Cosmodian* sounded like a professional title, but she had never seen the word before.
Kosh took out his mobile phone and googled the address. By a stroke of luck, there was only one result. “Andromeda Mews is in Kensington, West London.”
“That’s only a few hours away on public transport,” Bitsy realized. “We’ve got to go check it out. It’s our only lead.”
“All right, but it’s too late to get a train to London now,” Kosh said, noting the time. “We’ll have to leave first thing in the morning.”
Bitsy’s stomach tightened. She didn’t want to wait until tomorrow to continue their investigation. She wanted to start looking for her dad now. She looked Kosh in the eyes. “Fine, let’s go tomorrow. But are you sure you want to come with me? It might be dangerous.”
Kosh gave a determined frown. “I told you, wherever that woman’s taken your dad, we’ll find him together. He’s like family to me, too; I’m not about to let you rescue him without me.” He added quickly, “Besides, it can’t be more dangerous than school dinners and I eat those every day.”
Bitsy laughed. Somehow, even in the most daunting of situations, Kosh could always lift her spirits.
She returned her notebook to her pocket, collected *Magicalia* and the Chancellor’s letter, and left the secret room. Pausing by the pile of grobble vomit in the lounge, Bitsy picked up her dad’s dressing gown. He had been wearing it earlier that morning as he made her breakfast. She pictured him in the kitchen, pouring her a glass of orange juice with one hand while stuffing a slice of bread into the toaster with the other. Despite the gown being covered in grobble-slobber, Bitsy clutched it tightly to her chest. It still smelled like him, of pencil shavings and aftershave.
*Hold on, Dad. We’re coming.*
It was early morning, but the parade of designer cafés and high-end boutiques on Kensington High Street was already buzzing with activity. Staff were busy laying tables or vacuuming floors while locals sauntered by, walking their dogs. The air hummed with the drone of traffic and the clang of distant building works.
As Bitsy and Kosh walked past shop windows, Bitsy tried to push down her frustration at all the unanswered questions whirring through her head. She and Kosh had spent the train journey searching through *Magicalia* and asking it questions – where was her dad? Who was Miss G. Greynettle? What did *cosmodian* mean? But the book had remained still. Either it was no longer listening to them or it didn’t have the answers they needed.
Kosh glanced at his phone and then pointed towards a row of leafy chestnut trees in the distance. “That’s the edge of Kensington Gardens. We need to take a right opposite there to get to Andromeda Mews. It’s twelve minutes’ walk away.”
The journey so far had been relatively straightforward: a bus from Oddingham to the local train station, a high-speed train to London and then an underground train to High Street Kensington. Bitsy had taken her dad’s wallet and paid for everything using his debit card, which, in the circumstances, she didn’t think he’d mind. She pushed her hand inside her satchel to reassure herself that *Magicalia* was still there beside her dad’s fountain pen and her mum’s teardrop pendant. The letter from Chancellor Hershel she’d tucked in her coat pocket next to her notebook, while the wooden comb and key were hidden in the secret room back home. “Assuming we find Miss G. Greynettle at this address, I don’t think we should tell her about *Magicalia* or the other items we’ve discovered,” Bitsy said. “At least, not until we know we can trust her.”
“Copy that,” Kosh replied.
As Bitsy’s fingers grazed her dad’s fountain pen, she worried whether he might need it, wherever he was. She hoped not.
They turned off the main road and continued along a few side streets until they came to a cobbled lane flanked
by modest terraced houses. It looked unassuming and quiet – not the kind of place you’d expect anyone involved with magicores to be living.
“It’s this one.” Kosh stopped outside a small, shabby-looking building with cracked pebble-dashed walls. Several of the roof tiles were missing and a broken section of drainpipe had been repaired with string and tape. The white front door had a brass knocker shaped like a shield with a lily in the centre.
As Bitsy approached the door, she took a deep breath and tried to focus. This was their first real opportunity to learn why her dad had been kidnapped and how they might rescue him. She reached up and banged the knocker. After a few seconds, a shadow moved behind the glass.
“One moment!” called a chirpy voice.
Bitsy heard several clicks and scrapes that sounded like multiple locks being undone. There was a creak and then the door opened onto a stocky, olive-skinned woman in a pinafore dress and long-sleeved blouse.
“Yes?” she asked, smiling. She had twinkly hazel eyes and an abundance of silver waves that were fixed in a wobbly pile on top of her head with what appeared to be a chopstick. Deep wrinkles extended around her mouth and eyes.
Bitsy blinked. “Are you Miss G. Greynettle?”
“That’s right.” Miss Greynettle arched an eyebrow. “And who might you two be?”
Trying to hold her nerve, Bitsy fetched the letter from Chancellor Hershel. “My name’s Bitsy and this is my friend, Kosh. We got your address from this letter. It says you’re my … cosmodian?”
Miss Greynettle’s mouth shrank to a small “o”. “Elizabeth Wilder? Does your father know you’re here?”
“No,” Bitsy said, relieved that Miss Greynettle at least knew who she was. “That’s why we’ve come. He’s been kidnapped and we need your help.”
“Kidnapped?” Miss Greynettle swayed. “You’d better come inside. Quickly.”
She ushered them into a draughty hall with threadbare carpets and shut the door behind them. The air inside smelled clean and fresh, like cotton sheets. “Your father didn’t say you use the name Bitsy,” she muttered, signalling for them both to remove their shoes. “You can call me Giverna.”
As Bitsy kicked off her trainers, she spotted a rucksack stuffed with medicine vials, brown bottles and bandages at the foot of the stairs. “What is a cosmodian?” she asked, wondering if Giverna might be some type of doctor.
“Your father still hasn’t told you?” Giverna tutted as she placed their trainers on a rack by the door. “A cosmodian is a conjuring mentor. Young conjurors-in-training are
called *initiates*. They hone their skills at conservatoires like the one where you were invited to attend a cosmodynamics test.”
She spoke so breezily that it was as if she was talking about something completely normal. Bitsy had to shake off her shock in order to concentrate.
“A negative test result indicates the participant is cosmotypical and unable to conjure magicores, but a positive test result indicates the participant is cosmodynamic and can become an initiate,” Giverna explained. “If successful, every initiate has a cosmodian with whom they can talk about their training. Your parents asked me when you were born, should your cosmodynamics test be positive, if I would be your cosmodian. I was one of their tutors at the European Conservatoire, but I retired a few months ago.”
Bitsy shared an incredulous glance with Kosh as Giverna led them along a narrow corridor, towards the back of the house. She’d always known her parents had met at school, only she’d assumed it was the type of school where you studied maths and English, not magicores and conjuring.
They entered a bright room with floor-to-ceiling windows along one side that overlooked a well-tended vegetable garden. A network of tarnished copper pipes scaled the walls, passing through cupboards
and feeding into various beakers, flasks and test tubes before channelling into a wide porcelain sink. Given the presence of a fridge and cooking stove, Bitsy couldn’t tell if Giverna used the room as a kitchen or a laboratory. In the middle of the ceiling, a dusty stained-glass chandelier cast muted rainbow splinters onto a wooden dining table below. Giverna pulled out a couple of chairs on one side. “First things first, have either of you had breakfast? I can make you some toast.”
Bitsy had tried to eat earlier but her stomach felt like a cement mixer. “Thanks, but I’m good,” she said, pulling out her notebook as she took a seat.
“I would love some toast,” Kosh replied, happily. Although he’d already munched a large bowl of cereal and an apple before they’d left, Bitsy was pleased. Kosh’s mood was directly linked to his stomach and she did not want a hangry partner on this rescue mission.
“Excellent. You can’t solve problems on an empty stomach.” Giverna slotted two slices of bread into a toaster and collected three glass mugs out of a cupboard. Like the other cookware on display, the mugs looked like they might have come from a laboratory. They had twisted glass stems and strange markings up the side, like on a measuring flask. “I’ll make some chamomile tea, too.”
Bitsy skimmed the questions in her notebook, wondering which to ask first. “Do you know why
Dad has never told me anything about magicores or conjuring before?”
A sad look crossed Giverna’s face as she carried a kettle to the sink. “After your mother passed away, your father turned his back on the conjuring world. He moved to your village to try to forget it all. When I received my copy of that letter, he told me he didn’t want you to take a cosmodynamics test and that would be the end of it.”
So that’s why we moved to Oddingham… Bitsy fell back in her chair. She wished her dad could have told her this himself.
“You said he’d been kidnapped?” Giverna prompted, turning on the tap and holding the kettle underneath.
“By a woman with a grobble,” Kosh explained. “It happened yesterday evening.”
“A grobble?” Giverna recoiled. “What did the woman look like?”
Bitsy turned back a few pages to consult her notes. “Tall with pale skin and dark hair. She had a dagger-shaped earring in one ear.”
The kettle wobbled in Giverna’s hand, sending water sloshing into the sink. A scowl deepened on her brow. “Melasina Spires,” she growled. “The leader of the Hunter Guild.”
“What’s the Hunter Guild?” Bitsy asked. She instantly didn’t like the sound of it.
“To answer that, I need to tell you a story. It’s one that initiates usually hear on their first day of training.” Giverna returned the kettle to the side and switched it on. She reached into the pocket of her dress and pulled out a white cotton handkerchief printed with tiny multicoloured polka dots. As she spread it flat on the table in front of Bitsy and Kosh, the polka dots started moving.
Bitsy leaned closer, staring. At first, the polka dots whizzed around chaotically, bumping into each other like static-charged polystyrene balls. But then they moved with purpose, shifting into a pattern of coloured pixels.
“This is a thinkerchief,” Giverna explained, keeping one hand on one corner of the fabric. “They’re made by thimbulls – weaver-type magicores conjured from sympathy. Conjurers use them to display what they’re thinking.”
As Giverna spoke, the pixels on the thinkerchief resolved into the image of a shaggy-haired yak with six horns sprouting from its temple. Threads of white yarn were looped between its horns like the string of a cat’s cradle. It seemed to be spinning another small white handkerchief.
A thimbull, Bitsy guessed, scribbling notes. It looked both cuddly and terrifying. The pixels shifted and the thimbull was replaced by a fireball streaking through a dark night sky.
“Long ago,” Giverna went on, “a meteorite landed on a remote island in the Atlantic Ocean. It was found in 1656 by the six surviving crew members of a shipwrecked vessel. Exposure to powerful cosmic matter at the landing site affected the crew on a cellular level, turning them cosmodynamic. They named the meteorite farthingstone and discovered that they could use it to conjure powerful beasts to do their bidding. Magicores.”
The image on the thinkerchief changed. It showed the bottom of a sandy crater where six men with straggly hair and ragged, old-fashioned clothes were gathered around a mammoth boulder of metallic rock. One wore a once-fine coat and bicorn hat; one carried a bag of navigational equipment and another wore a sleeveless shirt with a knife through his belt.
“The crew were from all over the world and had different beliefs and values. Gradually, they learned that each of them was able to conjure a different type of magicore, according to their personality. The ship’s creative carpenter could conjure weaver-types; the kindly surgeon could conjure clairvoyant-types; the brave gunner could conjure armourer-types and so on. After the crew escaped the island, they brought the farthingstone to England, where they split the meteorite into six pieces. They each wanted to use magicores for a different purpose, so they founded six different guilds
of conjurors. They pledged to keep secret what they had learned, and to work together in an alliance to use their gifts to benefit humanity from the shadows, hidden from the rest of the world.”
Bitsy’s heart raced as Giverna’s story swirled through her head, like whispers from the past. The old map at the beginning of *Magicalia* made sense now, although Bitsy still couldn’t believe this had really happened hundreds of years ago and yet nobody knew about it. She leaned closer as the pictures on the thinkerchief altered to show six coats of arms. They were shaped like shields with different objects inside each one.
“These represent the different guilds?” Kosh guessed.
“That’s right. Bitsy’s father is a member of the Elemental Guild.” Giverna pointed to a blue shield with a telescope in the centre. “Elementals are curious, bookish and experimental. They use their magicores to make progress in science and technology, exploring new fields of discovery and learning more about the universe. Eric used to work in one of the Elemental Guild’s laboratories when Bitsy was small.”
Bitsy’s forehead tightened. She didn’t remember that, but she did recognize her dad in Giverna’s description. He was always reading and asking questions; he loved travelling to new places and his experimental cooking was infamous.
Giverna tapped a green shield featuring a harp. “Bitsy’s mother belonged to the Weaver Guild – the creatives of the conjuring world. Weavers work as writers, musicians, artists and craftspeople, using their magicores to help weave extraordinary structures or objects, like the thinkerchief. Matilda had a particularly close bond with her mudtail, a weaver-type species that crafts items from organic materials such as paper and wood.”
Wood… Bitsy glanced meaningfully at Kosh, thinking of the key and comb they’d found in the secret room. Wondering if her mum’s mudtail had woven them, she suddenly wanted to examine them again. Perhaps *Magicalia* had been woven by her mum’s mudtail, too? That might explain why the book seemed to understand what they were saying…
“Which coat of arms represents the Hunter Guild?” Kosh asked nervously.
Giverna’s expression soured as she tapped a purple shield with a crown inside. “The Hunter Guild was founded by the ship’s greedy and arrogant captain. As time passed, he grew hungry for power and tried to steal a dangerous artifact from the Alliance. As a result, the Hunter Guild was expelled from the Alliance and became an organization of outlaws. For hundreds of years, hunters have attacked us, stolen from us and spied on us. When the Alliance was nearly destroyed by dark forces,
the Hunter Guild refused to come to our aid. They are cold-blooded, deceitful and ruthless.”
A vice tightened around Bitsy’s chest as she realized her dad was being held captive by a bunch of thugs. Whatever they wanted with *Magicalia*, it couldn’t be good.
The toast popped up with a clang. As Giverna went to collect it, she let go of her thinkerchief and the fabric went blank.
“The woman that took my dad – Melasina Spires – she asked him for a book,” Bitsy ventured, hoping Giverna might be able to shed some light on the matter.
Concern flickered through Giverna’s hazel eyes. Bitsy noticed her hands tremble as she placed the toast on a plate and spread it with butter and jam. “And how did he respond?”
“He wouldn’t give her anything,” Kosh replied. “He told her to leave, then he conjured a waywurm and there was a big fight. He got knocked unconscious in the battle.”
Bitsy watched Giverna’s face carefully. She had the distinct impression that Giverna knew more than she was letting on. Unfortunately, Bitsy couldn’t press Giverna without revealing that they had found *Magicalia*, and Bitsy wasn’t sure they could trust her with that information yet.
“It’s imperative we find your father as quickly as we can,” Giverna said, sliding the plate of toast in front of Kosh. “Melasina will probably want to interrogate him
about this book, and he might not be able to stay silent for long. If he’s been hurt, he might need urgent medical attention.”
Bitsy swallowed, hoping he was going to be all right. She wondered if he’d woken up already. “Do you have any idea where Melasina is holding him?”
Giverna touched her thinkerchief and the image of a vast, festering swamp appeared. A collection of military style buildings was half-buried in the bog. “The last I heard, the Hunter Guild was operating out of a series of secret underground barracks. I know there’s been a recent spate of conservatoire thefts attributed to hunters. Maybe those incidents are connected to your father’s kidnapping, but I need to talk to the Alliance. They’ll have more information than me.”
The vice loosened a little around Bitsy’s ribs. It felt good to have a plan of action. She smiled hopefully at Kosh, who was already munching on his jam-smeared toast.
“I’ll contact them now. It won’t take a moment.” Giverna tugged the chopstick out of her updo, letting her long silver waves fall to her shoulders. Then she aimed it at Kosh’s toast.
Kosh curled an arm protectively around his plate, still chewing. “What-er-yoo-doing?”
It was only then that Bitsy noticed flashes of copper reflecting in the chopstick. As it glowed white under
Giverna’s fingers, she realized it had to be made of the same stone as her dad’s fountain pen and her mum’s teardrop pendant.
Giverna winked. “Conjurors have magicore-means of getting everything done. Watch and learn.”
Londoner JENNIFER BELL worked as a children’s bookseller at a world-famous bookshop before becoming an author. Her debut novel, *The Uncommoners: The Crooked Sixpence*, was an international bestseller. She is also the author of *Agents of the Wild*, an adventure series for younger readers; *Wonderscape*, which was selected as a Waterstones Children’s Book of the Month and is inspired by some of her favourite heroes from history and her love of gaming; and *Legendarium*, which celebrates incredible legends from around the world. *Magicalia: Race of Wonders* is the first in an exciting new fantasy series about incredible creatures called magicores that are conjured from different emotions.
jennifer-bell-author.com
𝕏 @JenRoseBell
instagram @JenBellAuthor
𝕏 @WalkerBooksUK
Discover the world of Magicalia at
MagicaliaBooks.com
#MAGICALIA
When her dad is kidnapped, Bitsy and best friend Kosh are swept into a secret world of ancient meteorites and strange beasts called magicores. With the help of a powerful book called *Magicalia*, the friends must follow a trail of clues in a race to rescue Bitsy’s dad from a mysterious villain...
“Spectacular! A world-beating feat of imagination.” Sinéad O’Hart
“A bold and exciting new world of magic.” Amy Sparkes |
Characterization of particle deposition in a lung model using an individual path
A. F. Tena\textsuperscript{1}, P. Casan\textsuperscript{1}, J. Fernández\textsuperscript{2a}, C. Ferrera\textsuperscript{1}, A. Marcos\textsuperscript{2}
\textsuperscript{1}Instituto Nacional de Silicosis. Dr Bellmunt s/n. 33006 Oviedo, Spain.
\textsuperscript{2}Universidad de Extremadura. Avda de Elvas s/n, 06006 Badajoz, Spain.
Abstract. Suspended particles can cause a wide range of chronic respiratory illnesses, such as asthma and chronic obstructive pulmonary disease, and can worsen heart conditions and other disorders. Knowing how particles deposit in realistic models of the human respiratory system is fundamental to preventing these diseases. The main objective of this work is to study the lung deposition of inhaled particles through a numerical model that uses a UDF (User Defined Function) to impose the boundary conditions at the truncated airways. For each generation, this UDF maps the velocity profile computed in the open flow path onto the symmetric truncated outlet. The flow rates tested were 10, 30 and 60 l/min, with particle sizes ranging between 0.1 $\mu m$ and 20 $\mu m$.
1 Introduction
One of the main health problems for urban populations is exposure to air pollution. Suspended particles (made up of soot, smoke, dust and liquid droplets) can cause a wide range of chronic respiratory illnesses, such as asthma and chronic obstructive pulmonary disease, and can worsen heart conditions and other disorders. Knowing how particles deposit in realistic models of the human respiratory system is fundamental to preventing these diseases, and this is the objective of this work.
Following the model developed by Weibel [1], a 3D numerical model of the bronchial tree has been developed, from the trachea down to the sixteenth-generation bronchioles. It was discretized with a mesh of about one million cells. The Navier-Stokes equations were solved with a commercial CFD finite volume code (Ansys Fluent). Other authors [2, 3] have developed similar models, also using an individual path. In order to obtain reasonable results from a truncated model, it is necessary to apply physiologically realistic boundary conditions at the truncated outlets.
This work is part of a broader project that aims to model the airflow in the lung with all its characteristics: unsteady flow, inhalation and exhalation of particles, and common diseases (asthma, bronchitis, etc.). A first step [4] was the construction and simulation of a particular 7-generation model, using a single path to study the unsteady flow that occurs during a spirometry test. A second step [5] was to study the lung deposition of inhaled particles through a numerical model. A third step, the main objective of this work, is to study the lung deposition of inhaled particles through a numerical model that uses a UDF (User Defined Function) to impose the boundary conditions at the truncated airways. For each generation, this UDF maps the velocity profile computed in the open flow path onto the symmetric truncated outlet. The flow rates tested were 10, 30 and 60 l/min, which correspond to different respiratory rhythms. The particle sizes ranged between 0.1 $\mu m$ and 20 $\mu m$; particles were introduced by means of a surface-type injection, specifying the particle properties and velocity.
2 Methodology
The numerical model of the nasal cavity and nasopharynx was obtained from a 30-year-old woman by means of CT images [6]. The throat reproduces the model described in [7]. The geometry follows the models developed by Weibel [1] and Kitaoka et al. [8]. The 3D numerical model was built with the commercial code Ansys Gambit\textsuperscript{®} [9].
The trachea has a length of 12 cm and a diameter of 1.8 cm. The bifurcation angle was set to 35° according to the guidelines given in [1, 8]. The geometry of the bifurcations in the bronchi is created by a similar procedure at all generations. The diameter $d$ and the length $l$, deduced from the relations proposed by Kitaoka (generations 1, 2 and 3) and Weibel (remaining generations), are:
2a: email@example.com
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 2.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Article available at http://www.epj-conference.org or http://dx.doi.org/10.1051/epjconf/20134501079
\[
\begin{cases}
d = 0.018 \exp(-0.388 \ n) & \text{if } n \leq 3 \\
d = 0.013 \exp\left[-(0.2929 - 0.00624 \ n) \ n\right] & \text{if } n > 3
\end{cases}
\]
\[
\begin{cases}
l = 0.12 \exp(-0.92 \ n) & \text{if } n \leq 3 \\
l = 0.025 \exp(-0.17 \ n) & \text{if } n > 3
\end{cases}
\]
Half of the lung is presented in Figure 1. The numerical simulation of this complete morphology is simply not feasible, because the lung down to generation 16 has 65,536 terminal branches (Table 1).
**Fig 1.** Complete morphology of the lung
**Table 1.** Main parameters of the branches
| n | branches | diameter (mm) | length (mm) | area per branch (mm²) | total area (mm²) |
|---|----------|---------------|-------------|-----------------------|------------------|
| 0 | 1 | 18.00 | 120.00 | 254.47 | 254 |
| 1 | 2 | 12.21 | 47.82 | 117.12 | 234 |
| 2 | 4 | 8.28 | 24.85 | 53.90 | 216 |
| 3 | 8 | 5.62 | 16.86 | 24.81 | 198 |
| 4 | 16 | 4.45 | 12.67 | 15.56 | 249 |
| 5 | 32 | 3.51 | 10.69 | 9.69 | 310 |
| 6 | 64 | 2.81 | 9.01 | 6.19 | 396 |
| 7 | 128 | 2.27 | 7.61 | 4.05 | 519 |
| 8 | 256 | 1.86 | 6.42 | 2.72 | 696 |
| 9 | 512 | 1.54 | 5.41 | 1.87 | 958 |
| 10| 1024 | 1.30 | 4.57 | 1.32 | 1353 |
| 11| 2048 | 1.10 | 3.85 | 0.96 | 1957 |
| 12| 4096 | 0.95 | 3.25 | 0.71 | 2903 |
| 13| 8192 | 0.83 | 2.74 | 0.54 | 4416 |
| 14| 16384 | 0.73 | 2.31 | 0.42 | 6886 |
| 15| 32768 | 0.65 | 1.95 | 0.34 | 11010 |
| 16| 65536 | 0.59 | 1.65 | 0.28 | 18048 |
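As a sanity check, the rows of Table 1 can be regenerated directly from the Kitaoka/Weibel relations above. The following Python sketch is not part of the original work; it simply reproduces the branch count, dimensions and cross-sectional areas per generation:

```python
import math

def diameter_m(n):
    """Airway diameter (m) at generation n, per the relations above."""
    if n <= 3:
        return 0.018 * math.exp(-0.388 * n)
    return 0.013 * math.exp(-(0.2929 - 0.00624 * n) * n)

def length_m(n):
    """Airway length (m) at generation n."""
    if n <= 3:
        return 0.12 * math.exp(-0.92 * n)
    return 0.025 * math.exp(-0.17 * n)

# Regenerate the columns of Table 1
for n in range(17):
    branches = 2 ** n
    d_mm = diameter_m(n) * 1e3
    l_mm = length_m(n) * 1e3
    area_branch = math.pi / 4 * d_mm ** 2   # mm^2 per branch
    area_total = branches * area_branch     # mm^2, whole generation
    print(f"{n:2d} {branches:6d} {d_mm:6.2f} {l_mm:7.2f} "
          f"{area_branch:7.2f} {area_total:8.0f}")
```

The printed values match Table 1 (e.g. generation 1: two branches of 12.21 mm diameter; generation 16: 65,536 branches of 0.59 mm diameter).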
Figure 2 shows a global image of the model. The complete morphology of the lung can be generated from this model by imposing symmetry at each of the branches.
**Fig 2.** Numerical model geometry
### 3 Numerical model
A boundary layer mesh was built before meshing the volumes in order to better resolve the boundary layer in the numerical calculations. The lung was meshed with tetrahedral cells due to their better adaptation to complex geometries, reducing the cell size while descending from the proximal (larger) to the distal (smaller) generations. The size of the tetrahedra was consistent with the size of the boundary layer cells. The volume of the cells ranges between $2.96 \times 10^{-12}$ and $2.01 \times 10^{-10}$ m$^3$. The maximum equiangle skew was restricted to 0.6 for 97.60% of the cells in the mesh. Figure 3 shows a detail of the mesh generated.
The total number of cells used to begin the simulations was about $10^6$, though meshes of other sizes ($2 \times 10^6$ and $4 \times 10^6$ cells) were generated in order to investigate the mesh dependence of the numerical predictions. As can be seen in Figure 4, the variation in the outlet static pressure across the different mesh sizes is not very significant: comparing the 1,000,000- and 2,000,000-cell meshes with the 4,000,000-cell mesh, the relative deviations are 1.77% and 1.04%, respectively.
The numerical simulations were performed with the code Ansys Fluent® [10]. This code was used to solve the full steady 3D Reynolds-averaged Navier-Stokes equations by the finite volume method. The fluid used in the calculations was air. The velocities vary between 0.65 and 3.93 m/s at the trachea, so the Reynolds number is between 840 and 5050. The flow was considered incompressible and turbulent. To address both laminar and turbulent flow conditions, the turbulence closure used was the SST k-omega model with the transitional flows option, which enables a low-Reynolds-number correction to the turbulent viscosity. This model provides a good approximation to transitional flows because the value of $\omega$ does not reach zero as the laminar flow limit is approached. Furthermore, the turbulence is simulated all the way down to the viscous sublayer, avoiding the use of standard wall functions, which are inaccurate for transitional flows. The pressure-velocity coupling was established by means of the SIMPLE algorithm. Second-order upwind discretizations were used for the convection terms and central difference discretizations for the diffusion terms. The $y^+$ values at all wall boundaries were maintained at approximately 2 or less. This model had already been tested in the first step of this broader work [4].
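The Reynolds-number range quoted above can be recovered from the tracheal geometry and the flow rates in Section 4. A minimal sketch, assuming a kinematic viscosity of warm air of about $1.4 \times 10^{-5}$ m²/s (a value not stated in the paper):

```python
import math

NU_AIR = 1.4e-5      # m^2/s, assumed kinematic viscosity of warm air
D_TRACHEA = 0.018    # m, tracheal diameter from Section 2
A_TRACHEA = math.pi / 4 * D_TRACHEA ** 2   # m^2, tracheal cross-section

def tracheal_reynolds(flow_lpm):
    """Reynolds number in the trachea for a given flow rate in l/min."""
    q = flow_lpm / 1000 / 60          # l/min -> m^3/s
    velocity = q / A_TRACHEA          # mean velocity, m/s
    return velocity * D_TRACHEA / NU_AIR

for q in (10, 30, 60):
    v = q / 1000 / 60 / A_TRACHEA
    print(f"{q:2d} l/min -> V = {v:.2f} m/s, Re = {tracheal_reynolds(q):.0f}")
```

This yields roughly 0.65 m/s and Re ≈ 840 at 10 l/min, and 3.93 m/s and Re ≈ 5050 at 60 l/min, consistent with the values in the text.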
A specific volume flow rate at the nose and a constant gauge pressure at the lowest generation were imposed as the boundary conditions. An additional user-defined function was used to impose a symmetric operation of the two branches at each bronchus. A detailed description of this UDF (which is about 400 lines long) is beyond the scope of this article. The UDF obtains the velocity profile at each open branch from the calculations and prescribes the same profile at the corresponding truncated branch. This procedure is repeated iteratively until convergence of the flow field is achieved. Convergence was accepted with a residual criterion of $10^{-5}$ for continuity and for each velocity component of the momentum equation. Convergence required about 1,100 iterations and approximately 15 min of CPU time on a cluster with 8 cores.
Figure 5 shows the effect of this UDF. On the left, a bifurcation in normal conditions, with both branches uncut, shows the normal velocity field profile. In the middle, the left branch is truncated and the velocity profile of the open branch is used as the boundary condition; on the right, the same truncated bifurcation is shown with a uniform flow-rate condition instead.
The particle trajectory equation can either be solved with the momentum and energy equations for the continuum flow (coupled) or after the momentum and energy equations have converged (uncoupled). The coupled option allows particles to interact with the flow fluid and affect the flow solution. In this case, the uncoupled option was chosen.
Once the static simulation finished, the Discrete Phase Model (DPM) was switched on to predict the trajectory.
of discrete phase particles. To study particle deposition, the Lagrangian approach was used; particle trajectories were calculated within the steady flow fields of interest as a post processing step. Forces on the particles of interest include drag, pressure gradient, gravity, lift, and Brownian motion. To model the effects of turbulent fluctuations on particle motion, a random walk method was employed. The tracking parameters used were 50,000 for the “maximum number steps” and 5 for the “step length factor”.
Particles were introduced by means of a surface-type injection, specifying the particle properties and velocity. Robinson et al. [11] found that 50,000 particles are necessary to minimize random variation in the deposition efficiency predictions due to the randomness of the particle position profile.
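A quick way to see why the larger particles in this size range deposit preferentially by inertial impaction is the Stokes number, the ratio of the particle relaxation time to the flow time scale. The following sketch is illustrative only and is not part of the paper's methodology; it assumes an air dynamic viscosity of $1.8 \times 10^{-5}$ Pa·s and the tracheal conditions at 60 l/min:

```python
def stokes_number(d_p, velocity, d_airway, rho_p=1000.0, mu=1.8e-5):
    """Stokes number St = rho_p * d_p^2 * U / (18 * mu * D).

    d_p: particle diameter (m); velocity: mean air velocity (m/s);
    d_airway: airway diameter (m); rho_p: particle density (kg/m^3,
    1000 kg/m^3 as in Section 4); mu: air dynamic viscosity (assumed).
    """
    return rho_p * d_p ** 2 * velocity / (18 * mu * d_airway)

# Tracheal conditions at 60 l/min: V ~ 3.93 m/s, D = 18 mm
for d_um in (0.1, 1, 5, 20):
    st = stokes_number(d_um * 1e-6, 3.93, 0.018)
    print(f"d_p = {d_um:5.1f} um -> St = {st:.2e}")
```

Because St scales with the square of the particle diameter, a 20 μm particle has a Stokes number 400 times larger than a 1 μm particle, so it cannot follow the sharp turn at the larynx and impacts the wall, in line with the deposition pattern reported below.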
Deposition was determined by counting the particles with a "trapped" fate, which occurs when a particle's centre of mass touches the wall. Fluent reports the number of incomplete, aborted, or untrackable particles; these numbers could be minimized by adjusting various input parameters.
4 Results
The flow rates tested were 10, 30, and 60 l/min, which are equivalent to different respiratory rhythms. The seeding conditions of the particles were:
- Inert material density: 1,000 kg/m³.
- Particle size: 0.1 μm, 0.5 μm, 1 μm, 2 μm, 5 μm, 10 μm and 20 μm.
- Velocity: the same as the air.
- Density: 0.5%.
- Number of injected particles: 50,058.

**Fig 6.** Relationship between particle size and lung deposition. Numerical results.
The regional deposition of particles can be quantified in terms of the deposition fraction (DF), defined as the mass ratio of particles deposited in a specific region to particles entering the lung. Figure 6 shows the ratio of the particles trapped in the first seven levels to those entering the lung. These results agree with Dolovich [12], except in the 0.1–2 $\mu m$ range, where the numerical values are greater than those obtained experimentally.
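Computed from the DPM output, the deposition fraction for a region is a simple ratio of masses. A minimal post-processing sketch (the region names and masses below are made-up illustrative values, not results from the paper):

```python
def deposition_fraction(trapped_mass, entering_mass):
    """DF for one region: mass of particles deposited in the region
    divided by the mass of particles entering the lung."""
    if entering_mass <= 0:
        raise ValueError("entering mass must be positive")
    return trapped_mass / entering_mass

# Hypothetical trapped masses (kg) per region for one illustrative run
regions = {"nose": 2.1e-9, "larynx-trachea": 0.9e-9, "levels 1-7": 0.4e-9}
entering = 5.0e-9  # kg, total mass entering the lung
for name, m in regions.items():
    print(f"{name:15s} DF = {deposition_fraction(m, entering):.2f}")
```

Summing the regional DFs and subtracting from 1 gives the fraction of particles that escape through the truncated outlets.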
Figures 7 and 8 show the concentration of particles (kg/m³) deposited on the duct walls for a flow rate of 60 l/min and a particle size of 5 μm. Red indicates a high concentration of deposited particles on the wall, and blue indicates the absence of deposited particles.

**Fig 7.** Particle concentration (kg/m³).

**Fig 8.** Particle concentration (kg/m³).
As can be seen, most of the particles are retained in the nose and at the junction of the larynx and trachea, where the epiglottis is located.
The rest of the particles travel through the lung. The UDF applied to the truncated branches allows particles to exit, so the model behaves as if all branches were present. The UDF therefore fulfils its intended task, making it possible to simulate the lung by means of an individual path.
5 Conclusions
This paper has explored a general methodology to simulate a model of a human lung. A complete and realistic model of the lower conductive zone of the lung (generations 0 to 16) has been developed, which can be simulated within reasonable computational times. The operation of the truncated airways is included in the simulations by means of a user-defined function, which 1) obtains the velocity profile at each 'active' (open) branch and 2) prescribes this profile at the corresponding truncated branch. This made it possible to simulate the operation of the truncated branches at each bronchiole.
The distribution of particles in the lung airways depends on their size: small particles are distributed more uniformly than bigger particles, which follow the mean flow. The main objective of this work, to study particle deposition from the mouth down to generation 16 using a mixture of particles of different sizes, has been achieved. Because of the high number of branches (131,072), it is necessary to work with a single pathway, so the boundary conditions applied at the truncated branches are very important. It can be concluded that the numerical model presented in this paper, together with the user-defined function that accounts for the operation of the truncated branches, can be satisfactorily used to simulate the real operation of a human lung over the entire breathing cycle. The model provides a realistic description of the operation of the lung while avoiding excessive computational costs.
Our future efforts will focus on the simulation of several pulmonary diseases (bronchitis, emphysema, etc.).
Acknowledgements
The authors gratefully acknowledge the financial support provided by Gobierno de Extremadura and FEDER under project GR10047 and also by Ministerio de Economía y Competitividad under project DPI 2010-21103-C04-04.
References
1. E R Weibel, Morphometry of the human lung, Springer-Verlag (1963)
2. G Tian, P W Longest, G Su and M Hindle. Characterization of Respiratory Drug Delivery with Enhanced Condensational Growth using an Individual Path Model of the Entire Tracheobronchial Airways, Annals of Biomedical Engineering, 2011, Volume 39, Number 3, Pages 1136–1153.
3. D K Walters and W H Luke. A Method for Three-Dimensional Navier–Stokes Simulations of Large-Scale Regions of the Human Lung Airway, J. Fluids Eng. 132, 051101 (2010).
4. A F Tena, P Casan, A Marcos, R Barrio, E Blanco. Analysis of the fluid dynamic characteristics of obstructive pulmonary diseases using a three-dimensional CFD model of the upper conductive zone of the lung airways. Proceedings of SIMBIO 2011, Brussels, Belgium, 2011.
5. A F Tena, P Casan, J Fernández, A Marcos, R Barrio. Numerical simulation of nanoparticle deposition using a three-dimensional model of lung airways. Conference on Modelling Fluid Flow (CMFF'12), Budapest, Hungary, 2012.
6. P Castro-Ruiz, F Castro-Ruiz, A Costas-López, C Cenjor-Español. Computational fluid dynamics simulations of the airflow in the human nasal cavity. Acta Otorrinolaringol Esp 2005; 56: 403–410.
7. M Brouns, S T Jayarajul, C Lacor, J De Mey, M Noppen, W Vincken, and S Verbanck. Tracheal stenosis: a flow dynamics study. Journal of Applied Physiology March 2007 vol. 102 no. 3 1178–1184.
8. H Kitaoka, R Takaki, B Suki. A three-dimensional model of the human airway tree. J. Applied Physiology. 1999; 87: 2207–2217.
9. Gambit version 2.4.6, 2006, ©ANSYS Inc.
10. Fluent version 6.3.26, 2006, ©ANSYS Inc.
11. Robinson, R. J., Oldham, M. J., Clinkenbeard, R. E., and Rai, P. 2006. Experimental and numerical analysis of a 7 generation human replica tracheobronchial model. Ann. Biomed. Eng. 34(3):373–383.
12. Dolovich MB, Newhouse MT. Aerosols. Generation, methods of administration, and therapeutic applications in asthma. In Allergy. Principles and practice, 4th edn, eds Middleton E Jr, Reed CE, Ellis EF, Adkinson NF Jr, Yunginger JW, Busse WW. St Louis: Mosby Year Book, Inc., 1993; 712–739. |
Direction of flagellum beat propagation is controlled by proximal/distal outer dynein arm asymmetry
Beatrice Freya Lucy Edwards\textsuperscript{1,*}, Richard John Wheeler\textsuperscript{1,*,#}, Amy Rachel Barker\textsuperscript{1,*}, Flávia Fernandes Moreira-Leite\textsuperscript{1}, Keith Gull\textsuperscript{1}, Jack Daniel Sunter\textsuperscript{2,#}
\textsuperscript{1}Sir William Dunn School of Pathology, University of Oxford, South Parks Road, Oxford, OX1 3RE, UK
\textsuperscript{2}Department of Biological and Medical Sciences, Oxford Brookes University, Gipsy Lane, Oxford, OX3 0BP, UK
* Equal contribution
# Corresponding authors: email@example.com, firstname.lastname@example.org
Keywords: flagellum, motility, outer dynein arm, intraflagellar transport, trypanosomatid
Abstract
The 9+2 axoneme structure of the motile flagellum/cilium is an iconic, apparently symmetrical cellular structure. Recently, asymmetries along the length of motile flagella have been identified in a number of organisms, typically in the inner and outer dynein arms. Flagellum beat waveforms are adapted for different functions. They may start either near the flagellar tip or near its base, and may be symmetrical or asymmetrical. We hypothesised that proximal/distal asymmetry in the molecular composition of the axoneme may control the site of waveform initiation and direction of waveform propagation. The unicellular eukaryotic pathogens *Trypanosoma brucei* and *Leishmania mexicana* often switch between tip-to-base and base-to-tip waveforms, making them ideal for analysis of this phenomenon. We show here that the proximal and distal portions of the flagellum contain distinct outer dynein arm docking complex heterodimers. This proximal/distal asymmetry is produced and maintained through growth by a concentration gradient of the proximal docking complex, generated by intraflagellar transport. Furthermore, this asymmetry is involved in regulating whether a tip-to-base or base-to-tip beat occurs, which is linked to a calcium-dependent switch. Our data show that the mechanism for generating proximal/distal flagellar asymmetry can control waveform initiation and propagation direction.
Significance statement
The motile flagellum/cilium is found across all eukaryotic life, and it performs critical functions in many organisms, including humans. A fundamental requirement for a motile flagellum/cilium is that it must undergo the correct waveform for its specific function. Much is known about the generation of asymmetry in flagellum movement; however, it is unknown how a motile flagellum specifies where waves should start and whether waves should go from base-to-tip or from tip-to-base. We show here that in two flagellum model organisms (the human parasites *Trypanosoma brucei* and *Leishmania mexicana*), differences in the outer dynein arms between the distal and proximal regions of the flagellum determine wave propagation direction, and are generated and maintained by the flagellum growth machinery.
Introduction
Eukaryotic flagella/cilia are highly conserved cellular structures and play key roles as both sensory and motile organelles. Motile flagella/cilia undergo different waveforms necessary for their function: a symmetrical sinusoidal or helical flagellar-type beat (human sperm) or an asymmetrical wafting ciliary-type beat (ciliated epithelia, *Chlamydomonas reinhardtii*). Many organisms can switch between asymmetric and symmetric waveforms(1–6), and this switch is typically mediated by calcium(4, 6–14). In addition to the waveform shape, waveform propagation direction can vary, and these two phenomena are distinct. *C. reinhardtii* and animal sperm flagella generally undergo base-to-tip waveforms, while other flagella, including those of *Leishmania* and *Trypanosoma*, normally undergo tip-to-base waveforms(15–17). Many flagella can switch the direction of waveform propagation, including *Leishmania* and the sperm of some animal species, to change the direction of swimming(16, 18–22). Like changes in waveform symmetry, switching of waveform direction is calcium mediated(18–21, 23, 24). Regardless of whether or not it can switch between waveforms, every flagellum/cilium must somehow specify where a waveform should start, and thus the direction in which it propagates.
The mechanism defining the point at which waveforms are initiated has been generally overlooked, possibly because the most popular model of flagellar waveform propagation, the geometric clutch model, suggests that flagellar beating can start spontaneously(25). Moreover, experimental evidence shows that the beat can initiate not only at the flagellar tip but also mid-flagellum in trypanosomatids(16). However, since the initiation point defines the direction of waveform
propagation (proximal initiation for base-to-tip, distal for tip-to-base), determining the mechanisms by which the position and direction of flagellar beat is initiated is critical to our understanding of waveform generation and switching.
Motor proteins in the inner and outer dynein arm complexes (IDAs, ODAs) are key to the generation and control of flagellar movement. The canonical view is that the inner dynein arms generate and regulate the beat, while the outer dynein arms provide the force to generate the final waveform(26). This view is predominantly derived from genetic evidence in *C. reinhardtii*: loss of IDAs (of which there are several classes) tends to alter waveform shape(27), while loss of ODAs reduces flagellar beat frequency with a small effect on waveform shape(27, 28). ODA defects are one of the main causes of primary ciliary dyskinesia (PCD) in humans(29), a recessive genetic disorder characterised by chronic pulmonary disease, randomisation of the left/right body axis and infertility. In *Trypanosoma brucei*, loss of ODAs eliminates the tip-to-base flagellar beat, preventing forward motion of the parasite(15, 30). *C. reinhardtii* mutants lacking ODAs move more slowly and cannot swim backwards in response to stimulation with light(27, 28, 31).
In recent years, studies have revealed that components of the IDAs and ODAs in several organisms are asymmetrically arranged along the length of the flagellum. IDA asymmetries have been identified in *C. reinhardtii*(32, 33), and ODA asymmetries occur in several model organisms. In humans, this asymmetry differs between cell types. In ciliated epithelia the outer arm dynein DNAH5 localises to the whole axoneme while DNAH9 and DNAH11 localise only to the distal axoneme(34–36); in contrast, in sperm DNAH5 localises to the proximal flagellar axoneme and DNAH9 to the whole axoneme(34). *C. reinhardtii* has one microtubule doublet with particularly strong proximal/distal asymmetry(32, 37), and this asymmetry appears to be at least partially due to proximal-only localisation of an ODA-associated complex including *ODA5 & ODA10*(38, 39). The function of proximal/distal asymmetry of ODAs has not previously been analysed in any detail; however, it appears to be important: in humans, disruption of the asymmetric DNAH proteins is associated with defects in ciliary motility and primary ciliary dyskinesia(34–36). In *C. reinhardtii*, mutations in the proximal proteins *ODA5 & ODA10* are associated with defects in swimming, but this phenotype is complicated by the additional roles of these proteins in ODA assembly(38, 39). How these IDA and ODA asymmetries are generated is also largely unknown.
We hypothesised that proximal/distal molecular asymmetries in the flagellum control where the flagellar waveform starts, and so control beat propagation direction and contribute to the control of beat type. Since asymmetry has been observed in the IDAs of *C. reinhardtii* and the ODAs of at least two model organisms, we examined the flagella of *T. brucei* and *Leishmania mexicana* for similar asymmetries. These organisms are well-characterised models for flagellar motility, and are capable of switching between tip-to-base and base-to-tip waveforms. There is some evidence of asymmetry between the proximal and distal regions of the flagellum in *T. brucei*(40), although the functional relevance of this is not yet clear. We show that in both organisms, the proximal and distal regions of the flagellum contain distinct ODA docking complexes (DCs), with an inherent asymmetry achieved early and maintained throughout flagellum growth. We demonstrate that this asymmetry is produced and maintained by an IFT-dependent concentration gradient of DC proteins, generated by retrograde transport of proximal DCs. Finally, we show that ODA proximal/distal asymmetry is involved in regulating whether a tip-to-base or base-to-tip beat occurs, likely via a calcium-dependent switch.
**Results**
We observed that the proximal and distal regions of the *T. brucei* axoneme are not identical by thin-section transmission electron microscopy (TEM). The *T. brucei* flagellum is laterally attached to the
cell for most of its length; therefore, most transverse cross-sections through the flagellum have an attached cross-section through the cell body. This architecture allows unambiguous identification of flagellar axoneme cross-sections as proximal or distal, based on the size and ultrastructure of the neighbouring cell body. Averaged electron density analysis of distal axoneme cross-sections showed a subtle difference in electron density in the outer dynein arm region between the proximal and distal regions of the axoneme (Figure 1A). By this unbiased analysis, no other differences were detectable; however, this does not exclude smaller proximal/distal differences in other structures.
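The averaging step in this kind of analysis can be illustrated as a pixel-wise mean over stacks of aligned cross-section images, followed by a difference map highlighting regions of differing density. This is a minimal sketch with tiny synthetic "images" (nested lists); the function names `mean_image` and `difference_map` are invented for illustration and this is not the actual TEM processing pipeline:

```python
def mean_image(stack):
    """Pixel-wise mean of a stack of aligned images (lists of rows)."""
    n, h, w = len(stack), len(stack[0]), len(stack[0][0])
    return [[sum(img[r][c] for img in stack) / n for c in range(w)]
            for r in range(h)]

def difference_map(stack_a, stack_b):
    """mean(A) - mean(B): non-zero pixels mark density differences."""
    ma, mb = mean_image(stack_a), mean_image(stack_b)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(ma, mb)]

# Tiny synthetic example: the first stack carries extra density at
# pixel (row 1, col 2), standing in for the outer dynein arm region.
stack_with_oda = [[[0, 0, 0], [0, 0, 5], [0, 0, 0]] for _ in range(4)]
stack_without = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]] for _ in range(4)]
diff = difference_map(stack_with_oda, stack_without)
```

Averaging over many cross-sections suppresses per-image noise, so subtle, spatially consistent differences (such as a missing dynein arm) survive in the difference map while random variation averages out.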
We identified candidate proteins that may be responsible for this asymmetry using TrypTag, a project localising every protein encoded in the *T. brucei* genome(41). TrypTag identified several proteins with proximal- or distal-only axoneme localisations, including homologs of the ODA docking complex proteins DC1/ODA3 & DC2/ODA1 in *C. reinhardtii*, corresponding to CCDC151 & CCDC114 respectively in humans. One DC1 and one DC2 homolog (Tb927.5.1900 & Tb927.11.16090 respectively) localised to the distal part (approximately half) of the axoneme and a second DC1 and second DC2 homolog (Tb927.8.4400 & Tb927.7.5660 respectively) localised to the proximal part (Figure 1B), as determined by N-terminal tagging. We named these proteins pDC1 & pDC2 and dDC1 & dDC2, forming the proximal (pDC) and distal (dDC) docking complex pairs, respectively. Asymmetry is unlikely to be due to aberrant targeting since both C-terminal tagging and N-terminal tagging gave the same results (See SI Appendix, Figure S1A).
In..., DC1 and DC2 are predicted to form a coiled coil heterodimer and are mutually dependent for flagellar localisation and function(42, 43). *T. brucei* DC proteins are rich in predicted coiled coils (See SI Appendix, Figure S1B), therefore to test if each of the 4 DC proteins (pDC1, pDC2, dDC1, dDC2) are mutually dependent on their putative partner for correct localisation, we generated inducible RNAi cell lines targeting the open reading frame (ORF) of each DC gene. RNAi target sequences were selected ensuring they are not present elsewhere in the genome(44), and spurious knockdown of an incorrect DC protein is unlikely due to very low (<20%) protein sequence identity and no length of identical DNA sequence >11 nucleotides between any pair of DC proteins. For each RNAi cell line, we fluorescently tagged either the same gene or the expected heterodimer partner at the endogenous locus. In cell lines where the same gene was both tagged and targeted for RNAi, fluorescent signal of the tagged protein was undetectable after 72hr of RNAi induction, confirming effective RNAi knockdown at the protein level (See SI Appendix, Figure S1C, Table 1). When a given DC gene was targeted for RNAi knockdown in a cell line expressing a tagged copy of its putative partner, RNAi induction led to loss of fluorescence signal for the expected partner protein: tagged dDC2 was undetectable following dDC1 RNAi (and vice versa), and tagged pDC2 was undetectable following pDC1 RNAi (and vice versa) (Figure 1C, Table 1). Off-target effects of RNAi seem unlikely, as they would not be expected to generate these clear reciprocal phenotypes. Hence, the four *T. brucei* DC proteins likely make distinct distal (dDC1+dDC2) and proximal (pDC1+pDC2) heterodimers. We therefore focused further on one pDC and one dDC protein, pDC1 and dDC2 respectively.
To examine whether DC proteins performed their expected function of docking the ODAs to the axoneme, we tagged each of the outer arm dynein (OAD) heavy chains (OADα and OADβ), and one inner arm dynein (IAD) heavy chain (IADβ) as a negative control, on the background of RNAi targeting either dDC2 or pDC1. Prior to RNAi induction, both OAD and IAD fluorescence signals extended along the entire flagellum. Induction of pDC1 RNAi for 72hr had no detectable effect on OAD or IAD fluorescent signal (Figure 1D), however induction of dDC2 RNAi for 72hr resulted in decrease of OAD (but not IAD) fluorescence signal in the distal ≈25% of the flagellum (Figure 1D). The loss of ODAs from the distal axoneme only following dDC2 RNAi was confirmed using electron microscopy (Figure 1E).
To determine how ODAs remain attached to the proximal axoneme in the absence of pDC1 we tested whether loss of the proximal docking complex alters localisation of the distal docking complex and *vice versa*. We therefore tagged pDC1 and pDC2 on the background of dDC2 RNAi, and tagged dDC1 and dDC2 on the background of pDC1 RNAi. Following 72hr induction of dDC2 RNAi, the pDC1 and pDC2 signals remained proximal but extended along the flagellum to ≈75% of the flagellum length (Figure 1F, S1D, Table 1), matching the length of the OAD signal upon dDC2 RNAi (Figure 1D, S1D). Following 72hr induction of pDC1 RNAi, the dDC1 and dDC2 fluorescence signals extended to cover the entire flagellum (Figure 1F, S1D), indicating that the distal docking complex can dock ODAs to the entire axoneme in the absence of the proximal docking complex. RNAi knockdown of dDC1 with tagged pDC1 or pDC2 and RNAi knockdown of pDC2 with tagged pDC1 or pDC2 confirmed this result (See SI Appendix, Figure S1F, Table 1). Importantly, axoneme asymmetry changed upon RNAi knockdown of components of the proximal or distal docking complexes, indicating that both of these docking complexes must be present to generate asymmetry.
While lengthways asymmetries along the flagellar axoneme have been observed in other organisms, the mechanism that generates this asymmetry is unknown. To address this issue, we critically considered a number of possible models: DCs may attach to an underlying asymmetry, such as another protein or tubulin modification (Model 1). This is unlikely, as knockdown of DC proteins altered the asymmetry (Figure 1D,F), and thus they cannot purely be clients of an existing asymmetry. Asymmetry may be derived from flagellum growth, with the proximal docking complex assembled early and the distal complex later (Model 2). This is not correct, as proximal/distal asymmetry was achieved early and maintained throughout flagellar growth (see below). We considered the possibility that asymmetry comes from information passed through the lateral attachment of the flagellum to the *T. brucei* cell body (Model 3). This is also unlikely, as knockdown of DC proteins caused asymmetry changes without affecting flagellum/cell body attachment (Figure 1D,F). Finally, asymmetry may be generated intrinsically in the flagellum, by tip structures or by intraflagellar transport (IFT) (Model 4). As docking complex asymmetry extends over the whole flagellum and the switching point occurs at the midpoint, asymmetry is unlikely to be due to direct interaction with the basal body or flagellum tip structures. However, the IFT system that assembles the axoneme could generate asymmetry by creating a proximal/distal concentration gradient of DCs.
To test and make predictions about the IFT model (Model 4) of asymmetry generation we built a quantitative agent-based model of docking complex binding, diffusion and IFT transport. The axoneme was simulated in 100 nm sections, and the proximal and distal heterodimers were simulated as single particles which diffuse along the flagellum, attach to and detach from the axoneme and may be transported by IFT. The key parameters which define the behaviour of the model are the probability of DC attachment (*on*), detachment (*off*), diffusion (*D*) to a neighbouring section, rate of transport by IFT (*T*), and quantity of DCs (*Q*) (Figure 2A). Conceptually, IFT transport of unbound DCs (either retrograde transport for the proximal DCs, anterograde transport for the distal DCs, or transport of both) could generate a concentration gradient which drives proximal/distal asymmetry. This assumes that unbound docking complexes diffuse freely along the axoneme when not transported by IFT and then bind to the next available site. Simulating IFT transport of proximal DCs, distal DCs or both DCs all generated proximal/distal asymmetry (Figure 2B).
Our experiments showed that the effect on distal docking complexes upon proximal complex knockdown and *vice versa* was different: on knockdown of pDCs, the dDCs extended along the entire flagellum, while pDCs were still excluded from part of the distal flagellum on dDC knockdown (Figure 1F). This suggested only one of proximal or distal DCs are transported. We used this to constrain our model by specifying whether one or both DCs were transported by IFT and then simulating the
resulting flagellum asymmetry in the presence of either both DCs, just proximal DCs or just distal DCs, with the latter two mirroring the RNAi experiments (Figure 2B). Simulation of retrograde transport of the proximal docking complex alone matched the changes to docking complex localisation on RNAi knockdown. Any anterograde transport of distal docking complexes prevented the distal docking complex extending along the proximal flagellum in the absence of the proximal complex (Figure 2B cf. Figure 1F). Finally, given retrograde transport of the proximal docking complex, a higher binding affinity of pDC was necessary for the simulation to match the observed DC localisation (Figure 2C cf. Figure 1B). The model gave qualitatively similar results even with large changes to the estimated parameters (See SI Appendix, Figure S3A). Therefore our data is consistent with an asymmetry generation model whereby the higher affinity proximal docking complex is restricted to the proximal axoneme by retrograde IFT transport, while the lower affinity distal docking complex fills the remaining axoneme binding sites.
The IFT-mediated asymmetry model predicts that growing flagella will maintain their proximal/distal asymmetry independent of flagellum length (Figure 2D). *T. brucei* grows a new flagellum each cell cycle, with each daughter cell inheriting one full-length flagellum. The new growing flagellum is always positioned anterior of the old flagellum(45, 46). Using a cell line in which dDC2 and pDC1 were each tagged with different fluorescent proteins, we saw that the new growing flagellum contained similar proportions of proximal pDC1 and distal dDC2 to those found in the old flagellum (Figure 2D). Measurements of the dDC2 signal length confirmed that there was no correlation between flagellum length and the proportion of the flagellum with dDC2 signal (Spearman's rank correlation -0.013, n=123) (Figure 2D). Therefore, the DC asymmetry is established early during flagellum growth, eliminating flagellum growth *per se* as a mechanism of asymmetry generation.
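The length-independence test reduces to a rank correlation between flagellum length and the fraction of each flagellum carrying dDC2 signal. A minimal sketch with synthetic, illustrative data (not the paper's measurements): the hand-rolled `spearman_rho` avoids external dependencies and assumes continuous data with no tied values:

```python
import random

def spearman_rho(x, y):
    """Spearman's rank correlation; assumes continuous data (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rk, i in enumerate(order):
            r[i] = rk
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

rng = random.Random(2)
# Synthetic stand-in for the measurements: flagellum lengths (µm) and
# the fraction of each flagellum with dDC2 signal. Under the IFT model
# the fraction is length-independent, so rho should be near zero.
lengths = [rng.uniform(2, 20) for _ in range(123)]
fractions = [0.5 + rng.gauss(0, 0.05) for _ in range(123)]
rho = spearman_rho(lengths, fractions)
```

A Spearman (rather than Pearson) correlation is appropriate here because it tests for any monotonic length dependence without assuming linearity.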
The IFT model also predicts that new distal docking complex molecules would be incorporated at the distal end of growing flagella. In contrast, the proximal docking complex would be incorporated more slowly and diffusely, weakly focused at the distal end of the proximal docking complex region, which corresponds to the middle of the flagellum (Figure 2E). We tested this prediction using pulse labelling in cell lines expressing dDC2 or pDC1 tagged with HaloTag (See SI Appendix, Figure S2B). Incubation with a non-fluorescent ligand followed by a pulse with fluorescent ligand allowed us to observe the incorporation of new material into the flagellum; as predicted by the model, new dDC2 was incorporated at the distal end of new growing flagella and new pDC1 signal was weaker, focused towards the middle of growing flagella (Figure 2E).
Finally, the IFT model predicts disruption of IFT should alter the proximal/distal asymmetry of the flagellum. The precise effect is hard to predict: IFT disruption also reduces flagellum growth and IFT-mediated entry of docking complexes into the flagellum may be affected. The model suggests that the proximal docking complex is unlikely to require IFT to enter the flagellum, as it is actively removed by retrograde transport, while the distal docking complex may require IFT to enter the flagellum and so be depleted on IFT disruption. Simulation predicts reduced IFT will allow the pDC region to extend distally, outcompeting the binding of dDC, and this effect could be exacerbated by a reduction in the quantity of distal docking complex proteins. We tested this prediction using RNAi knockdown of IFT46 (Tb927.6.3100), which is required for anterograde IFT, thus disrupting both anterograde and retrograde IFT. Induction of IFT46 RNAi for 24 h caused the cellular phenotypes of IFT knockdown(47): cells with shorter flagella, cytokinesis defects, and reduced population growth (See SI Appendix, Figure S2C). We tagged dDC2 and pDC1 with different fluorescent proteins in this RNAi cell line, and then looked at cells 8 h and 16 h after RNAi induction to examine the earliest effects of IFT46 RNAi. In a minority of dividing cells at 8 h and a majority at 16 h the new flagellum had the predicted changes to axoneme asymmetry. The region occupied by pDC1 was greatly expanded, with a corresponding
reduction in dDC2 signal (Figure 2F, Table 1). This reduced proportion of dDC2 signal was inherited by one daughter cell and after 24 h induction cells with a single flagellum and a similar proximal/distal defect were common in the population (See SI Appendix, Figure S2D). This confirms that retrograde transport of proximal docking complexes generates a concentration gradient which, combined with different dissociation rates of pDC and dDC, generates the observed asymmetry in docking complexes.
As outer dynein arms are required for flagellar beating, we reasoned there may be regulatory proteins for modulating the site of waveform initiation (and therefore waveform propagation direction) which bind only to either the proximal or distal ODAs. Using TrypTag(41), we identified a candidate beat regulation protein (Tb927.9.4420) based on its localisation and predicted domains. It is the only EF-hand/calmodulin-domain containing protein (Figure 3A) localised specifically to the distal axoneme (Figure 3B). This protein could plausibly interact with Ca$^{2+}$, a known beat regulator. The most similar *C. reinhardtii* protein is LC4 (but LC4 is not a reciprocal best BLAST result), an outer dynein arm-binding protein implicated in flagellar beat control(48, 49). We named this protein LC4-like. Fluorescently tagged LC4-like localised to the distal axoneme, similar to dDC1/dDC2, and the fluorescence signal was undetectable following induction of dDC2 RNAi knockdown for 72hr, suggesting that LC4-like relies upon the distal docking complex for localisation (Figure 3B, Table 2). These results provide further support for the biological plausibility of LC4-like as a distal ODA regulator. LC4-like is not an obligate part of the docking complex, as 72hr induction of LC4-like RNAi knockdown did not affect the localisation of dDC2 (Figure 3B, Table 2). This is similar to *C. reinhardtii*, where the docking complex heterodimer is associated with a calcium-binding protein, DC3 (*ODA14*)(50); however, we did not find a clear homologue of DC3 by reciprocal best BLAST in *T. brucei*. RNAi knockdown of pDC1 caused the LC4-like signal to extend to the proximal end of the flagellum (Figure 3C, Table 2), as does the distal docking complex. In *C. reinhardtii* LC4 binds the ODAs(49), and therefore LC4-like may only bind to ODAs attached by the distal docking complex, although it may bind directly to the dDCs.
We predicted that disruption of proximal/distal asymmetry or LC4-like would alter the direction of flagellum waveform propagation. However, full analysis of flagellar waveforms is complicated in *T. brucei* due to the lateral attachment of the flagellum to the cell body. To better analyse changes in flagellum movement we used the related parasite *L. mexicana*, which does not have a laterally attached flagellum, greatly simplifying the beat waveform analysis. We identified *L. mexicana* homologs of dDC1, dDC2, pDC1 and pDC2 (LmxM.10.0960, LmxM.31.2900, LmxM.15.0540 and LmxM.06.1040 respectively). Each has comparable localisations to those in *T. brucei*, except that the proximal docking complex occupies ≈20% of the proximal axoneme rather than 50% (See SI Appendix, Figure S3A). Deletion of both alleles of dDC2 on the background of fluorescently tagged pDC1 caused distal extension of the pDC1 signal and loss of ODAs from the distal axoneme, similar to dDC2 RNAi knockdown in *T. brucei* (See SI Appendix, Figure S3B,C cf. Figure 1E,F). The localisation of LC4-like (LmxM.01.0620) in *L. mexicana* also reflected that of *T. brucei* and deletion of both dDC2 alleles caused loss of LC4-like from the axoneme, similar to dDC2 RNAi knockdown in *T. brucei* (See SI Appendix, Figure S3D cf. Figure 3B).
Deletion of both alleles of *L. mexicana* dDC2 decreased the speed, velocity and directionality of cell swimming (Figure 4A), though flagella still moved. We analysed flagellum movement with 200 Hz high framerate video microscopy. *L. mexicana* can undergo both a tip-to-base sinusoidal flagellar beat and a base-to-tip asymmetric ciliary type flagellar beat(16). Normal flagellum movement in the parental cell line was mostly a flagellar beat, with occasional pauses or ciliary beats, and a large minority of cells undergoing low-frequency or uncoordinated movement (Figure 4B, Video 1). Flagellum movement after dDC2 deletion entirely lacked flagellar beats. Approximately half of flagella were uncoordinated and half of cells underwent an asymmetric base-to-tip reverse beat, far more than in
the parental cell line (Figure 4C,D, Video 1). The shape of base-to-tip waveforms was unaffected, with the same asymmetric shape as the parental cell line. The distal ODAs are therefore required for the tip-to-base flagellar beat to occur, most likely by initiating the flagellar beat waveform at the distal end.
Deletion of both LC4-like alleles caused a significant increase in swimming speed and velocity (Figure 4A). High-framerate video of the LC4-like deletion showed that flagellum movement was almost entirely a flagellar beat, with fewer pauses in the beat, less uncoordinated movement and far fewer ciliary beats than in the parental cell line (Figure 4B, Video 1). The frequency of the flagellar beat was also significantly higher in the LC4-like deletion (44.2±8.9 Hz, n=33) than in the parental cell line (25.4±9.0 Hz, n=27) (Mann-Whitney U test, p<10^{-10}), although the waveform showed the same sinusoidal shape. LC4-like therefore appears to be a regulatory protein that inhibits the initiation of flagellar beat waveforms, and the absence of LC4-like has the opposite effect to that of missing distal outer dynein arms.
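The frequency comparison uses a Mann-Whitney U test. A self-contained sketch (normal approximation, no tie correction; the samples below are synthetic illustrations drawn around the reported group means and SDs, not the measured beat frequencies):

```python
import math
import random

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test using a normal approximation
    (no tie correction; adequate for continuous measurements)."""
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no ties
    r1 = sum(rank[v] for v in a)
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p

rng = random.Random(3)
# Illustrative beat-frequency samples (Hz) drawn around the reported
# group statistics: parental 25.4 ± 9.0 (n=27), deletion 44.2 ± 8.9 (n=33).
parental = [rng.gauss(25.4, 9.0) for _ in range(27)]
knockout = [rng.gauss(44.2, 8.9) for _ in range(33)]
u, p = mann_whitney_u(parental, knockout)
```

A rank-based test is the natural choice here because beat-frequency distributions from small cell counts need not be normal, and the U statistic is insensitive to outlier cells.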
**Discussion**
It is becoming increasingly apparent that there are molecular asymmetries in the IDAs and ODAs of the axoneme both between the outer microtubule doublets(51, 52) and along the length of the axoneme(32–34, 36) in many organisms. In *C. reinhardtii*, proximal/distal and doublet-doublet asymmetries in the IDAs have been implicated in controlling whether a flagellum waveform is asymmetric or symmetrical(27). We show here that proximal/distal axoneme asymmetry is also important in controlling the site of flagellum waveform initiation and therefore the direction of waveform propagation. The control of the site of waveform initiation and the control of waveform asymmetry are significantly different phenomena; however, as both involve proximal/distal asymmetries in dynein arms, there may be similarities in the underlying mechanisms.
We have concentrated on the lengthwise axonemal asymmetry and demonstrate that both *T. brucei* and *L. mexicana* use distinct proximal and distal docking complexes to confer proximal/distal asymmetry on the molecular composition of ODAs in the axoneme. The resulting axoneme structure is similar to previous reports of asymmetry of outer arm dyneins in human cilia and sperm flagella(34–36), and to proximal/distal asymmetry of one axoneme microtubule doublet in *C. reinhardtii*(32, 37), which is linked to docking complex-related accessory proteins(38, 39). It is perhaps surprising that we identified the ODAs, rather than IDAs, as important for specifying the site of waveform initiation in *T. brucei* and *L. mexicana*. The canonical view (based primarily on *C. reinhardtii*) is that ODAs provide the force which drives the beat, while the IDAs initiate and regulate the waveform shape(26). However, we saw no clear proximal/distal asymmetry of IDAs by TEM in *T. brucei*, while loss of distal ODA DCs or the ODA-associated LC4-like was sufficient to change the site of waveform initiation. Changes in the site of waveform initiation preserved the normal waveform shapes: symmetric for tip-to-base and asymmetric for base-to-tip waveforms. This is consistent with IDAs regulating the shape of the waveform (as in *C. reinhardtii*) but ODAs regulating the site of wave initiation. While we cannot formally exclude a parallel or underlying role of IDAs in controlling whether waveform initiation is distal or proximal, we saw no change in the localisation of IDA components during docking complex knockdown. However, as *C. reinhardtii* flagella always undergo base-to-tip waveforms, it is also possible that *C. reinhardtii* has lost some proximal/distal ODA asymmetries.
Given that this asymmetry occurs in diverse organisms, proximal/distal asymmetry in ODAs may represent a general mechanism for defining proximal and distal regions of the flagellum, although the specific proteins involved may differ through evolution. Guided by this, we identified a similar, previously unrecognised phenomenon by analysing published data from the unrelated unicellular parasite *Giardia lamblia*(53). We
noticed that this species also has proximal and distal DC1-like proteins (GL50803_13288 & GL50803_16998) but a single DC2 (GL50803_114462).
Proximal/distal axoneme asymmetry had been described, but the mechanism of asymmetry generation was unknown, although competition for axoneme binding between distal and proximal components has been suggested for *C. reinhardtii* and *T. brucei* (33, 40). We show that asymmetry of the docking complexes is generated at the very earliest stages of flagellar growth and maintained as the flagellum elongates. Therefore the mechanism by which asymmetry is generated is intrinsic to the flagellar growth machinery. By modelling various permutations of the IFT transport of each docking complex, and comparing these with the experimental data, we show that two factors are sufficient to generate the asymmetries we observed: lower-affinity binding of the distal docking complex and retrograde transport of the proximal docking complex by IFT. Retrograde transport of the proximal docking complex toward the base of the flagellum generates and maintains a concentration gradient of the proximal docking complex. The higher affinity of the proximal docking complex out-competes binding of the distal docking complex, generating the axoneme asymmetry, and unbound distal docking complex is free to diffuse throughout the flagellum and fills in the remaining spaces. This model is simple, and in reality there are likely to be additional complexities, yet it precisely matches our experimental observations: i) maintenance of asymmetry throughout flagellar growth, ii) the locations at which newly synthesised proximal and distal docking complexes are incorporated into the growing flagellum, and iii) the effects on docking complex distribution when IFT is disrupted.
Diffusion of docking complexes along the axoneme, filling available binding sites, has previously been observed in *C. reinhardtii* flagella lacking ODAs; the docking complex enters the base of the flagellum, binds first to the proximal axoneme and proceeds distally, filling each unoccupied site (54). Some evidence for diffusion of the ODA protein DNAI1 has also been observed in *T. brucei* (55). Diffusion alone could initially generate a proximal/distal axoneme asymmetry; for example, diffusion of a limited quantity of high-affinity proximal docking complex into the proximal flagellum out-competes a low-affinity distal docking complex. However, our simulation indicated that retrograde transport of the proximal complex by IFT was required to maintain this asymmetry over an extended period. Together, our data indicate that a concentration gradient generated by directional transport of a protein complex can generate and maintain asymmetry in an organelle, and may be a fundamental mechanism through which an organelle can generate internal asymmetry/structure, analogous to the concentration gradients driving polarity in cell and tissue development.
We show that proximal/distal asymmetry of ODAs in *T. brucei* and *L. mexicana* is involved in the control of the site of initiation of flagellum beat waveforms. Loss of the distal docking complex resulted in loss of ODAs from only the distal portion of the flagellum, and the subsequent loss of tip-to-base waveforms demonstrates that ODAs in this region are required for initiation and/or propagation of tip-to-base waveforms. This is consistent with previous studies showing that beat initiation occurs in the most distal 2 µm of the flagellum (16). Proteins involved in regulating waveform shape are often associated with the ODAs, with LC1 and LC4 representing well-characterised examples in *C. reinhardtii* (48, 49, 56), though IDA components are also important. In *T. brucei*, knockdown of the ODA component LC1 resulted in loss of tip-to-base waveform generation (15, 29), but this was complicated by the complete loss of ODAs in these mutants. Our data showed that a distal-only LC4-like protein, with a Ca$^{2+}$ binding site, is a repressor of distal initiation of tip-to-base waveforms. This reveals that the distal ODAs are an important site for the regulation of initiation of flagellar waveforms, thus controlling whether the waveform travels tip-to-base or base-to-tip.
Flagellum movement arises from dynein-driven sliding of neighbouring axoneme microtubule doublets (57), and there are three different models to explain waveform propagation (57). The leading model is arguably the geometric clutch hypothesis, which states that mechanical distortion of the axoneme as it bends modulates force generation by controlling dynein arm engagement (25, 58, 59). The simplest interpretation of our results in the context of this hypothesis is that the distal docking complex positions the distal ODAs so that they are more likely than the proximal ODAs to spontaneously engage and start a flagellar waveform (although more complex interpretations, such as involvement of the IDAs, are also possible). Loss of distal ODAs prevents distal waveform initiation, allowing spontaneous proximal initiation. Calcium binding to LC4-like could then modulate the likelihood of engagement of the distal ODAs, regulating the site of waveform initiation and, therefore, the direction of beat propagation. This role for LC4-like is consistent with previous data in *C. reinhardtii*, where the switch from flagellar to ciliary beating upon photostimulation is calcium-mediated (7, 60).
Given that mutations both in docking complexes (61) and in proteins with an asymmetric localisation (36) lead to primary ciliary dyskinesia in humans, a better understanding of the mechanisms by which asymmetry arises and how it contributes to flagellar motility is essential. Here, we demonstrate that molecular asymmetries within the axoneme can be generated by an IFT-dependent concentration gradient of proteins within the flagellum, and that this asymmetry is linked to the control of waveform initiation, defining whether a tip-to-base or base-to-tip beat occurs. This control is mediated by a potentially calcium-responsive protein, which relies on the distal docking complexes for proper localisation, enabling *T. brucei* and *L. mexicana* parasites to switch flagellum waveform propagation direction and control their swimming. It seems likely that proximal/distal asymmetry is a common feature of cilia and flagella, and that the true extent and function of this important phenomenon are only just beginning to become clear.
**Methods**
SmOxP9 procyclic *T. brucei* (derived from TREU 927, expressing T7 RNA polymerase and the tetracycline repressor (62)) were grown in SDM79 with 10% FCS (63). Constructs for endogenous mNeonGreen (mNG) or mScarlet (mSc) tagging were generated by PCR and transfected as previously described (64), with the pPOT version 4 series of vectors used as PCR templates (64), specifically pPOT mNG Blast or pPOT mSc Neomycin. Target sequences were selected and primers were designed using TAGit ([http://www.sdeanresearch.com/cgi-bin/tagitA.cgi](http://www.sdeanresearch.com/cgi-bin/tagitA.cgi)) (64) (see SI Appendix, Table S1). Constructs for RNAi were generated using the pQuadra system (65). Primers for amplification of the target ORF fragment were designed using RNAit ([http://trypanofan.path.cam.ac.uk/software/RNAit.html](http://trypanofan.path.cam.ac.uk/software/RNAit.html)). Transfectants were selected with the necessary combination of 5 μg/ml blasticidin S hydrochloride, 5 μg/ml G-418 disulfate and 10 μg/ml phleomycin, and cloned by limiting dilution in 96-well plates.
Cas9T7 *L. mexicana* (derived from WHO strain MNYC/BZ/62/M379, expressing Cas9 and T7 RNA polymerase (66)) were grown in M199 supplemented with 2.2 g/l NaHCO₃, 0.005% haemin, 40 mM HEPES-HCl pH 7.4 and 10% FCS. Constructs and sgRNA templates for endogenous mNG tagging were generated by PCR as previously described (66) and transfected as previously described (64). The pLrPOT series of vectors were used as PCR templates for generating tagging constructs, specifically pLrPOT mNG Blast. These are a variant of pLPOT (64) with *T. brucei* and *Crithidia fasciculata* 5’ or 3’ untranslated regions (UTRs) and intergenic sequences replaced with complete *L. mexicana* intergenic sequences. The *T. brucei* actin 5’ UTR was replaced with the *L. mexicana* actin (LmxM.04.1230) 5’ UTR, the *T. brucei* aldolase 3’ UTR/*C. fasciculata* PGKB 5’ UTR fusion was replaced with the *L. mexicana* histone 2B intergenic sequence (between LmxM.19.0050 and LmxM.19.0030), the *C. fasciculata* PGKA/B intergenic sequence was replaced with the *L. mexicana* histone 2A intergenic sequence (between LmxM.08_29.1740 and LmxM.08_29.1730), and the *T. brucei* aldolase 3’ UTR was replaced with the *L. mexicana* eukaryotic initiation factor 5 (LmxM.25.0720) 3’ UTR. Constructs and sgRNA templates for open reading frame deletion were generated by PCR and transfected as previously described, using
pT Blast and pT Neo as templates (66). Primers were designed using LeishGEdit (http://www.LeishGEdit.net) (66). Transfectants were selected with the necessary combination of 20 µg/ml puromycin dihydrochloride, 5 µg/ml blasticidin S hydrochloride, 40 µg/ml G-418 disulfate, 50 µg/ml nourseothricin sulfate and 25 µg/ml phleomycin, and cloned by limiting dilution in 96-well plates using MM199 as previously described (64).
*T. brucei* and *L. mexicana* cultures were grown at 28°C. Culture density was maintained between 1×10⁵ cells/ml (*T. brucei* and *L. mexicana*) and 1×10⁷ cells/ml for continued exponential population growth. Culture density was measured using a CASY model TT cell counter (Roche Diagnostics) with a 60 µm capillary.
*T. brucei* and *L. mexicana* cell lines expressing fluorescent fusion proteins were imaged live. Cells were washed three times by centrifugation at 800 g followed by resuspension in vPBS (PBS supplemented with 10 mM glucose and 46 mM sucrose). DNA was stained by including 10 µg/ml Hoechst 33342 in the second wash. Washed cells were settled on glass slides and immediately observed. To generate cytoskeletons, cells were prepared as for live cell microscopy, the membrane was solubilised with 0.5% NP40 in PEME (100 mM PIPES:NaOH pH 6.9, 2 mM EGTA, 1 mM MgSO₄ and 100 nM EDTA) for 30 s, then the remaining cytoskeleton was fixed by immersion in −20°C methanol for 20 min. Cytoskeletons were then rehydrated in PBS, mounted in 50 mM phosphate-buffered glycerol pH 8.0 and imaged. For fluorescent labelling of HaloTag fusion proteins, cells were incubated in culture with fluorophore-conjugated ligands. For labelling of all HaloTag fusion proteins, cells were incubated with TMRDirect (tetramethylrhodamine) ligand (Promega) at 0.1 µM final concentration for 45 min. For pulse labelling of HaloTag fusion proteins, cells were incubated with Coumarin ligand (Promega) at 10 µM final concentration for 45 min, washed three times with medium, then incubated with 0.1 µM TMRDirect ligand for 45 min. We could not detect the expected blue fluorescence of the Coumarin ligand, but found it was an effective block, as described previously (67). Widefield epifluorescence and phase contrast images were captured using a Zeiss Axioimager.Z2 microscope with a 63× NA 1.40 oil immersion objective and a Hamamatsu ORCA-Flash4.0 camera. Cell morphology measurements were made in ImageJ (68).
Swimming and flagellar beat behaviours were analysed for cells in exponential growth in normal culture medium, essentially as previously described (17). For cell swimming analysis, a 25.6 s video at 5 frames per second under darkfield illumination was captured from 5 µl of cell culture in a 250 µm deep chamber using a Zeiss Axioimager.Z2 microscope with a 10× NA 0.3 objective and a Hamamatsu ORCA-Flash4.0 camera. Particle tracks were traced automatically, and mean cell speed, mean cell velocity and cell directionality (the ratio of velocity to speed) were calculated as previously described (17). For flagellar beat analysis, a 4 s video at 200 frames per second under phase contrast illumination was captured from a thin film of cell culture between a slide and coverslip using a Zeiss Axiovert.A1 microscope with a 20× NA 0.3 objective and an Andor Neo 5.5 camera. Unlike in previous work, glass slides and coverslips were blocked with bovine serum albumin (BSA) to reduce cell adhesion to the glass: they were immersed in 1% BSA for 60 s, washed with water and allowed to dry prior to use. Flagellar beat behaviours for each cell in the 4 s videos were classified manually.
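The speed, velocity and directionality measures follow directly from their definitions; a minimal sketch (assuming 2D tracks sampled at fixed frame intervals, not the published analysis code) is:

```python
import numpy as np

def track_metrics(positions, dt):
    """Compute mean speed, mean velocity magnitude and directionality
    (the ratio of velocity to speed) for one particle track.

    positions: (N, 2) array of x, y coordinates (e.g. in um)
    dt: time between frames in seconds (0.2 s at 5 frames per second)
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)            # frame-to-frame displacements
    path_length = np.linalg.norm(steps, axis=1).sum()
    duration = dt * (len(positions) - 1)
    mean_speed = path_length / duration           # distance travelled per unit time
    net_displacement = np.linalg.norm(positions[-1] - positions[0])
    mean_velocity = net_displacement / duration   # net progress per unit time
    directionality = mean_velocity / mean_speed   # 1 = straight line, -> 0 = erratic
    return mean_speed, mean_velocity, directionality
```

A perfectly straight track gives directionality 1, while a track that returns to its origin gives 0, which is why the ratio discriminates directional swimmers from tumbling cells.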
Thin-section transmission electron microscopy samples were prepared as previously described (69, 70). Sections with a nominal thickness of ≈70 nm were cut, stained with lead citrate, and then observed using an FEI Tecnai 12 TEM with a Gatan Ultrascan 1,000 CCD camera. Transverse sections through flagella were classified as proximal or distal based on the width of the cell body to which the flagellum was attached: proximal if the cell body was over ≈500 nm wide, and distal if under ≈500 nm or if the flagellum was not laterally attached to a cell. Ninefold rotational averages of the axoneme structure (Markham rotations) were generated following perspective correction to ensure a
circular axoneme cross-section, as previously described (71–73). Axoneme cross-sections were pooled from negative controls from previous studies, then stacked and averaged in ImageJ (68) to generate average proximal and distal axoneme electron density.
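Markham-style n-fold rotational averaging amounts to rotating the image about its centre by each multiple of 360/n degrees and averaging the results; a sketch using `scipy.ndimage` (assuming a pre-centred cross-section, not the ImageJ pipeline actually used) is:

```python
import numpy as np
from scipy.ndimage import rotate

def markham_average(image, n_fold=9):
    """N-fold rotational average (Markham rotation) of a 2D image.

    Rotates the image about its centre by each multiple of 360/n_fold
    degrees and averages, reinforcing n-fold symmetric features (here
    the nine outer doublets) relative to noise. Assumes the axoneme
    centre already coincides with the image centre.
    """
    image = np.asarray(image, dtype=float)
    acc = np.zeros_like(image)
    for k in range(n_fold):
        acc += rotate(image, angle=360.0 * k / n_fold,
                      reshape=False, order=1, mode='nearest')
    return acc / n_fold
```

`reshape=False` keeps the output the same shape as the input so the rotated copies can be summed directly.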
The agent-based simulation of flagellum assembly and DC proximal/distal asymmetry was written in Javascript/NodeJS. pDC and dDC complexes were simulated as two particle types, and particles could either be attached at a fixed position in the axoneme or detached and free to diffuse. The flagellum was simulated in discrete bins from proximal to distal (segments) and in discrete time steps (intervals). An evaluation interval of 0.1 s and a flagellum segment size of 100 nm were selected for useful granularity of binding/dissociation events and axoneme binding capacity for DCs. DC binding capacity of the axoneme was 37 /segment, assuming ninefold axoneme symmetry and a 24 nm repeat of outer dynein arms (74). Probability of detached DC diffusion to an adjacent segment was 0.436 /interval, calculated from the probability of diffusing 1 segment distance in 1 interval, assuming a 5 nm docking complex effective hydrodynamic radius and a flagellar cytoplasm viscosity 670× greater than that of water (derived from BioNumbers (75) ID 108250, the diffusion constant of GFP in water, and the diffusion rate of GFP in bacterial cytoplasm (76)). Diffusion occurs equally in both directions, therefore the probabilities of anterograde diffusion and retrograde diffusion ($D$) were both 0.208 /interval. Probability of DC binding to a free site in the axoneme ($on$) was set to 1, assuming binding kinetics are fast relative to dissociation and diffusion. Probability of dissociation ($off$) was initially set to $4 \times 10^{-5}$, giving a dissociation half-life on the order of 1 h. Flagellum growth rate was constant and set to 10 $\mu$m/h, up to a maximum length of 23 $\mu$m (77). Quantity of docking complex protein ($Q$) is expressed as a factor excess over the number of binding sites in half of the flagellum and was initially set to 2.0. For example, $Q_{dist}=Q_{prox}=1$ indicates an equal number of DC particles and axoneme binding sites, half pDC and half dDC; $Q_{dist}=Q_{prox}=2.0$ indicates a 100% excess.
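The 0.436 /interval diffusion probability can be approximately reproduced from the Stokes-Einstein relation. The sketch below is an assumed reconstruction: the temperature and water viscosity used are not stated in the text, so the constants chosen here (28°C, the culture temperature) give a value near, but not exactly, 0.436.

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
T = 301.0                   # assumed ~28 C (culture temperature), K
ETA_WATER = 0.83e-3         # assumed viscosity of water at ~28 C, Pa s
VISCOSITY_FACTOR = 670.0    # flagellar cytoplasm relative to water
RADIUS = 5e-9               # DC effective hydrodynamic radius, m
SEGMENT = 100e-9            # axoneme segment length, m
INTERVAL = 0.1              # evaluation interval, s

def step_probability():
    """Probability that a detached DC diffuses at least one segment
    (in either direction) during one interval, via Stokes-Einstein."""
    eta = ETA_WATER * VISCOSITY_FACTOR
    d_coeff = K_B * T / (6.0 * math.pi * eta * RADIUS)   # m^2/s
    sigma = math.sqrt(2.0 * d_coeff * INTERVAL)          # 1D rms displacement, m
    z = SEGMENT / sigma
    # P(|X| >= SEGMENT) for a zero-mean Gaussian displacement
    return math.erfc(z / math.sqrt(2.0))
```

With these assumed constants the result is ≈0.43, consistent with the 0.436 /interval figure given the uncertainty in temperature and viscosity.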
In every interval in which flagellum growth led to addition of a new axoneme segment, the necessary number of new detached pDC and dDC particles were added to the base of the flagellum. IFT was simulated by sweeping from proximal to distal (anterograde transport) or distal to proximal (retrograde transport) and moving the first detached dDC (for anterograde) or pDC (for retrograde) particle encountered to the distal or proximal end of the axoneme, respectively. Quantity of IFT ($T$) was initially set to 0.2 /interval, assuming around 5 IFT trains per second (78) with one binding site per train. Dissociation probabilities were simulated with the following values: $off_{dist}=off_{prox}=4 \times 10^{-5}$; $off_{dist}=1 \times 10^{-5}$, $off_{prox}=1.6 \times 10^{-4}$; and $off_{dist}=1.6 \times 10^{-4}$, $off_{prox}=1 \times 10^{-5}$. The final values were $off_{dist}=1.6 \times 10^{-4}$ and $off_{prox}=1 \times 10^{-5}$. pDC or dDC knockdowns were simulated with $Q_{prox}$, $Q_{dist}$, $T_{prox}$ and $T_{dist}$ as indicated in the text. The final values were $Q_{dist}=Q_{prox}=2.0$, $T_{prox}=0.2$ and $T_{dist}=0.0$.
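The simulation loop described above can be sketched as follows. This is a scaled-down, illustrative reimplementation, not the authors' Javascript code: the flagellum is shortened and the dissociation and retrograde IFT rates are accelerated so that the pDC-proximal/dDC-distal pattern emerges within a few thousand 0.1 s intervals, while the off-rate asymmetry (pDC binding more tightly than dDC) and retrograde-pDC-only IFT follow the text.

```python
import random

SITES = 37                        # DC binding sites per 100 nm segment
D = 0.208                         # per-direction diffusion prob./interval
OFF = {'p': 1e-4, 'd': 1.6e-2}    # accelerated off rates; pDC binds more tightly
T_RETRO = 1.0                     # accelerated retrograde IFT (pDC only)
MAX_SEG = 20                      # scaled-down flagellum length, segments
GROW_EVERY = 20                   # intervals between new distal segments

def simulate(n_intervals=2000, seed=1):
    rng = random.Random(seed)
    bound = []  # bound[i]: list of DC types ('p' or 'd') attached at segment i
    free = []   # detached particles as (type, segment) tuples
    for t in range(n_intervals):
        # growth: new segment appended at the tip; new DCs injected at the base
        if t % GROW_EVERY == 0 and len(bound) < MAX_SEG:
            bound.append([])
            free += [('p', 0)] * (SITES // 2) + [('d', 0)] * (SITES // 2)
        n = len(bound)
        # dissociation: bound DCs detach with type-specific probability
        for i in range(n):
            still = []
            for typ in bound[i]:
                if rng.random() < OFF[typ]:
                    free.append((typ, i))
                else:
                    still.append(typ)
            bound[i] = still
        # diffusion: detached DCs step to a neighbouring segment
        for k, (typ, i) in enumerate(free):
            r = rng.random()
            if r < D and i + 1 < n:
                free[k] = (typ, i + 1)
            elif D <= r < 2 * D and i > 0:
                free[k] = (typ, i - 1)
        # retrograde IFT: carry the most distal detached pDC back to the base
        if rng.random() < T_RETRO:
            cand = [k for k, (typ, _) in enumerate(free) if typ == 'p']
            if cand:
                free[max(cand, key=lambda k: free[k][1])] = ('p', 0)
        # binding (on = 1): detached DCs occupy any free site at their segment
        rng.shuffle(free)
        still_free = []
        for typ, i in free:
            if len(bound[i]) < SITES:
                bound[i].append(typ)
            else:
                still_free.append((typ, i))
        free = still_free
    return bound

def half_counts(bound):
    """(proximal pDC, proximal dDC, distal pDC, distal dDC) bound counts."""
    half = len(bound) // 2
    prox = [t for seg in bound[:half] for t in seg]
    dist = [t for seg in bound[half:] for t in seg]
    return prox.count('p'), prox.count('d'), dist.count('p'), dist.count('d')
```

Because pDC rarely dissociates and any detached pDC is returned to the base, pDC ratchets into the proximal half while the faster-churning dDC is displaced distally, reproducing the qualitative steady-state asymmetry of the full model.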
The localisations of the *T. brucei* docking complex proteins were initially identified from TrypTag.org (41). *T. brucei* and *L. mexicana* protein and genome sequences were accessed using TriTrypDB.org (79). *L. mexicana* orthologs of *T. brucei* proteins were identified using synteny and orthogroup data from TriTrypDB.org (79) and manually confirmed by reciprocal best BLAST. *C. reinhardtii*, *G. lamblia* and human orthologs were identified using orthogroups found by OrthoFinder (80) run on 45 diverse eukaryote genomes, then manually checked by BLAST search. We used the same set of 45 ciliated and non-ciliated organism genomes as in previous studies concerning the evolution of flagellar/ciliary components (81). Coiled coils were predicted using Coils v2.2 with default parameters (82). Key residues in the EF-hand fold were derived from SUPERFAMILY v1.75 (supfam.org) (83), specifically 1a03 A (S100 protein set) for SSF47473, and visualised using WebLogo (84).
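Reciprocal best BLAST confirmation reduces to checking that each gene's best-scoring hit in the other genome points back to it. A minimal sketch over parsed hit tables (the tuple layout is an assumption for illustration, e.g. from BLAST tabular output) is:

```python
def reciprocal_best_hits(ab_hits, ba_hits):
    """Identify reciprocal best BLAST hit pairs between two genomes.

    ab_hits / ba_hits: iterables of (query, subject, bit_score) tuples
    from BLAST searches of genome A vs B and B vs A respectively.
    Returns the set of (gene_a, gene_b) pairs where each gene is the
    other's best-scoring hit.
    """
    def best(hits):
        # Keep the highest-scoring subject for each query.
        top = {}
        for query, subject, score in hits:
            if query not in top or score > top[query][1]:
                top[query] = (subject, score)
        return {q: s for q, (s, _) in top.items()}

    a_best = best(ab_hits)
    b_best = best(ba_hits)
    return {(a, b) for a, b in a_best.items() if b_best.get(b) == a}
```

Only pairs whose best hits agree in both directions survive, which is what makes the criterion a conservative orthology check.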
Acknowledgements
We would like to thank Heather Jeffrey and Eva Gluenz for generating the *L. mexicana* LC4-like deletion cell line, Helen Farr and Emily Poon for contributing EM images, and Catherine McEnhill for contributing to flagellum beat data collection. We would especially like to thank Samuel Dean and the other co-PIs of TrypTag. This work was supported by the Wellcome Trust [103261/Z/13/Z, 108445/Z/15/Z, 104627/Z/14/Z].
Author contributions
JDS, RJW & KG designed and directed the research project. BE performed preliminary experiments. BFLE, JDS & ARB generated and analysed *T. brucei* cell lines. RJW generated and analysed *L. mexicana* cell lines. FML performed electron microscopy. ARB and JDS developed asymmetry mechanism experiments, RJW simulated the asymmetry mechanism. BFLE & RJW analysed cell swimming and flagellar beats. All authors contributed to data analysis and drafting the manuscript.
References
1. Ringo DL (1967) Flagellar Motion and Fine Structure of the Flagellar Apparatus in Chlamydomonas. *J Cell Biol* 33(3):543–571.
2. Ueki N, Matsunaga S, Inouye I, Hallmann A (2010) How 5000 independent rowers coordinate their strokes in order to row into the sunlight: Phototaxis in the multicellular green alga Volvox. *BMC Biol* 8:103.
3. Holwill MEJ (1966) The Motion of Euglena Viridis: The Role of Flagella. *J Exp Biol* 44(3):579–588.
4. Diehn B, Fonseca JR, Jahn TL (1975) High Speed Cinemicrography of the Direct Photophobic Response of Euglena and the Mechanism of Negative Phototaxis. *J Protozool* 22(4):492–494.
5. Kung C, Chang SY, Satow Y, Houten JV, Hansma H (1975) Genetic dissection of behavior in paramecium. *Science* 188(4191):898–904.
6. Naitoh Y (1968) Ionic Control of the Reversal Response of Cilia in Paramecium caudatum. *J Gen Physiol* 51(1):85–103.
7. Bessen M, Fay RB, Witman GB (1980) Calcium control of waveform in isolated flagellar axonemes of chlamydomonas. *J Cell Biol* 86(2):446–455.
8. Hyams JS, Borisy GG (1975) Flagellar coordination in Chlamydomonas reinhardtii: isolation and reactivation of the flagellar apparatus. *Science* 189(4206):891–893.
9. Schmidt JA, Eckert R (1976) Calcium couples flagellar reversal to photostimulation in Chlamydomonas reinhardtii. *Nature* 262(5570):713–715.
10. Hyams JS, Borisy GG (1978) Isolated flagellar apparatus of Chlamydomonas: characterization of forward swimming and alteration of waveform and reversal of motion by calcium ions in vitro. *J Cell Sci* 33(1):235–253.
11. Doughty MJ, Diehn B (1979) Photosensory transduction in the flagellated alga, Euglena gracilis I. Action of divalent cations, Ca2+ antagonists and Ca2+ ionophore on motility and photobehavior. *Biochim Biophys Acta* 588(1):148–168.
12. Iwadate Y (2003) Photolysis of caged calcium in cilia induces ciliary reversal in Paramecium caudatum. *J Exp Biol* 206(7):1163–1170.
13. Iwadate Y, Nakaoka Y (2008) Calcium regulates independently ciliary beat and cell contraction in Paramecium cells. *Cell Calcium* 44(2):169–179.
14. Shiba K, Inaba K (2017) Inverse relationship of Ca2+-dependent flagellar response between animal sperm and prasinophyte algae. *J Plant Res* 130(3):465–473.
15. Baron DM, Kabututu ZP, Hill KL (2007) Stuck in reverse: loss of LC1 in Trypanosoma brucei disrupts outer dynein arms and leads to reverse flagellar beat and backward movement. *J Cell Sci* 120(Pt 9):1513–1520.
16. Gadelha C, Wickstead B, Gull K (2007) Flagellar and ciliary beating in trypanosome motility. *Cell Motil Cytoskeleton* 64(8):629–643.
17. Wheeler RJ (2017) Use of chiral cell shape to ensure highly directional swimming in trypanosomes. *PLoS Comput Biol* 13(1):e1005353.
18. Shiba K, Shibata D, Inaba K (2014) Autonomous changes in the swimming direction of sperm in the gastropod Strombus luhuanus. *J Exp Biol* 217(6):986–996.
19. Holwill MEJ, McGregor JL (1975) Control of flagellar wave movement in Crithidia oncopelti. *Nature* 255(5504):157–158.
20. Johnston DN, Silvester NR, Holwill MEJ (1979) An Analysis of the Shape and Propagation of Waves on the Flagellum of *Crithidia Oncopelti*. *J Exp Biol* 80(1):299–315.
21. Sugrue P, Hirons MR, Adam JU, Holwill ME (1988) Flagellar wave reversal in the kinetoplastid flagellate *Crithidia oncopelti*. *Biol Cell* 63(2):127–131.
22. Yang Y, Lu X (2011) *Drosophila* Sperm Motility in the Reproductive Tract. *Biol Reprod* 84(5):1005–1015.
23. Mukhopadhyay AG, Dey CS (2016) Reactivation of flagellar motility in demembranated *Leishmania* reveals role of cAMP in flagellar wave reversal to ciliary waveform. *Sci Rep* 6:37308.
24. Mukhopadhyay AG, Dey CS (2017) Role of calmodulin and calcineurin in regulating flagellar motility and wave polarity in *Leishmania*. *Parasitol Res*:1–8.
25. Lindemann CB (1994) A “Geometric Clutch” Hypothesis to Explain Oscillations of the Axoneme of Cilia and Flagella. *J Theor Biol* 168(2):175–189.
26. Brokaw CJ (1994) Control of flagellar bending: a new agenda based on dynein diversity. *Cell Motil Cytoskeleton* 28(3):199–204.
27. Brokaw CJ, Kamiya R (1987) Bending patterns of *Chlamydomonas* flagella: IV. Mutants with defects in inner and outer dynein arms indicate differences in dynein arm function. *Cell Motil Cytoskeleton* 8(1):68–75.
28. Mitchell DR, Rosenbaum JL (1985) A motile *Chlamydomonas* flagellar mutant that lacks outer dynein arms. *J Cell Biol* 100(4):1228–1234.
29. Papon JF, et al. (2010) A 20-year experience of electron microscopy in the diagnosis of primary ciliary dyskinesia. *Eur Respir J* 35(5):1057–1063.
30. Branche C, et al. (2006) Conserved and specific functions of axoneme components in trypanosome motility. *J Cell Sci* 119(Pt 16):3443–3455.
31. Kamiya R, Okamoto M (1985) A mutant of *Chlamydomonas reinhardtii* that lacks the flagellar outer dynein arm but can swim. *J Cell Sci* 74:181–191.
32. Bui KH, Yagi T, Yamamoto R, Kamiya R, Ishikawa T (2012) Polarity and asymmetry in the arrangement of dynein and related structures in the *Chlamydomonas* axoneme. *J Cell Biol* 198(5):913–925.
33. Yagi T, Uematsu K, Liu Z, Kamiya R (2009) Identification of dyneins that localize exclusively to the proximal portion of *Chlamydomonas* flagella. *J Cell Sci* 122(9):1306–1314.
34. Fliegauf M, et al. (2005) Mislocalization of DNAH5 and DNAH9 in respiratory cells from patients with primary ciliary dyskinesia. *Am J Respir Crit Care Med* 171(12):1343–1349.
35. Panizzi JR, et al. (2012) CCDC103 mutations cause primary ciliary dyskinesia by disrupting assembly of ciliary dynein arms. *Nat Genet* 44(6):714–719.
36. Dougherty GW, et al. (2016) DNAH11 Localization in the Proximal Region of Respiratory Cilia Defines Distinct Outer Dynein Arm Complexes. *Am J Respir Cell Mol Biol* 55(2):213–224.
37. Hoops HJ, Witman GB (1983) Outer doublet heterogeneity reveals structural polarity related to beat direction in *Chlamydomonas* flagella. *J Cell Biol* 97(3):902–908.
38. Dean AB, Mitchell DR (2015) Late steps in cytoplasmic maturation of assembly-competent axonemal outer arm dynein in *Chlamydomonas* require interaction of ODA5 and ODA10 in a complex. *Mol Biol Cell* 26(20):3596–3605.
39. Wirschell M, et al. (2004) Oda5p, a novel axonemal protein required for assembly of the outer dynein arm and an associated adenylate kinase. *Mol Biol Cell* 15(6):2729–2741.
40. Subota I, et al. (2014) Proteomic analysis of intact flagella of procyclic *Trypanosoma brucei* cells identifies novel flagellar proteins with unique sub-localisation and dynamics. *Mol Cell Proteomics MCP* 13(7):1769–86.
41. Dean S, Sunter JD, Wheeler RJ (2017) TrypTag.org: A *Trypanosome* Genome-wide Protein Localisation Resource. *Trends Parasitol* 33(2):80–82.
42. Koutoulis A, et al. (1997) The *Chlamydomonas reinhardtii* ODA3 gene encodes a protein of the outer dynein arm docking complex. *J Cell Biol* 137(5):1069–1080.
43. Takada S, Wilkerson CG, Wakabayashi K, Kamiya R, Witman GB (2002) The outer dynein arm-docking complex: composition and characterization of a subunit (oda1) necessary for outer arm assembly. *Mol Biol Cell* 13(3):1015–1029.
44. Redmond S, Vadivelu J, Field MC (2003) RNAit: an automated web-based tool for the selection of RNAi targets in *Trypanosoma brucei*. *Mol Biochem Parasitol* 128(1):115–118.
45. Wheeler RJ, Scheumann N, Wickstead B, Gull K, Vaughan S (2013) Cytokinesis in *Trypanosoma brucei* differs between bloodstream and tsetse trypomastigote forms: implications for microtubule-based morphogenesis and mutant analysis. *Mol Microbiol* 90(6):1339–55.
46. Gull K, et al. (1990) The cell cycle and cytoskeletal morphogenesis in *Trypanosoma brucei*. *Biochem Soc Trans* 18(5):720–722.
47. Kohl L, Robinson D, Bastin P (2003) Novel roles for the flagellum in cell morphogenesis and cytokinesis of trypanosomes. *EMBO J* 22(20):5336–5346.
48. Sakato M, King SM (2003) Calcium Regulates ATP-sensitive Microtubule Binding by Chlamydomonas Outer Arm Dynein. *J Biol Chem* 278(44):43571–43579.
49. Sakato M, Sakakibara H, King SM (2007) Chlamydomonas Outer Arm Dynein Alters Conformation in Response to Ca2+. *Mol Biol Cell* 18(9):3620–3634.
50. Casey DM, Yagi T, Kamiya R, Witman GB (2003) DC3, the Smallest Subunit of the Chlamydomonas Flagellar Outer Dynein Arm-docking Complex, Is a Redox-sensitive Calcium-binding Protein. *J Biol Chem* 278(43):42652–42659.
51. Hoops HJ, Witman GB (1983) Outer doublet heterogeneity reveals structural polarity related to beat direction in Chlamydomonas flagella. *J Cell Biol* 97(3):902–908.
52. Bui KH, Sakakibara H, Movassagh T, Oiwa K, Ishikawa T (2009) Asymmetry of inner dynein arms and inter-doublet links in Chlamydomonas flagella. *J Cell Biol* 186(3):437–446.
53. Hagen KD, et al. (2011) Novel Structural Components of the Ventral Disc and Lateral Crest in Giardia intestinalis. *PLoS Negl Trop Dis* 5(12):e1442.
54. Owa M, et al. (2014) Cooperative binding of the outer arm-docking complex underlies the regular arrangement of outer arm dynein in the axoneme. *Proc Natl Acad Sci* 111(26):9461–9466.
55. Vincensini L, et al. (2017) Flagellar incorporation of proteins follows at least two different routes in trypanosomes. *Biol Cell* 110(2):33–47.
56. Patel-King RS, King SM (2009) An outer arm dynein light chain acts in a conformational switch for flagellar motility. *J Cell Biol* 186(2):283–295.
57. Lindemann CB, Lesich KA (2010) Flagellar and ciliary beating: the proven and the possible. *J Cell Sci* 123(4):519–528.
58. Lindemann CB (2004) Testing the geometric clutch hypothesis. *Biol Cell* 96(9):681–690.
59. Lindemann CB, Lesich KA (2015) The geometric clutch at 20: stripping gears or gaining traction? *Reproduction* 150(2):R45–R53.
60. Wakabayashi K, King SM (2006) Modulation of Chlamydomonas reinhardtii flagellar motility by redox poise. *J Cell Biol* 173(5):743–754.
61. Hjeij R, et al. (2014) CCDC151 Mutations Cause Primary Ciliary Dyskinesia by Disruption of the Outer Dynein Arm Docking Complex Formation. *Am J Hum Genet* 95(3):257–274.
62. Poon SK, Peacock L, Gibson W, Gull K, Kelly S (2012) A modular and optimized single marker system for generating *Trypanosoma brucei* cell lines expressing T7 RNA polymerase and the tetracycline repressor. *Open Biol* 2(2):110037.
63. Brun R, Schönenberger M (1979) Cultivation and in vitro cloning of procyclic culture forms of *Trypanosoma brucei* in a semi-defined medium. Short communication. *Acta Trop* 36(3):289–292.
64. Dean S, et al. (2015) A toolkit enabling efficient, scalable and reproducible gene tagging in trypanosomatids. *Open Biol* 5(1):140197.
65. Inoue M, et al. (2005) The 14-3-3 Proteins of *Trypanosoma brucei* Function in Motility, Cytokinesis, and Cell Cycle. *J Biol Chem* 280(14):14085–14096.
66. Beneke T, et al. (2017) A CRISPR Cas9 high-throughput genome editing toolkit for kinetoplastids. *R Soc Open Sci* 4(5):170095.
67. Dean S, Moreira-Leite F, Varga V, Gull K (2016) Cilium transition zone proteome reveals compartmentalization and differential dynamics of ciliopathy complexes. *Proc Natl Acad Sci* 113(35):E5135–E5143.
68. Collins TJ (2007) ImageJ for microscopy. *BioTechniques* 43(1 Suppl):25–30.
69. Höög JL, Gluenz E, Vaughan S, Gull K (2010) Ultrastructural investigation methods for *Trypanosoma brucei*. *Methods Cell Biol* 96:175–196.
70. Gluenz E, Wheeler RJ, Hughes L, Vaughan S (2015) Scanning and three-dimensional electron microscopy methods for the study of *Trypanosoma brucei* and *Leishmania mexicana* flagella. *Methods Cell Biol* 127:509–542.
71. Gadelha C, Wickstead B, McKean PG, Gull K (2006) Basal body and flagellum mutants reveal a rotational constraint of the central pair microtubules in the axonemes of trypanosomes. *J Cell Sci* 119(Pt 12):2405–2413.
72. Wheeler RJ, Gluenz E, Gull K (2015) Basal body multipotency and axonemal remodelling are two pathways to a 9+0 flagellum. *Nat Commun* 6:8964.
73. Markham R, Frey S, Hills GJ (1963) Methods for the enhancement of image detail and accentuation of structure in electron microscopy. *Virology* 20(1):88–102.
74. Hughes LC, Ralston KS, Hill KL, Zhou ZH (2012) Three-Dimensional Structure of the Trypanosome Flagellum Suggests that the Paraflagellar Rod Functions as a Biomechanical Spring. *PLoS ONE* 7(1):e25700.
75. Moran U, Milo R, Jorgensen PC, Weber GM, Springer M (2009) BioNumbers—The Database of Key Numbers in Molecular and Cell Biology. Available at: https://dash.harvard.edu/handle/1/8063390 [Accessed October 20, 2017].
76. Mullineaux CW, Nenninger A, Ray N, Robinson C (2006) Diffusion of Green Fluorescent Protein in Three Cell Environments in Escherichia Coli. *J Bacteriol* 188(10):3442–3448.
77. Tyler KM, Matthews KR, Gull K (2001) Anisomorphic cell division by African trypanosomes. *Protist* 152(4):367–378.
78. Fort C, Bonnefoy S, Kohl L, Bastin P (2016) Intraflagellar transport is required for the maintenance of the trypanosome flagellum composition but not length. *J Cell Sci* jcs.188227.
79. Aurrecoechea C, et al. (2017) EuPathDB: the eukaryotic pathogen genomics database resource. *Nucleic Acids Res* 45(D1):D581–D591.
80. Emms DM, Kelly S (2015) OrthoFinder: solving fundamental biases in whole genome comparisons dramatically improves orthogroup inference accuracy. *Genome Biol* 16:157.
81. Hodges ME, Wickstead B, Gull K, Langdale JA (2011) Conservation of ciliary proteins in plants with no cilia. *BMC Plant Biol* 11:185.
82. Lupas A, Dyke MV, Stock J (1991) Predicting coiled coils from protein sequences. *Science* 252(5009):1162–1164.
83. Wilson D, et al. (2009) SUPERFAMILY—sophisticated comparative genomics, data mining, visualization and phylogeny. *Nucleic Acids Res* 37(suppl_1):D380–D386.
84. Crooks GE, Hon G, Chandonia J-M, Brenner SE (2004) WebLogo: A Sequence Logo Generator. *Genome Res* 14(6):1188–1190.
**Figure Legends**
**Figure 1. The trypanosome flagellum has proximal/distal asymmetry arising from proximal and distal-specific outer dynein arm DCs.**
**A.** Nine-fold rotational averages of TEM of transverse sections through the *T. brucei* axoneme. Averages were generated from either the proximal (*n*=23) or distal (*n*=24) region. A difference map shows differences only in the outer dynein arms. **B.** Widefield epifluorescence micrographs of *T. brucei* cells expressing DC proteins tagged with mNG at the N terminus. Overlays of phase contrast (grey), DNA (Hoechst 33342, magenta) and mNG (green) images and mNG fluorescence alone are shown. **C.** Micrographs of *T. brucei* RNAi cells targeting dDC1, dDC2, pDC1 or pDC2 ORFs and respectively expressing mNG tagged dDC2, dDC1, pDC2 or pDC1. Induction of RNAi for 72 h caused loss of fluorescence signal in each case. Summarised in Table 1. **D.** Micrographs of *T. brucei* RNAi cell lines targeting dDC2 or pDC1 expressing mNG tagged OADα, OADβ or IADβ. Induction of dDC2 RNAi for 72 h caused loss of distal OADα and OADβ (circled) while pDC1 RNAi had no effect on OADα and OADβ fluorescence. **E.** TEMs (left) and nine-fold rotational averages (right) of transverse sections through the axoneme of the *T. brucei* dDC2 RNAi cell line. Outer dynein arms are present in uninduced samples (representative of *n*=25) and in the proximal axoneme 72 h after induction of RNAi (representative of *n*=11), but absent in the distal axoneme after 72 h induction of RNAi (representative of *n*=7). CB cell body, F flagellum. **F.** Micrographs of *T. brucei* RNAi cell lines targeting dDC2 or pDC1 and respectively expressing mNG tagged pDC1 or dDC2. 72 h induction of dDC2 RNAi caused distal extension of pDC1 and pDC2 fluorescent signal (circled), and 72 h induction of pDC1 RNAi caused proximal extension of dDC1 and dDC2 fluorescent signal (circled). Summarised in Table 1.
**Figure 2. Retrograde movement of proximal DCs by IFT is responsible for proximal/distal axoneme asymmetry.**
**A.** Outline of the agent-based model of DC distribution by IFT and diffusion, showing key parameters: *D*<sub>prox</sub> and *D*<sub>dist</sub>, the probability of diffusion of proximal or distal DCs to the neighbouring axoneme segment; *on*<sub>prox</sub> and *on*<sub>dist</sub>, the probability of DC binding; *off*<sub>prox</sub> and *off*<sub>dist</sub>, the probability of DC unbinding; *T*<sub>prox</sub> and *T*<sub>dist</sub>, the capacity for anterograde and retrograde IFT transport; and *Q*<sub>prox</sub> and *Q*<sub>dist</sub> (not shown in diagram), the quantity of DCs. **B.** Simulated distribution of proximal (green) and distal (magenta) DCs in full length flagella with different rates of IFT (*T*<sub>prox</sub> and *T*<sub>dist</sub>) and quantities of proximal and distal DCs (*Q*<sub>prox</sub> and *Q*<sub>dist</sub>). The conditions *T*<sub>prox</sub> = 0.2 and *T*<sub>dist</sub> = 0.0 (retrograde transport of proximal DCs) most closely matched the experimental data (Figure 1F). **C.** Simulated distribution of proximal and distal DCs in full length flagella with different rates of dissociation of DCs (*off*<sub>prox</sub> and *off*<sub>dist</sub>). The condition *off*<sub>prox</sub> < *off*<sub>dist</sub> most closely matched the experimental data (Figure 1B). **D.** Comparison of simulated distribution of proximal and distal DCs in growing
flagella (left) to micrographs of dividing cells with a new growing flagellum (middle). In the micrographs the new flagellum is indicated (dotted outline) in a *T. brucei* cell line expressing mNG tagged pDC1 and mSc tagged dDC2. Measured proportion of flagellum with dDC2 signal was always approximately 50%, independent of flagellum length (right). **E.** Comparison of simulation of new protein incorporation in a new growing flagellum (left) to micrographs of *T. brucei* cells in an equivalent pulse–chase experiment (right). In the micrographs the new growing flagellum is indicated (dotted outline) in a *T. brucei* cell line expressing HaloTag tagged dDC2 or pDC1 labelled with a 45 min pulse of tetramethylrhodamine HaloTag ligand. The flagellum was labelled with an anti-PFR antibody. **F.** Comparison of simulated distribution of proximal and distal DCs in growing flagella with or without IFT transport of the distal DC and with no IFT transport and reduced quantity of distal DC (left), in comparison to micrographs of dividing cells in a *T. brucei* cell line expressing N terminally mNG tagged pDC1 and N terminally mSc tagged dDC2 8 h after induction of IFT46 RNAi knockdown (right). In the simulation and the new flagellum of dividing cells reduced IFT caused a shorter region of distal DC/dDC2 signal.
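As a rough illustration of how an agent-based model of this kind behaves, the sketch below simulates a single DC species on a 1D lattice of axoneme segments with diffusion, binding/unbinding, and retrograde IFT. Parameter names echo panel A, but the update scheme and values here are illustrative assumptions, not the model actually used in the paper.

```python
import random

def simulate_dc(n_seg=50, n_dc=200, steps=3000,
                d=0.5, on=0.2, off=0.01, t_retro=0.2, seed=1):
    """Toy 1D model of docking-complex (DC) distribution along an axoneme.

    Each DC is either bound at a lattice segment or free. A free DC is
    carried toward the proximal end (segment 0) by retrograde IFT with
    probability t_retro, otherwise diffuses to a neighbouring segment with
    probability d, and binds with probability on; a bound DC unbinds with
    probability off. Names mirror Figure 2A; values are illustrative.
    """
    rng = random.Random(seed)
    pos = [rng.randrange(n_seg) for _ in range(n_dc)]
    bound = [False] * n_dc
    for _ in range(steps):
        for i in range(n_dc):
            if bound[i]:
                if rng.random() < off:
                    bound[i] = False
            else:
                if rng.random() < t_retro and pos[i] > 0:
                    pos[i] -= 1                      # retrograde IFT step
                elif rng.random() < d:
                    pos[i] = min(n_seg - 1, max(0, pos[i] + rng.choice((-1, 1))))
                if rng.random() < on:
                    bound[i] = True                  # dock at current segment
    profile = [0] * n_seg
    for p, b in zip(pos, bound):
        if b:
            profile[p] += 1
    return profile

# With retrograde transport, bound DCs accumulate proximally.
profile = simulate_dc()
```

Setting `t_retro = 0` in the same sketch yields a roughly uniform profile, which is the qualitative contrast illustrated in panel B.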
**Figure 3. A calcium-binding LC4-like protein is a candidate regulator of the flagellar beat.**
**A.** Sequence logo of the Ca$^{2+}$ binding site of 100 reference EF-hand domains, and the aligned *T. brucei* and *L. mexicana* LC4-like sequences. **B.** Micrographs of a *T. brucei* cell line with integrated RNAi constructs targeting dDC2 or pDC1 and expressing LC4-like tagged with mNG at the N terminus. 72 h induction of dDC2 RNAi causes loss of flagellar LC4-like signal, and 72 h induction of pDC1 RNAi causes proximal extension of LC4-like signal (circled). Summarised in Table 1. **C.** Micrographs of a *T. brucei* cell line with an integrated RNAi construct targeting LC4-like and expressing dDC2 or pDC1 tagged with mNG at the N terminus. 72 h induction of LC4-like RNAi causes no change in pDC1 or dDC2 signal. Summarised in Table 1.
**Figure 4. Proximal/distal flagellum asymmetry contributes to control of flagellum beat type.**
**A.** Swimming paths, swimming speed and directionality of *L. mexicana* cell lines with deletion of both alleles of dDC2 or LC4-like in comparison to the parental cell line. Swimming tracks show slower, less directional, swimming following dDC2 deletion, which occasionally corresponds to tight helical paths (circled). dDC2 deletion caused a significant decrease in speed, velocity and proportion of highly directional (>0.5) cells, while LC4-like deletion caused a significant increase in speed and velocity (Student's t-test). **B.** Proportion of cells undergoing different types of flagellar movement for *L. mexicana* cell lines with deletion of both alleles of dDC2 or LC4-like in comparison to the parental cell line. Behaviours were assessed from a 4 s 200 Hz videomicrograph. dDC2 deletion caused significantly more ciliary and uncoordinated movement, and LC4-like deletion caused significantly more uninterrupted flagellar-type movement ($\chi^2$ test, $p<10^{-3}$). **C.** Example kymographs of flagellar movement illustrating the types of flagellar movement used as classes in **B.** **D.** Frames from high speed videos corresponding to the kymographs in **C.** Propagation of a flagellum wave over the frames is indicated (white arrow) relative to the cell posterior (black arrow).
*(Figure 1 image panels. Gene identifiers: dDC1 = Tb927.5.1900, dDC2 = Tb927.11.16090, pDC1 = Tb927.8.4400, pDC2 = Tb927.7.5660; OADα = Tb927.3.930, OADβ = Tb927.11.3250, IADβ = Tb927.8.3250. Scale bars: 200 nm and 500 nm (TEM), 10 μm (micrographs).)*
*(Figure 2 image panels. Simulated conditions vary the Q, off and T parameters; IFT46 = Tb927.6.3100. Scale bars: 100 nm (model diagram), 10 μm (micrographs).)*
*(Figure 3 image panels. A: sequence logo of the EF-hand (SSF47473) Ca$^{2+}$ binding site; aligned sequences ...DTLRAFVSLGGNEDGSGSVLVEDLR... (*T. brucei* LC4-like) and ...DTLKAFIALGGGEDGSGEILASTLR... (*L. mexicana* LC4-like). B, C: micrographs; LC4-like = Tb927.9.4420. Scale bars: 10 μm.)*
*(Figure 4 image panels. A: swimming tracks (scale bar 50 μm) with velocity, speed and directionality plots. B: beat-type proportions (flagellar constant, flagellar interrupted, uncoordinated/static, ciliary/reverse; n = 137, 94 and 67 cells). C: kymographs (scale bars 10 μm, 200 ms). D: example frames (0 to 330 ms).)*
| Protein tagged | No RNAi | dDC1 RNAi | dDC2 RNAi | pDC1 RNAi | pDC2 RNAi |
|----------------|---------|-----------|-----------|-----------|-----------|
| **dDC1** | Distal 50% | 0% (No signal) | 0% (No signal) | 100% (Whole flagellum) | 100% (Whole flagellum) |
| **dDC2** | Distal 50% | 0% (No signal) | 0% (No signal) | 100% (Whole flagellum) | 100% (Whole flagellum) |
| **pDC1** | Proximal 50% | Proximal 75% | Proximal 75% | 0% (No signal) | 0% (No signal) |
| **pDC2** | Proximal 50% | Proximal 75% | Proximal 75% | 0% (No signal) | 0% (No signal) |
Table 1. Summary of DC protein localisation changes upon DC RNAi knockdowns in *T. brucei*.
| Protein tagged | No RNAi | dDC2 RNAi | pDC1 RNAi | LC4-like RNAi |
|----------------|---------|-----------|-----------|---------------|
| **dDC2** | Distal 50% | 0% (No signal) | 100% (Whole flagellum) | Distal 50% |
| **pDC1** | Proximal 50% | Proximal 75% | 0% (No signal) | Proximal 50% |
| **LC4-like** | Distal 50% | 0% (No signal) | 100% (Whole flagellum) | 0% (No signal) |
Table 2. Summary of evidence for LC4-like localisation dependent on distal DCs in *T. brucei*.
Supplementary Information for
Direction of flagellum beat propagation is controlled by proximal/distal outer dynein arm asymmetry
Beatrice FL Edwards*, Richard J Wheeler*, Amy R Barker*, Flávia F Moreira-Leite, Keith Gull, Jack D Sunter
* Equal contribution
Corresponding authors: Richard J Wheeler, Jack D Sunter
Email: email@example.com, firstname.lastname@example.org
This PDF file includes:
Figures S1 to S3
Table S1
Caption for Movie S1
Other supplementary materials for this manuscript include the following:
Movie S1
**Figure S1 (related to Figure 1). DC proximal/distal asymmetry is present irrespective of the terminus tagged, and RNAi constructs are effective at knocking down tagged DC protein.**
A. Widefield epifluorescence micrographs of mNG native fluorescence in *T. brucei* cells expressing DC proteins tagged with mNG at the C terminus. Phase contrast (grey), DNA (Hoechst 33342, magenta) and mNG (green) overlay and mNG fluorescence are shown. Localisations are the same as with N terminal tagging (Figure 1B).
B. Predicted coiled coils in *T. brucei* DCs. COILS v2.2 coiled coil propensity with a window of 14 (blue), 21 (green) and 28 (yellow) and the maximum of all three (black).
C. Micrographs of *T. brucei* cell lines with integrated inducible RNAi constructs targeting dDC1, dDC2, pDC1 or pDC2 protein open reading frame and expressing the same proteins tagged with mNG at the N terminus. 72 h induction of RNAi causes loss of flagellar mNG signal showing effective knockdown of the mNG DC fusion protein.
D. Proportion of the axoneme with fluorescent signal from pDC1, OADα or OADβ tagged with mNG at the N terminus before and 72 h after induction of dDC2 RNAi, and dDC2, OADα or OADβ tagged with mNG at the N terminus before and 72 h after induction of pDC1 RNAi. This is a quantitation of the phenomenon illustrated in Figure 1D,F. For samples with 100% axoneme signal all examples of >100 cells had no region lacking signal. Error bars indicate standard deviation.
E. Micrographs of *T. brucei* RNAi cell lines targeting dDC1 or pDC2 and respectively expressing mNG tagged pDC1 or dDC2. 72 h induction of dDC1 RNAi caused distal extension of pDC1 and pDC2 fluorescent signal (circled), and 72 h induction of pDC2 RNAi caused proximal extension of dDC1 and dDC2 fluorescent signal (circled). Summarised in Table 1.
**Figure S2 (related to Figure 2). Controls for analysing the mechanism of axoneme proximal/distal docking complex asymmetry.**
A. Behaviour of the agent-based model of proximal/distal axoneme asymmetry with large changes to parameters: 100-fold larger or smaller values of $off$, $T$ or $D$ and unequal values of $Q_{prox}$ and $Q_{dist}$. Large parameter changes still give proximal/distal asymmetry, although the quantitative features are altered.
B. Micrographs of *T. brucei* cell lines expressing N terminally HaloTag tagged dDC2 or pDC1 following 45 min incubation with tetramethylrhodamine HaloTag ligand (no blocking step) to label all DC protein. HaloTag tagged dDC2 and pDC1 have the same localisation as mNG tagged dDC2/pDC1, cf. Figure 1B, S1B.
C. Summary of the phenotype caused by IFT46 RNAi knockdown. Following RNAi induction, population growth is slowed and the proportion of cells with normal numbers of nuclei (N) and kinetoplasts (K) (1K1N, 2K1N, 2K2N; blue shades) are reduced, and 1K2N and 1K0N cells (indicating cytokinesis defects; orange shades) are more prevalent.
D. Micrographs of *T. brucei* cells expressing N terminally mNG tagged pDC1 and N terminally mSc tagged dDC2 (as in Figure 2F) 24 h after induction of IFT46 RNAi knockdown. After this longer IFT46 RNAi induction many flagella are abnormally short, both on cells with one flagellum and the new flagellum of dividing cells. Abnormally short flagella have a very short region of dDC2 signal.
**Figure S3 (related to Figure 4).** *L. mexicana* has similar proximal/distal axoneme asymmetry to *T. brucei*. **A.** Widefield epifluorescence micrographs of mNG native fluorescence in *L. mexicana* cells expressing DC proteins tagged with mNG at the N terminus, *cf.* Figure 1B. Phase contrast (grey), DNA (Hoechst 33342, magenta) and mNG (green) overlay and mNG fluorescence are shown. **B.** Micrographs of *L. mexicana* cell lines expressing pDC1 tagged with mNG at the N terminus, before and after deletion of both alleles of dDC2. dDC2 deletion causes distal extension of pDC1 fluorescent signal (circled), *cf.* Figure 1F. **C.** Transmission electron micrographs (left) and nine-fold rotational averages (right) of transverse sections through the axoneme of the *L. mexicana* dDC2 deletion cell line. Outer dynein arms are present in the proximal axoneme (representative of $n=8$), but absent in the distal axoneme (representative of $n=9$). CB cell body, F flagellum. **D.** Micrographs of *L. mexicana* cell lines expressing LC4-like tagged with mNG at the N terminus showing a distal axoneme localisation, *cf.* Figure 3C. **E.** PCR confirmation of deletion of both alleles of the LC4-like or dDC2 open reading frames in the *L. mexicana* deletion cell lines used for swimming analysis (Figure 4). The asterisk indicates the expected PCR product size when the open reading frame is present.
Movie S1 (related to Figure 4). High speed videomicrographs and the corresponding kymographs of different classes of *L. mexicana* flagellar movement. Four examples of flagellar (tip-to-base symmetrical), interrupted flagellar, uncoordinated/static and ciliary (base-to-tip asymmetrical) beating are shown. Cell movement due to swimming or rotation has been digitally subtracted. |
“Ghost Cities” Analysis Based on Positioning Data in China
Guanghua Chi\textsuperscript{a,b}, Yu Liu\textsuperscript{b}, Zhengwei Wu\textsuperscript{a}, Haishan Wu\textsuperscript{a}
\textsuperscript{a} Big Data Lab, Baidu Research, Baidu Inc., Beijing 100085, China
\textsuperscript{b} Institute of Remote Sensing and Geographic Information Systems, Peking University, Beijing 100871, China
Abstract: Real estate has been developed excessively in China over the past decade. Many new housing districts have been built, but in some cities they far exceed actual demand. Cities with a high housing vacancy rate are called "ghost cities." The real extent of vacant housing in China has not been examined in previous research. Using Baidu positioning data, this study presents the spatial distribution of vacant housing areas in China and classifies locations with a large vacant housing area as either ordinary cities or tourism sites. To the best of our knowledge, this is the first time that "ghost cities" in China have been detected and analyzed at such a fine scale. To understand human dynamics in "ghost cities," we select one city and one tourism site as cases and analyze their population dynamics. This study illustrates the capability of big data to sense our cities objectively and comprehensively.
Keywords: Ghost City, Real Estate, Big Data, Vacant Housing Area
**Introduction**
China has experienced rapid development over the past decade. From 1984 to 2010, the urban built-up area increased from 8,842 km$^2$ to 41,768 km$^2$ (Nie & Liu, 2013). The speed of urbanization is unprecedented in human history, with an enormous number of buildings constructed in a very short time (Xue & Tsai, 2013); the amount of concrete used in China in the three years 2011-2013 exceeded that used in the U.S. during the entire 20$^{th}$ century (McCarthy, 2014). This rapid urbanization has contributed to high housing vacancy rates in some cities: many new housing districts have been built, but they far exceed actual demand. In these cities the population density is very low and the residential districts show few lights at night, so they are called "ghost cities." In his book *Ghost Cities of China*, Shepard (2015) defined a "ghost city" as "a new development that is running at severe undercapacity, a place with drastically fewer people and businesses than there is available space for."
The "ghost city" phenomenon has attracted much attention in recent years. Shepard (2015) observed that China, the world's most populated country, "without a doubt has the world's largest number of empty homes." Chinese Premier Keqiang Li warned of the risks of rapid urbanization, saying that "Urbanization is not about building big, sprawling cities. We should aim to avoid the typical urban malady where skyscrapers coexist with shanty towns" (Ryan, 2013). Zuoji Dong, head of the Ministry of Land and Resources planning bureau, said that "new guidance issued by the ministry would allow for strict controls on new urban development. Unless a city's population is too dense or expansion is deemed necessary to cope with natural disasters, new urban districts will not be permitted" (Rafagopalan, 2014).
Media have reported on many Chinese cities with large vacant housing areas, but their assessments of a given "ghost city" are sometimes completely contradictory. For example, some outlets have covered the seriousness of vacant housing in Rushan City, while others have reported that Rushan has gained residents and already shed the "ghost" label. These reports, based on photographs or on counting homes with lights on at night, have been criticized for low credibility. Moreover, the underlying causes of "ghost cities" may differ greatly despite a similar outward appearance. Tourism in China has developed rapidly in recent years, and cities with attractive tourism resources have built many vacation houses to meet tourist demand. During popular tourism seasons these otherwise vacant housing areas fill with people, while in other seasons the population is small. It is therefore unfair to treat tourism sites and ordinary cities alike. This raises the question: what is the real situation of "ghost cities" in China?
The housing vacancy rate is one of the most important indicators for evaluating the health of a city's real estate market, and it can also be used to identify "ghost cities." However, the Chinese government has not published any data on vacancy rates; the National Bureau of Statistics of China has noted that the difficulty lies in defining a standard for the status of vacant houses and the length of vacancy (House China, 2015). Besides the vacancy rate, two other definitions exist. The Ministry of Housing and Urban-Rural Development of China uses a standard of 10 thousand people per 1 km$^2$ area. Based on this standard, a 2014 ranking defined cities with a population density below half the standard as "ghost cities" (Su, 2014); the top three cities by this indicator were Erenhot, Qinzhou and Lhasa. Chen (2014) proposed an alternative indicator, $(S-D)/n$, where $S$ is the supply of new houses over the following five years, $D$ is the demand for new houses over the same period, and $n$ is the number of existing houses. This ratio reflects the proportion of current houses that would have to be removed to balance supply and demand; the top three cities by this indicator were Ordos, Yingkou and Ulanqab.
It is difficult to obtain real estate and population data with high spatial resolution, which impedes the understanding of "ghost cities." The studies of Su and Chen cannot pinpoint the exact locations of vacant housing areas; they reflect only the average level of "ghost" in each city, let alone reveal the reasons behind "ghost cities," and as aggregated results they are questionable. Fortunately, the emergence of big data brings opportunities to understand objectively the status of, and even the reasons behind, "ghost cities." Widely used location-aware devices (LAD), such as mobile phones and GPS receivers, generate large volumes of individual trajectory data over long time scales and at high resolution, which makes them suitable for population analysis. For example, Kang et al. (2012) used mobile phone data to estimate the population distribution in China. Such data provide a means of observing urban dynamics from a micro perspective, including human migration and interaction between regions. Following the social sensing concept proposed by Liu et al. (2015), we can use the data generated by individuals to sense our living environment.
Whether so-called "ghost cities" truly have high housing vacancy has long been disputed for lack of data to verify it. In this study, we use mobile-app location data and residential-area point-of-interest data from Baidu, the largest search engine company in China. The basic idea for discovering vacant housing areas is that only a small population lives in them. To the best of our knowledge, this is the first time that "ghost cities" in China have been detected and analyzed at such a fine scale. We do not attempt to rank which areas have the most "ghost cities"; instead, we aim to find the exact locations of vacant housing areas at present. Even if a city's vacant housing area is currently very large, it will not necessarily remain highly vacant in the future. The Zhengdong New District, for example, has developed quickly in the past few years and attracted a large number of people.
**Methodology**
**Data Description**
This study uses two types of datasets: Baidu positioning data and points of interest (POI). The attributes of the Baidu positioning data are anonymized user ID, latitude, longitude, and time, with several billion positioning points recorded each day; the time span is from September 8, 2014 to April 22, 2015. Its national spatial coverage, long temporal span, and high precision make our study of "ghost cities" representative and reliable. We acknowledge that this dataset is biased: it cannot represent the full demography of a city, such as very young and very old people or those who do not use smartphones. However, it can represent population density, which is the focus of our study. The POI data include POI name, latitude, longitude, and category; their processing is discussed in the next section.
**Discovering and Classifying Vacant Housing Area**
The basic idea for discovering vacant housing areas is that only a small population lives in such a residential area. We therefore compute two variables from our data: users' home locations and the locations of residential areas. We first applied the DBSCAN algorithm to estimate each user's home location from their positioning points recorded between 9:00 pm and 6:00 am. DBSCAN (density-based spatial clustering of applications with noise) is a widely used density-based clustering algorithm and is more computationally efficient than methods such as MeanShift (Ester et al., 1996). It is based on the idea of density reachability: points that are closely located are grouped into clusters, while points in low-density regions are treated as noise. The algorithm requires two parameters: the maximum distance $\varepsilon$ at which two points are considered reachable and the minimum number of points $minPts$ needed to form a cluster. A point $P_n$ is reachable from a point $P_1$ if there is a sequence of points $P_1, P_2, ..., P_n$ in which each $P_i$ is within $\varepsilon$ of $P_{i-1}$. After experimenting with different values of these two parameters, we set $\varepsilon$ to 200 meters and $minPts$ to 2, and we take the center of the cluster with the largest number of points as the user's home location. Some users moved house during the study period, but they account for a small proportion and do not affect our results; we keep both locations for these users.
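This clustering step can be sketched with a minimal, self-contained DBSCAN over planar coordinates in metres. The function names and the planar-distance assumption are ours, not from the original pipeline; the parameter values match the text ($\varepsilon$ = 200 m, $minPts$ = 2).

```python
from math import hypot

def dbscan(points, eps=200.0, min_pts=2):
    """Minimal DBSCAN (Ester et al., 1996) over planar points in metres.

    Returns one label per point: a cluster id >= 0, or -1 for noise.
    Illustrative sketch only, O(n^2) without a spatial index.
    """
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (x, y) in enumerate(points)
                if hypot(x - xi, y - yi) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1           # noise (may become a border point later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point of this cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)   # core point: expand the cluster
    return labels

def home_location(points, eps=200.0, min_pts=2):
    """Centre of the largest cluster = estimated home location."""
    labels = dbscan(points, eps, min_pts)
    best, size = None, 0
    for c in set(labels) - {-1}:
        members = [p for p, l in zip(points, labels) if l == c]
        if len(members) > size:
            best, size = members, len(members)
    if best is None:
        return None
    return (sum(x for x, _ in best) / size, sum(y for _, y in best) / size)
```

With $minPts$ = 2 the procedure effectively links any points within 200 m of each other into one cluster, so the "home" is the centroid of the densest nightly location.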
Our study attempts to discover the spatial distribution of vacant housing areas, so knowing the exact locations of residential areas is crucial. We use POI data from Baidu Map, which are of high reliability and quality, selecting POIs in the residential-area and villa categories. We delete residential-area POIs within one kilometer of villas, since population density near villas is very low. A residential area may also have been built so recently that few people live there yet; we remove such cases based on the resident population, as detailed below.
There are two difficulties in discovering vacant housing areas from positioning points and POIs. First, a residential-area POI is a point, while a residential district is an area, and the POI is not always located at the district's center. Second, residential districts differ in size, that is, in length and width. If we had residential-area polygons instead of points, we could count the exact population of each residential area directly. Such data may be available for some residential areas in a few big cities, but vacant housing areas are usually small in big cities, and for the cities with large vacant housing areas no such data exist. To solve this problem, we use 100 m * 100 m grids as the analysis units. As shown in Figure 1, for each residential-area POI (point A) we take the 5 * 5 block of grids with the POI in the center grid and select the six most populated grids. If their summed population is less than 300, we define the residential area as a vacant housing area; the choice of 300 is explained below. We also require the sum to exceed 60 in order to exclude recently built residential areas where few people yet live.
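The grid rule just described can be sketched as follows. The helper names and the planar metre coordinates are illustrative assumptions; the thresholds (60 and 300) and the 5 * 5 window over 100 m cells follow the text.

```python
from collections import Counter

GRID = 100  # cell edge length in metres

def grid_counts(homes):
    """Bin estimated home locations (x, y in metres) into 100 m cells."""
    return Counter((int(x // GRID), int(y // GRID)) for x, y in homes)

def classify_poi(poi, counts, low=60, high=300):
    """Classify a residential-area POI with the paper's rule:
    take the 5 x 5 cells centred on the POI's cell, sum the six most
    populated cells, and call the area vacant if low <= sum < high.
    Returns 'vacant', 'occupied', or 'too_new' (sum < low, likely just built).
    """
    cx, cy = int(poi[0] // GRID), int(poi[1] // GRID)
    window = sorted((counts.get((cx + dx, cy + dy), 0)
                     for dx in range(-2, 3) for dy in range(-2, 3)),
                    reverse=True)
    top6 = sum(window[:6])
    if top6 < low:
        return "too_new"
    return "vacant" if top6 < high else "occupied"
```

Taking only the six most populated cells, rather than all 25, is what makes the rule robust to a POI sitting at the edge or vertex of its residential area.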
Fig. 1 Study area generated from residential-area POI A; residential-area points B and C are discussed in the text. Each grid is 100 m * 100 m.
The edge length of a residential area is between 300 and 500 meters on average, and a residential-area POI is not always located at the center of its residential area: it may lie at the edge (point B in Fig. 1) or, in the extreme case, at a vertex (point C in Fig. 1). In that extreme case only a quarter of the 500 m * 500 m block covers the residential area, and the remaining three quarters may be non-residential. If we summed the population over the entire 500 m * 500 m block, results for cases like points B and C in Figure 1 would be distorted. This is why we select only the six grids with the most population.
The average floor area ratio in China is 1 (An, 2015) and the average living area is 30 m$^2$ per person (Ruan, 2010), so a 100 m * 100 m grid can hold about 333 people. Our dataset contains 770 million users, while China has a population of 1.36 billion, so on average such a grid holds about 188 Baidu users. We define a residential area as vacant if its population falls below one quarter of this standard: if the six most populated grids together hold fewer than 300 users (roughly one quarter of 6 * 188 ≈ 1128), the area is classified as a vacant housing area.
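The arithmetic behind these thresholds can be checked directly. The figures are taken from the text; rounding the resulting ~283 up to 300 is the study's choice.

```python
# Derivation of the 300-user vacancy threshold (figures from the text).
floor_area_ratio = 1.0       # average in China (An, 2015)
living_area = 30.0           # average living area per person, m^2 (Ruan, 2010)
grid_area = 100 * 100        # one 100 m * 100 m analysis cell, m^2

people_per_grid = grid_area * floor_area_ratio / living_area  # ~333 people
baidu_share = 770e6 / 1.36e9          # dataset users / national population
users_per_grid = people_per_grid * baidu_share                # ~188 users

# Six grids at full occupancy, vacant if below one quarter of that:
threshold = 6 * users_per_grid / 4    # ~283, rounded up to 300 in the study
```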
Note that the six most populated grids may lie in a neighboring residential area rather than the one under study, so our method may miss some vacant housing areas. However, vacant housing areas are normally spatially clustered: even if one residential area is missed, its neighbors will be discovered. This special case therefore does not introduce much noise into the results.
To identify tourism sites among the areas with large vacant housing, we use positioning data in the vacant housing areas around China's National Day (September 29 and October 2, 2014), New Year's Day (December 30, 2014 and January 2, 2015), and International Workers' Day (April 29 and May 2, 2015). By comparing the population before and during each holiday, we can judge whether a vacant housing area serves tourism: if its population increases during the holidays, it is classified as a tourism area.
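A sketch of this holiday comparison is below. The function name, date strings and the simple mean comparison are illustrative assumptions; the text does not specify the exact statistic used.

```python
def is_tourism_area(daily_counts, pre_days, holiday_days):
    """Label a vacant housing area as tourism-driven if its mean
    population during a holiday exceeds the mean just before it.

    daily_counts: dict mapping 'YYYY-MM-DD' -> user count in the area.
    Date pairs follow the text, e.g. pre '2014-09-29', holiday '2014-10-02'.
    """
    pre = sum(daily_counts[d] for d in pre_days) / len(pre_days)
    hol = sum(daily_counts[d] for d in holiday_days) / len(holiday_days)
    return hol > pre
```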
**Results**
We count the total number of vacant-housing-area POIs in each county; counting the total area of vacant housing instead yields similar results. We select 20 cities with large vacant housing areas. The result is shown in Figure 2 and can be browsed interactively at http://bdi.baidu.com/ghostcity/. This study presents the spatial distribution of vacant housing areas. Note that the order of these 20 cities shown here is not their actual rank, although all rank within the top 50. We do not show the actual rank because it is sensitive information that could affect real estate sales.
Fig. 2 Spatial distribution of 20 cities with a large vacant housing area in China.
From a macro view, cities with a large vacant housing area are mostly second- and third-tier cities (city tiers in China are determined by income, education, transportation, technology and other indicators), and eastern provinces have a higher proportion of such cities. From a micro view, vacant housing areas are mostly distributed on city peripheries or in new towns. As shown in Figure 3A, the 500 m * 500 m block covers only part of the residential area, yet the result already shows that it is a vacant housing area. Figure 3B shows two residential areas identified as vacant housing; the position of the lower-left one relative to its POI resembles point C in Figure 1, and our algorithm correctly identifies it.
We present the spatial distribution for the first 8 cities plus Dongling District (Liaoning Province) in Figure 4; all 9 have been reported by media for their large vacant housing areas. In Figure 4A, the vacant housing areas in Rushan are tightly clustered close to the sea; media have reported that Rushan built too many sea-view vacation houses, so its vacancy rate is high except in summer. Figures 4B and C show the spatial distribution of vacant housing in Ordos, well known as a "ghost city," where vacant housing areas are mostly located on the city's periphery; the area in Figure 4H is near that in Figure 4C and shows a similar vacancy pattern. Figure 4D is Binhai New Area, once called "the third growth pole of China's economy," which nevertheless contains many unfinished buildings. For Rugao, Dongying, and Xinghua (Fig. 4E, F, G), reports on the Internet and in the media support the finding that these cities contain large vacant housing areas. Media have also reported that the Shenyang National Games Village has become a "ghost town" (Miller & Chow, 2015); our result confirms this report and shows the actual spatial distribution of its vacant housing areas (Fig. 4I).
We also detect which of these cities are tourism sites. Many are located near the sea, such as Rushan (Shandong Province), Rugao (Jiangsu Province), Xinghua (Jiangsu Province), Wenchang (Hainan Province), Donggang (Shandong Province), and Qionghai (Hainan Province). Two others, Zhuji (Zhejiang Province) and Xiaoshan (Zhejiang Province), also have popular tourism resources.
Fig. 3 Illustration of discovered vacant residential areas. (A: one residential area; B: two residential areas.)
**Case Study**
To examine the real situation of cities with a large vacant housing area and the reasons they are called "ghost cities," we select Rushan (a tourism site) and Kangbashi (a city) as cases and analyze population change, home-work separation, and human migration. Both are well known for their large vacant housing areas. Kangbashi is a new district that once belonged to Dongsheng; the Ordos government moved from Dongsheng to Kangbashi in 2006 to accelerate its development. The region contains abundant coal and other natural resources, which boosted economic development quickly. As it grew richer, the government ambitiously started building a new city. Much capital was invested in residential real estate as an investment rather than to meet housing demand, so the housing vacancy ratio is very high. For more information, see Woodworth (2015).
Rushan is located in Shandong Province. It has a 21-kilometer coastline with a beautiful sea view, known as Rushan Yintan ("silver beach"). Most of its residential real estate consists of seasonal houses: many people bought a house there for vacation. Every summer a large number of tourists visit, and after the summer the population decreases dramatically. For these two different types of cities with a large vacant housing area, we attempt to characterize their human dynamics.
Population Dynamics
We count the population in these two cities each day and show the result in Figure 5. Because the number of Baidu users is increasing, we must remove this effect to reveal the real population change in each city. In this study, we use the population in each city divided by the total number of Baidu users in China as the population indicator, and then normalize the values for the two cities. Kangbashi shows a clear weekly cycle of population change, while Rushan does not. This reflects their functional difference: Rushan is a tourist site, so its visitors do not follow a work-related cycle. On National Day, Rushan has its largest population of the whole year; the population in Kangbashi decreases, while in Rushan it increases. At Chinese New Year, Kangbashi has its fewest people, whereas Rushan's population reaches its minimum at the beginning of February rather than at Chinese New Year: the population in Rushan does not increase significantly then because the weather is cold and few people visit for vacation. At the Qingming Festival (2015/4/5), the population in Kangbashi decreases significantly, while in Rushan it increases slightly. These differences arise because Rushan is a tourist site and Kangbashi is not. From September 2014 to April 2015, the population in these two areas with large vacant housing did not increase, showing no trend of attracting more people. However, Rome was not built in a day, and neither are new cities in China: constructing a new city is easy, while making it functional requires a long-term endeavor (Shepard, 2015).
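The correction for the growing user base can be sketched in a few lines. The Baidu counts are not public, so the arrays below are hypothetical and only illustrate the normalization step:

```python
import numpy as np

def normalized_population(city_counts, national_counts):
    """Correct daily city counts for user-base growth by dividing by
    the national total, then min-max normalize so two cities can be
    plotted on the same axis (as in Figure 5)."""
    city = np.asarray(city_counts, dtype=float)
    nation = np.asarray(national_counts, dtype=float)
    share = city / nation                                  # remove user-growth effect
    return (share - share.min()) / (share.max() - share.min())

# Toy example: a city's share of national users rises over four days
idx = normalized_population([10, 15, 20, 25], [1000, 1100, 1200, 1300])
```

The normalized index starts at 0 and ends at 1 by construction, so only the shape of the curve, not the absolute user count, is compared between cities.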

Figure 6 shows the population change in each hour of one week, from 2015/3/23 to 2015/3/29. Similar to results from mobile phone data (Kang et al., 2012), the largest populations appear at 12:00 and 20:00. The two peaks are of similar height in Kangbashi, while in Rushan the 20:00 peak is higher. On weekends, the population in Kangbashi decreases, while in Rushan it remains stable.
Home-work Analysis
Home-work separation is a common phenomenon in modern city life, meaning that a considerable distance exists between a person's home and work place (Poston, 1972). This study attempts to understand the home-work separation situation in "ghost cities"; the results can help the government learn about the real situation and plan accordingly. We used the DBSCAN algorithm to identify each person's work place from their positioning points between 9:00am and 6:00pm.
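A minimal sketch of this work-place estimation using scikit-learn's DBSCAN. The `eps` and `min_samples` values are our own illustrative assumptions, since the paper does not report its clustering parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_workplace(daytime_points, eps=0.002, min_samples=5):
    """Estimate a work place as the centroid of the densest DBSCAN
    cluster of a user's 9:00am-6:00pm positioning points [lon, lat].
    eps is in degrees here; a haversine metric would be more exact."""
    pts = np.asarray(daytime_points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    core = labels[labels != -1]                  # discard noise points
    if core.size == 0:
        return None                              # no stable work place found
    densest = np.bincount(core).argmax()         # pick the largest cluster
    return pts[labels == densest].mean(axis=0)   # return its centroid

# Six tightly grouped daytime points plus one outlier far away
pts = [[109.780 + 0.0001 * i, 39.600] for i in range(6)] + [[110.5, 40.0]]
work = estimate_workplace(pts)
```

Taking the largest daytime cluster as the work place is a common heuristic; the outlier is discarded as DBSCAN noise rather than pulling the centroid off the true location.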
We adopted the algorithm designed by Eric Fischer (Fischer, 2013) to visualize the spatial distribution of people's home and work places. This algorithm assigns a lightness value to each pixel based on the number of points falling in it. As shown in Figure 7a, most work places are located close to the Kangbashi government seat. In Rushan Yintan, the numbers of home and work places are very small compared with downtown Rushan, since it is a tourist area (Fig. 7b). The Rushan government has recognized the scarcity of work opportunities in Rushan Yintan and begun attracting companies to increase employment.
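The pixel-lightness rendering can be approximated with a 2-D histogram. This is a simplified stand-in for Fischer's renderer, with a log scale added by us (an assumption, not a detail from the paper) to keep sparse pixels visible next to very dense ones:

```python
import numpy as np

def dot_density_lightness(lons, lats, width=512, height=512):
    """Rasterize points into a pixel grid and map point counts to
    lightness (0 = empty pixel, 1 = densest pixel)."""
    counts, _, _ = np.histogram2d(lons, lats, bins=(width, height))
    light = np.log1p(counts)                 # log scale for visibility
    return light / light.max() if light.max() > 0 else light

# Three points in one corner, one in the other, on a 2x2 grid
img = dot_density_lightness([0, 0, 0, 1], [0, 0, 0, 1], width=2, height=2)
```

Each cell of `img` is ready to use as a grayscale lightness channel; the denser corner renders brighter than the single-point corner.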
We computed the proportion of people who live or work in Kangbashi and Rushan Yintan (Table 1). It is well established that people who work and live in the same city make up the largest share (Shi et al., 2015), and administrative boundaries strongly shape human migration across a territory (Chi et al., 2014). The number of people who live in Dongsheng and work in Kangbashi is twice the number who live in Kangbashi and work in Dongsheng: Dongsheng has better infrastructure, so people prefer to live there and commute to Kangbashi. This suggests that Kangbashi should create more job opportunities and improve its infrastructure. A new city being beautiful and modern is not enough to attract migrants; jobs, industry, entertainment, health, and education systems should be functioning before many people will move to a new city (Shepard, 2015). These two cases show that vacant housing areas usually offer limited job opportunities and that basic infrastructure should be improved to satisfy people's needs.
Fig. 7 The spatial distribution of home and work place (Red points denote home. Blue points are work places.)
Table 1 Home and work place distribution in Rushan and Kangbashi

| Home          | Work place      | Proportion (%) |
|---------------|-----------------|----------------|
| Kangbashi     | Kangbashi       |                |
| Kangbashi     | Dongsheng       | 8.2            |
| Rushan Yintan | Rushan Yintan   | 33.5           |
| Rushan Yintan | Rushan downtown | 8.1            |
Human Migration
Human migration reflects urban vitality. A spatio-temporal trajectory is composed of a person's positioning points ordered by time. In this section, we attempt to discover city interactions caused by human migration: if a user moves from city $a$ to city $b$, we say that cities $a$ and $b$ have an interaction. When calculating migration, it is unreasonable to count a migration between two cities whenever a user appears in both, because he/she may pass through intermediate cities between the origin and the destination. For example, as shown in Table 2, a person moves from Guangzhou to Shanghai on Nov. 1st, 2014, passing through Changsha, Nanchang, and Hangzhou before arriving in Shanghai. It would be wrong to conclude that Guangzhou and Changsha have an interaction. The difficult part of calculating migration is removing these pass-by cities. The frequency with which people expose positioning points is not stable, and the number of points recorded in pass-by cities may exceed the number recorded at the destination, which makes the calculation harder. Meanwhile, the large size of our dataset requires a highly efficient algorithm.
This study assumes that a migration lasts for more than one day; we ignore cases where people return within one day. We define a person's trajectory as $\{t_1, \text{city}_1\}, \{t_2, \text{city}_2\}, \ldots, \{t_n, \text{city}_n\}$, where $t_n$ is the time of the $n$th positioning point and $\text{city}_n$ is its city. We first take the city of the first positioning point of each day as the person's location for that day. If $\text{city}_n$ is the same as $\text{city}_{n-1}$, we merge the two records; otherwise, we record a migration from $\text{city}_{n-1}$ to $\text{city}_n$.
Table 2 A sample of a person’s trajectory
| Province | City | Time |
|----------|--------|---------------|
| Guangdong| Guangzhou | 2014/11/1 10:30:01 |
| Hunan | Changsha | 2014/11/1 12:50:01 |
| Jiangxi | Nanchang | 2014/11/1 14:52:01 |
| Zhejiang | Hangzhou | 2014/11/1 18:30:01 |
| Shanghai | Shanghai | 2014/11/1 19:31:01 |
| Shanghai | Shanghai | 2014/11/2 09:21:01 |
| Shanghai | Shanghai | 2014/11/3 08:13:01 |
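The first-point-per-day rule above, applied to the sample trajectory of Table 2, can be sketched as follows (a pure-Python illustration, not the paper's production code):

```python
from collections import OrderedDict

def extract_migrations(trajectory):
    """Derive city-to-city migrations from (timestamp, city) records:
    keep only the first record of each day, then treat every change
    between consecutive days as one migration.  This drops pass-by
    cities (Changsha, Nanchang, Hangzhou below) and ignores
    same-day round trips, as required by the rule in the text."""
    daily = OrderedDict()
    for ts, city in sorted(trajectory):    # zero-padded timestamps sort chronologically
        day = ts[:10]                      # 'YYYY/MM/DD'
        if day not in daily:               # first positioning point of the day
            daily[day] = city
    locations = list(daily.values())
    return [(a, b) for a, b in zip(locations, locations[1:]) if a != b]

traj = [
    ("2014/11/01 10:30:01", "Guangzhou"),
    ("2014/11/01 12:50:01", "Changsha"),
    ("2014/11/01 14:52:01", "Nanchang"),
    ("2014/11/01 18:30:01", "Hangzhou"),
    ("2014/11/01 19:31:01", "Shanghai"),
    ("2014/11/02 09:21:01", "Shanghai"),
    ("2014/11/03 08:13:01", "Shanghai"),
]
moves = extract_migrations(traj)   # [('Guangzhou', 'Shanghai')]
```

Only one migration, Guangzhou to Shanghai, survives: the three pass-by cities never appear as a first-of-day location, and the consecutive Shanghai days merge into one stay.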
Figure 8 shows the human migration results for these two cities. As expected, inflow and outflow increase during holidays, and the migration population grows as the number of Baidu users grows. During National Day, a peak of population inflow appears first in Rushan, followed by a peak of outflow, whereas in Kangbashi the inflow and outflow peaks appear almost simultaneously. In these two "ghost cities", we did not observe an increase in net population flow, meaning that they did not attract more residents. Beyond the change in migration population, we also measured the interaction intensity between cities, i.e., the proportion of migrants from/to each city. Figure 9 shows the location and proportion of the migration population during National Day; the largest source and sink for Kangbashi and Rushan are Yulin and Weihai, respectively.
a. Kangbashi
b. Rushan
Fig. 8 Population change of migration
Conclusion
China's fast urbanization has produced far more housing than actual demand. Whether the so-called "ghost cities" reported by the media truly have high housing vacancy has long been disputed for lack of data. To the best of our knowledge, this is the first study to use big data to analyze the real situation of "ghost cities" in China. The national spatial scale, long temporal span, and high precision of Baidu big data make this study representative and reliable. Instead of merely counting homes with lights on at night in selected residential areas as an indicator of a "ghost city", Baidu big data can count population precisely, in real time, and at national scale. A limitation of the data is that it cannot represent the true demography of a city, because not all people are Baidu users; however, with the ubiquity of smartphones, Baidu users account for a large proportion of the population. Moreover, the quality of residential-area POIs affects our results, so we applied a series of processing steps to ensure the POIs are reliable. Baidu big data thus provides an opportunity to understand objectively the status, and even the causes, of "ghost cities."
Based on Baidu positioning data and residential-area POI data, we designed an algorithm to discover vacant housing areas. The results reveal the specific locations of these areas, which can help governments make smarter and more reasonable decisions, and they document the real situation of the so-called "ghost cities" in China. Cities with a large vacant housing area are mostly second- and third-tier cities (city tiers in China are determined by income, education, transportation, technology, and other indicators). Eastern provinces have a higher proportion of cities with vacant housing areas. We also distinguished tourism sites from ordinary cities. Finally, based on the Baidu positioning data, we characterized the human dynamics in cities with large vacant housing areas to better understand the situation in "ghost cities."
Acknowledgements
The authors would like to thank their colleagues in the Big Data Lab, Zhengxue Li for his data visualization and web design and Wei Jia for his help in pre-processing the data, as well as Geography Librarian Amanda Hornby at the University of Washington for her insightful comments.
References
An, B. (2015). Thinking rationally about the management of vacant housing rate. May 9. <http://comments.caijing.com.cn/20150506/3876126.shtml>.
Chen, Q. (2014). Will the number of ghost cities continue increasing? November 10. <http://www.zhihu.com/question/26193868/answer/33116107>.
Chen, Y. (2015). Chasing ghosts: Where is China's next wave of empty new towns? February 13. <http://multimedia.scmp.com/china-ghost-towns/>.
Chi, G., Thill, J. C., Tong, D., Shi, L., & Liu, Y. (2014). Uncovering regional characteristics from mobile phone data: A network science approach. Papers in Regional Science.
Su, X. (2014). Rank of ghost cities in 2014. October 12. <http://house.ifeng.com/detail/2014_10_12/50060123_0.shtml>.
Ester, M., Kriegel, H. P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Kdd, 96(34), 226-231.
Fischer, E. (2013). Mapping Millions of Dots. <https://www.mapbox.com/blog/mapping-millions-of-dots/>.
House China. (2015). China's vacant housing rate is 20%. 12 "ghost cities" appeared last year. May 4. <http://house.china.com.cn/wuxi/view/783741-all.htm>.
Kang, C., Liu, Y., Ma, X., & Wu, L. (2012). Towards estimating urban population distributions from mobile call data. Journal of Urban Technology, 19(4), 3-21.
Liu, Y., et al. (2015). Social sensing: A new approach to understanding our socioeconomic environments. Annals of the Association of American Geographers, ahead-of-print, 1-19.
MacCarthy, N. (2014). China Used More Concrete In 3 Years Than The U.S. Used In The Entire 20th Century. December 5. <http://www.forbes.com/sites/niallmccarthy/2014/12/05/china-used-more-concrete-in-3-years-than-the-u-s-used-in-the-entire-20th-century-infographic/>.
Miller, D. & Chow, E. (2015). Inside China's latest 'ghost town': Huge urban centre built to hold the National Games for 13 DAYS now looks like the set of a zombie horror film. June 17. <http://www.dailymail.co.uk/news/peoplesdaily/article-3128561/China-s-ghost-city-Huge-urban-centre-built-hold-National-Games-looks-like-set-zombie-horror-film.html>.
Nie, X., & Liu, X. (2013). Types of "Ghost Towns" in the process of urbanization and countermeasures. Journal of Nantong University (Social Sciences Edition), 29(4), 111-117.
Poston, D. L. (1972). Socioeconomic status and work-residence separation in metropolitan America. Pacific Sociological Review, 367-380.
Rajagopalan, M. (2014). China to restrict expansion of cities in move to curb 'ghost towns'. September 26. <http://www.reuters.com/article/2014/09/26/us-china-cities-idUSKCN0HL1CB20140926>.
Ruan, Y. (2010). Ministry of Housing and Urban-Rural Development of the People's Republic of China: The average living area is 30 m² in China. December 30. <http://finance.people.com.cn/GB/13618045.html>.
Ryan, V. (2013). Li Keqiang warns of urbanisation risks in first speech as premier. March 18. <http://www.scmp.com/news/china/article/1193244/li-keqiang-warns-urbanisation-risks-first-speech-premier>.
Shepard, W. (2015). Ghost Cities of China. Zed Books.
Shi, L., Chi, G., Liu, X., & Liu, Y. (2015). Human mobility patterns in different communities: a mobile phone data-based social network approach. Annals of GIS, 21(1), 15-26.
Woodworth, D. (2015). Ordos Municipality: A market-era resource boomtown. Cities, 43, 115-132.
Xue, C. Q., Wang, Y., & Tsai, L. (2013). Building new towns in China—A case study of Zhengdong New District. Cities, 30, 223-232. |
A bootstrap method based on linear regression to estimate COVID-19 Ecological Risk in Catalonia
Nicolas Ayala-Aldana ¹,*, Antonio Monleon-Getino ¹, Jaume Canela-Soler ², Petia Radeva ³ and Javier Rodenas ³
¹ Department of Genetics, Microbiology, and Statistics, Section of Statistics, Faculty of Biology, University of Barcelona, Barcelona, Spain.
² Department of Clinical Foundations, School of Medicine and Health Sciences, University of Barcelona, Barcelona, Spain
³ Department of Mathematics and Computer Science, Faculty of Mathematics and Computer Science, University of Barcelona, Spain.
World Journal of Advanced Research and Reviews, 2023, 17(01), 324–332
Publication history: Received on 02 December 2022; revised on 09 January 2023; accepted on 12 January 2023
Article DOI: https://doi.org/10.30574/wjarr.2023.17.1.0047
Abstract
Background: SARS-CoV-2 is a new type of coronavirus that causes COVID-19 and has affected the entire planet. Despite the widespread use of ecologic analysis in epidemiologic research and health planning, health scientists and practitioners have given little attention to the methodological aspects of this approach. The study of risk factors linked to the COVID-19 pandemic is one of the most current and important topics for epidemiologists, and in many cases these risks are unknown. This research studies risk factors for COVID-19 and proposes an ecologic method, well known to epidemiologists, for aggregated data. The present study aims to build a model that makes it easy to calculate the risk of infection in different types of populations, using aggregated data to approximate an individual's risk of COVID-19 transmission.
Methods: The case of Catalonia, Spain, is presented as an example, as it is one of the areas where the incidence of the virus has been highest. The proposed method is an ecological study based on a statistical regression model between the incidence (or a variable that represents it) and the risk factors, using aggregated data to obtain a risk ratio (RR).
Results: The results make it possible to estimate the risk of contracting COVID-19 with respect to high family income (RR=1.157491), greater mobility (RR=1.065475), and high population density (RR=1.000002).
Conclusions: This method could be used to design an app that predicts how the risk will evolve and calculate the risk of contagion in one area or another to take the proper action. The calculated RR can help us to understand how the variables become risks or protective factors at an ecological level (understanding aggregate data).
Keywords: COVID-19; Outbreak; Epidemic dynamics; Modelling; Relative risk
1. Introduction
SARS-CoV-2 is a new type of coronavirus (a broad family of viruses that normally affect only animals) that can infect people and causes COVID-19. It was detected for the first time in December 2019 in the city of Wuhan (China). Coronaviruses produce clinical conditions ranging from the common cold to more serious diseases, for example the coronavirus that caused severe acute respiratory syndrome (SARS-CoV) a few years ago and the coronavirus that causes Middle East respiratory syndrome (MERS-CoV) (1,2).
Mathematical models in epidemiology are useful to compute incidence and prevalence and to estimate the consequences for the population affected by COVID-19. These models use simple and multivariate linear regression, Bayesian statistics, and deterministic and stochastic models. Such predictions are difficult to manage but useful in particular cases, for example to know whether outbreaks will appear in one place or another in the territory studied, or what the consequence of applying a given measure will be.
That is why it is necessary to explore other models that approximate the risk of individual contagion, not only through the prevalence and incidence of those affected, but also through the associated risk factors, such as a person's mobility, characteristics of the studied area (wealth, population density, etc.), and many others. Calculating a probability of contagion is not easy, and multiple attempts have been made to obtain and simplify it, since many factors determine whether a person becomes infected (incidence, prevalence, the person's social contacts, associated pathologies, age, etc.).
We propose calculating the risk of contagion of COVID-19 from aggregated data using an ecological-study approach, as proposed by Morgenstern (4), Beral et al. (5), and Silcocks et al. (6). Ecologic studies are empirical investigations in which the group is the unit of analysis: disease rates and exposures are measured in each of a series of populations and their relation is examined, as studied extensively in works such as Morgenstern's "Uses of Ecologic Analysis in Epidemiologic Research" (1982) (4). Frequently, the information about disease and exposure is abstracted from published statistics and therefore does not require expensive or time-consuming data collection.
In ecological studies, the group is usually chosen so that it belongs to a geographically defined area (e.g., city, region, country). This eases data collection, because statistics can be obtained by combining existing data files on large populations, so ecologic studies are generally less expensive and faster than studies that take the individual as the unit of analysis. They allow relationships to be established between risk factors (geographical comparisons, time trends, migrant populations, social class) and the disease or dependent variable of the study. On the other hand, data on many variables (e.g., clinical histories, laboratory analyses, questionnaires) may not be available at the ecologic level, and the results of ecologic analyses are subject to limitations not applicable to many other study designs (e.g., clinical trials, cohort studies). One of the best-known limitations is the ecological fallacy, in which conclusions about individuals are inappropriately inferred from aggregate results: it is erroneously concluded that the statistical correlation between two aggregate variables equals the correlation between the corresponding variables at the individual level (7,8).
**Objective**
The present study aims to build a model that makes it easy to calculate the risk of infection in Catalonia (Spain). We model the incidence as a function of mobility, population density, and other factors to determine the relative ecological risk, for the population and for an individual, of contracting COVID-19, using methods of ecological epidemiology.
**2. Material and methods**
**2.1. Statistical data**
The data used to feed the model was obtained from different public sources such as:
IDESCAT (Institute of Statistics of Catalonia): https://www.idescat.cat/pub/?id=aec&n=250&lang=es; https://www.idescat.cat/emex/?id=080193#h40000000, Generalitat of Catalonia (Catalonia Government): http://aquas.gencat.cat/ca/actualitat/ultimes-dades-coronavirus/mapa-per-abs/, Municipal data from environmental pollution stations: https://analisi.transparenciacatalunya.cat/Medi-Ambient/Dades-d-immissi-dels-punts-de-mesurament-de-la-Xar/uy6k-2s8r/data, etc.
The data collection and the epidemiological models can be examined alongside the study code at https://github.com/nicolasayala-aldana/Model-Risk. The aggregated data consist of 233 health (sanitary) areas as rows, compiled into a spreadsheet from these databases, with the following variables as columns:
• Health_area: Part of the municipality where the incidence of COVID19 has been noted. It is related to a sanitary area where a certain number of inhabitants is controlled.
• Municipality: Municipality or village of Catalonia, Spain.
• Positive_cases: Total positive cases.
• Suspected_cases: Total suspected cases.
• Raw_rate_10k: Raw rate per 10,000 inhabitants.
• Standard_rate_10k: Standardized crude rate per 10,000 inhabitants.
• Insured_people: Insured people in the area.
• Surface_km2: Area occupied by the health area in km$^2$.
• Density: Population density by km$^2$.
• NOX_reduction_january2020: Reduction of the percentage of NOx in January 2020 compared to the previous 3 years.
• NOX_reduction_february2020: Reduction of the percentage of NOx in February 2020 compared to the previous 3 years.
• NOX_reduction_march2020: Reduction of the percentage of NOx in March 2020 compared to the previous 3 years.
• NOX_reduction_april2020: Reduction of the percentage of NOx in April 2020 compared to the previous 3 years.
• NOX_reduction_may2020: Reduction of the percentage of NOx in May 2020 compared to the previous 3 years.
• Income_euros: Family income (thousands of euros). Catalan acronym called “RFBD”
• Income_euros_inhab: Family income per inhabitant (thousands of euros). Catalan acronym called “RFBD per inhabitant”.
• Income_index_100: Family income per inhabitant (index between 0-100%) referred to the whole of Catalonia.
At the time of writing, we used data available up to mid-May 2020, when the daily case curve reached its inflection point and stopped growing. In many cases complete information was not available: some values were missing, and not all municipalities had statistical or environmental data. The pollution information (NOx reduction) was used to estimate mobility indirectly: we assume that a greater reduction in pollution implies less mobility, since fewer vehicles emit less NOx. Our target variable (Y) is the standardized crude COVID-19 rate per 10,000 inhabitants in the different health areas.
2.2. Relative risk (RR) estimation using aggregated data of current epidemiological ecological study
As previously mentioned, it is very difficult to calculate the individual risk of COVID-19 infection in a heterogeneous population (countryside and city, high and low density of inhabitants, rich and poor areas, etc.) in a territory as extensive as Catalonia (32,108 km$^2$). An indirect approach, drawing on the ideas of ecological studies in epidemiology, was therefore devised. One way to approximate this probability is with the Relative Risk (RR).
In mathematical terms, the RR is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group (9). It is computed as:
$$RR = \frac{\text{Risk in exposed}}{\text{Risk in nonexposed}}$$
$RR$ measures the association between exposure and the outcome. Relative risk can be estimated from a 2x2 contingency table in a clinical or epidemiological study (Table 1)
**Table 1 Calculation of relative risk in epidemiology**
| Group       | Disease develops | Disease does not develop |
|-------------|------------------|--------------------------|
| Exposed     | a                | b                        |
| Non-exposed | c                | d                        |
Where $a$, $b$, $c$ and $d$ are the cell frequencies for each group-event combination. The point estimate of the relative risk ($RR$) is:

$$RR = \frac{a/(a+b)}{c/(c+d)}$$
Thus, different factors of interest that may be related to the risk of COVID-19 infection can be studied, such as people's mobility, population density, family income, etc. These factors are reasonably easy to obtain, but the epidemiology of COVID-19 is complex, and individual-level risk data for inhabitants are not available. We therefore use the crude rate per 10,000 inhabitants as the dependent variable of the model (Y). The variables used as regressors (independent terms of the model, X) are aggregate, quantitative variables (e.g., population density per city, family income per area, mobility per area).
Different authors suggest that the $RR$ (relative risk, or risk ratio) can be estimated from linear regression models in ecological studies as $RR = 1 + (\text{slope}/\text{intercept})$ (4,5). This is an indirect and inexpensive method for estimating the relative risk of disease in exposed versus non-exposed people. Even though confounding factors can be a problem, it is feasible in exploratory designs (6).
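The $RR = 1 + \text{slope}/\text{intercept}$ estimator can be computed directly from a fitted simple regression. The data below are synthetic, chosen only to make the arithmetic transparent:

```python
import numpy as np

def ecological_rr(x, y):
    """Morgenstern's ecological RR from a simple linear regression of
    an aggregated rate (y) on an aggregated exposure (x):
    RR = 1 + slope/intercept."""
    slope, intercept = np.polyfit(x, y, 1)   # degree-1 fit: [slope, intercept]
    return 1.0 + slope / intercept

# Toy data: rate = 8 + 2*exposure, so RR = 1 + 2/8 = 1.25
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 12.0, 14.0, 16.0])
rr = ecological_rr(x, y)   # 1.25
```

With a positive slope the estimated RR exceeds 1, i.e., the exposure behaves as an ecological risk factor rather than a protective one.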
The calculation of the relative ecological risk approximation for COVID-19 from the aggregated data is presented below, following Morgenstern (4) and using a simple or multiple linear regression model to estimate RR. The level of aggregation is the municipality/health area defined by the Health System of the Generalitat de Catalunya (see the data description).
The linear regression model was used to estimate RR, following Morgenstern (4): the distribution of a single response variable ($Y$ = standardized crude rate per 10,000 inhabitants) is related to several explanatory variables $X_1, X_2, \ldots$ by the regression model (Monleon, 2017) (10):
$$Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \varepsilon_i$$
where $\beta_0$ is the intercept, $\beta_1, \ldots, \beta_p$ are the model coefficients, and $\varepsilon_i$ is the error term, with $\varepsilon_i \sim N(0, \sigma)$: the errors have mean zero and capture the residual variability.
Independent variables used in the linear models ($X_i$) are:
- Density of population: Inhabitants / km$^2$
- Reduction % NOx for March 2020 vs 2017-2019
- Reduction % NOx for April 2020 vs 2017-2019
- Reduction % NOx for May 2020 vs 2017-2019
- Family income per inhabitant (thousands of euros)
First, a univariate regression model (see equation 2) with a single $X_i$ was calculated. RR was computed using:
$$RR = 1 + \frac{\beta_1}{\beta_0}$$
In a second step, a regression model with several $X_i$ and their interactions $X_i X_j$ was tested. In this case, the significance level was set to 0.001; we chose this demanding alpha because interactions should be harder to accept.
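A sketch of the bootstrap RR estimation noted in the footnote of Table 2 (1000 iterations, n = 200 resampled with replacement). The health-area data here are synthetic, since the point is the resampling scheme rather than the paper's exact numbers:

```python
import numpy as np

def bootstrap_rr_ci(x, y, n_boot=1000, n_sample=200, seed=42):
    """Bootstrap the ecological RR = 1 + slope/intercept: resample
    (x, y) pairs with replacement, refit the regression, and report
    the mean and a 95% percentile interval of the RR estimates."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    rrs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=n_sample)   # resample with replacement
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        rrs.append(1.0 + slope / intercept)
    rrs = np.array(rrs)
    return np.percentile(rrs, 2.5), rrs.mean(), np.percentile(rrs, 97.5)

# Synthetic health areas: rate = 20 + 3*income + noise, so RR near 1.15
rng = np.random.default_rng(0)
income = rng.uniform(10, 30, 233)
rate = 20 + 3 * income + rng.normal(0, 5, 233)
lo, mean_rr, hi = bootstrap_rr_ci(income, rate)
```

The percentile interval (lo, hi) plays the role of the ULCI95%/LLCI95% bounds reported alongside each RR in Table 2.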
3. Results
Figure 1 shows the number of daily COVID-19 cases from February to July 2020. The peak of new cases was 1895 on 20th March; daily cases then decreased markedly to 51 cases on 6th May, after which the number fluctuated until July 2020. The statistical package R (version 3.6) was used to perform simple linear regressions between the incidence of COVID-19 as the dependent variable (Y) and the risk factors discussed above ($X_i$) as independent variables. Once the regression coefficients were obtained, the relative ecological risk for each factor was calculated, and then a model combining the factors that were significant in the individual models was fitted (see Table 2 and Table 3).
Figure 1 Representation in time of the number of cases in Catalonia (north-east of Spain) from February to July 2020.
Table 2 Estimations of $\beta_0$ and $\beta_1$ (regression coefficients in a simple linear model), RR: relative ecological risk approximation, and $R^2 =$ coefficient of determination, also the p-values are presented
| Factor | $R^2$ | $\beta_0$ | $\beta_1$ | RR |
|--------|-------|-----------|-----------|----|
| Density (inhabitants/km$^2$) | 0.1025 (p=2.182e-05) | 6.494e+01 (p<2e-16) | 1.318e-03 (p=2.18e-05) | 1.00002 (1.000021, 1.000021, 1.00002)* |
| Mobility (NOx reduction in March 2020 vs the previous 3 years) | 0.07924 (p=0.0005279) | 20.669 (p=0.223378) | 1.353 (p=0.000528) | 1.065475 (1.112431, 0.9752694, 0.8381082)* |
| Mobility (NOx reduction in April 2020 vs the previous 3 years) | 0.9987749 (p=0.8204) | 86.9407 (p=0.00283) | -0.1065 (p=0.82039) | 0.9987749 (1.007246, 1.003234, 0.9992213)* |
| Mobility (NOx reduction in May 2020 vs the previous 3 years) | 0.09093 (p=0.0002017) | 56.94779 (p<2e-16) | 0.35091 (p=0.000202) | 1.006162 (1.006911, 1.006368, 1.005825)* |
| Family income per inhabitant | 0.08054 (p=3.755e-05) | 19.3861 (p=0.13) | 3.0531 (p=3.76e-05) | 1.157491 (1.180263, 1.114656, 1.04905)* |

*Estimation of RR (ULCI95%, mean, LLCI95%) using resampling with 1000 iterations and n=200 samples with replacement.
Table 3 Estimations of $\beta_0, \beta_1, ..., \beta_4$ (regression coefficients in a multiple linear model), RR: relative ecological risk approximation, and $R^2 =$ coefficient of determination, also the p-values are presented
| Coefficient | Variable | Value | p-value |
|-------------|----------|-------|---------|
| $\beta_0$ | Intercept | -1.147e+02 | 0.08506 |
| $\beta_1$ | Density (inhabitants/km$^2$) | 7.303e-04 | 0.07547 |
| $\beta_2$ | Mobility (NOx reduction in May 2020 vs the previous 3 years) | 3.955e+00 | 0.00318 |
| $\beta_3$ | Family income per inhabitant (thousands of euros) | 8.384e+00 | 0.02052 |
| $\beta_4$ | Interaction between NOx reduction of May 2020 and family income per inhabitant | -2.059e-01 | 0.00204 |

$R^2 = 0.2082$
Assuming the causal effect between the exposure ($X_i$) and the outcome ($Y$), values of RR can be interpreted as follows:
- RR = 1 means that exposure does not affect the outcome.
- RR < 1 means that the risk of the outcome is decreased by exposure.
- RR > 1 means that the risk of the outcome is increased by exposure.
In Table 1, the estimations of $\beta_0$ and $\beta_1$ (regression coefficients in a simple linear model) used to compute the relative ecological risk approximation (RR) are shown. The estimated $\beta_1$ values are: Density ($\beta_1 = 1.318e-03$; $p = 2.18e-05$), March 2020 mobility ($\beta_1 = 1.353$; $p = 0.000528$), April 2020 mobility ($\beta_1 = -0.1065$; $p = 0.82039$), May 2020 mobility ($\beta_1 = 0.35091$; $p = 0.000202$) and family income ($\beta_1 = 3.0531$; $p = 3.76e-05$). Except for April 2020 mobility, these coefficients are significant ($p < 0.001$), so the null hypothesis $H_0: \beta_1 = 0$ is rejected in favour of the alternative hypothesis.
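The table footnote describes how the interval estimates are obtained: the statistic is recomputed over bootstrap resamples (1000 iterations, n = 200 drawn with replacement) and the 95% percentile interval is reported. A minimal sketch of that resampling procedure, shown here for the slope $\beta_1$ of the simple linear model; the synthetic data and the `stat` callable are illustrative assumptions, not the study's actual dataset or RR formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_simple_ols(x, y):
    """Closed-form OLS for y = b0 + b1 * x."""
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

def bootstrap_ci(x, y, stat, n_iter=1000, n_samp=200):
    """95% percentile interval (UL, mean, LL) of a statistic of the OLS fit,
    resampling with replacement as described in the table footnote."""
    vals = np.empty(n_iter)
    for i in range(n_iter):
        idx = rng.integers(0, len(x), size=n_samp)
        b0, b1 = fit_simple_ols(x[idx], y[idx])
        vals[i] = stat(b0, b1)
    lo, hi = np.percentile(vals, [2.5, 97.5])
    return hi, vals.mean(), lo  # (ULCI95%, mean, LLCI95%)

# Synthetic exposure/outcome data with a weak positive slope (hypothetical)
x = rng.uniform(0.0, 100.0, size=500)
y = 2.0 + 0.01 * x + rng.normal(0.0, 0.5, size=500)

ul, mean_b1, ll = bootstrap_ci(x, y, stat=lambda b0, b1: b1)
```

The same machinery yields an interval for the RR itself by swapping the `stat` callable for the RR formula applied to the fitted coefficients.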
When analysing the RR, we can state that people who live in high-density areas have 1.00002 times the risk of infection of people who live in low-density areas (RR = 1.00002). Regarding mobility, the results obtained for each month were: March 2020 (RR = 1.065475), April 2020 (RR = 0.9987749) and May 2020 (RR = 1.006162). The reduction of mobility can therefore be considered a protective factor against COVID-19 infection: in April 2020, the risk of infection for people who live in areas with reduced mobility was 1.0012251 times lower than for people who live in high-mobility areas. Finally, the RR obtained for family income per inhabitant is 1.157491, meaning that the higher the family income per inhabitant, the higher the risk of infection.
The multivariate analysis of density, mobility, family income and their interaction is presented in Table 3. Density and family income per inhabitant have p-values > 0.01, so the null hypothesis $\beta = 0$ cannot be ruled out; consequently, no relationship can be established between the outcome and density ($\beta_1 = 7.303e-04$; $p = 0.07547$) or family income ($\beta_3 = 8.384e+00$; $p = 0.02052$). Mobility ($\beta_2 = 3.955e+00$; $p = 0.00318$) has a p-value below 0.01, so a positive relationship can be established between mobility and COVID-19 cases. A significant interaction between the May 2020 NOx reduction and family income per inhabitant is observed ($\beta_4 = -2.059e-01$; $p = 0.00204$). Generally, the use of multivariate models with joint factors is not recommended in epidemiology.
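The multivariate model above regresses the outcome on density, May 2020 mobility, family income, and a mobility-income interaction column. A minimal sketch of fitting such a model by ordinary least squares; the data are synthetic, and the coefficient magnitudes are only loosely patterned on the signs in Table 3 (this is not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical covariates in plausible ranges
density = rng.uniform(10.0, 5000.0, n)   # inhabitants/km^2
mobility = rng.uniform(0.0, 80.0, n)     # % NOx reduction, May 2020
income = rng.uniform(10.0, 25.0, n)      # family income, thousands of euros

# Synthetic outcome with a negative mobility x income interaction,
# mirroring the sign pattern reported in Table 3
y = (-100.0 + 7e-4 * density + 4.0 * mobility + 8.0 * income
     - 0.2 * mobility * income + rng.normal(0.0, 5.0, n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), density, mobility, income, mobility * income])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta = [b0, b1, b2, b3, b4]
```

With an interaction term in the model, the main-effect coefficients are no longer interpretable in isolation, which illustrates why the text cautions against joint-factor multivariate models in this epidemiological setting.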
4. Discussion
In a study of 1031 suburban areas in 314 Latin American cities, a 10% decrease in weekly mobility was associated with an 8.6% lower incidence of COVID-19 in the following week (12). In a study that used population mobility data from Google services in 34 OECD countries plus Singapore and Taiwan, reductions of up to 40% in mobility were associated with a decline in COVID-19 cases in two-thirds of the countries examined, especially at the beginning of the pandemic (13). In a study of 52 countries based on WHO and CDC databases, reduced mobility was associated with a decrease in new COVID-19 cases in 73% of the countries (14). It is striking that in our study the inverse relationship between the decrease in NOx and COVID-19 cases appeared only in April ($\beta_1 = -0.1065$; $p = 0.82039$; RR = 0.9987749), while for March and May the relationship was weak. A stronger protective effect of reduced mobility might have been expected, but it is still interesting to calculate the relative risk directly with this simple epidemiological method. The pattern can be explained by public policies restricting mobility: the lockdown in Spain began on March 14th and lasted until May, after which the population was gradually allowed to carry out outdoor activities. On June 21, the Spanish government lifted the state of alarm, allowing the population to move freely from 6 am until midnight. The situation continued to change in the following months according to the prevalence and incidence in each autonomous community (11).
Regarding population density, studies show a linear relationship between population density and the number of cases, incidence and mortality of COVID-19, as in Italy (15), England (16), Malaysia (17), and Germany and Japan (18). Higher-density areas may face greater difficulties in applying adequate physical distancing and in isolating cases. In our case, no relevant association was observed, although the relative-risk computation would still point to higher density as a risk factor for the disease.
Regarding family income as a social determinant of health, other studies have shown that groups with high socioeconomic vulnerability have a higher risk of COVID-19, determined by the difficulty of maintaining physical distancing in their homes and of accessing health services (19–21). In Catalonia, a higher incidence of COVID-19 has been found in the poorest areas of the city of Barcelona (22). In our case, a weak positive relationship is observed compared to the effect of population density. Per capita family income may not be an optimal variable for a regression model, or the variable may represent a fallacy at this ecological level.
Limitations
Results of this study should be interpreted cautiously, bearing in mind the limitations attributable to its ecological design. The fundamental methodological problem of inferring individual associations from group data (the ecological fallacy) is well known (7,8). Further limitations include the absence of individual measurements of socioeconomic characteristics, pollution levels and density ranges by subarea. Because these variables were measured at an ecological level, confounding factors, effect modifiers and mediators could not be adequately controlled at the individual level. To overcome these limitations and biases, a multilevel analysis considering both individual and contextual levels is recommended; with an ecological design alone, it is impossible to distinguish the individual and contextual effects of a variable.
5. Conclusion
In conclusion, the proposed method is an ecological study based on a statistical regression model between the incidence (or a variable that represents it) and the risk factors, using aggregated data to obtain a risk ratio (RR). The present study shows the risk factors linked to COVID-19 in the population of Catalonia (Spain) during the first wave in 2020. Low population density and reduction of mobility were associated with a lower risk of COVID-19 contagion, while high family income emerged as a risk factor. The ecological study method could be an effective way to design an app that predicts how the population becomes infected with COVID-19 according to risk factors and identifies the areas most susceptible to contagion. Its development could support decision-making in public health.
Compliance with ethical standards
Acknowledgments
This work was supported by the University of Barcelona, Catalonia, Spain.
Disclosure of conflict of interest
All authors have no conflict of interest to declare.
References
[1] Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, et al. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med. 2020 Mar;382(13):1199–207.
[2] Center for Coordination of Health Alerts and Emergencies, Ministry of Health, Spain. Document about COVID-19 [Internet]. 2020. Available from: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov-China/documentos/20200224.Preguntas_respuestas_COVID-19.pdf
[3] López MG, Chiner-Oms Á, García de Viedma D, Ruiz-Rodriguez P, Bracho MA, Cancino-Muñoz I, et al. The first wave of the COVID-19 epidemic in Spain was associated with early introductions and fast spread of a dominating genetic variant. Nat Genet [Internet]. 2021;53(10):1405–14. Available from: https://doi.org/10.1038/s41588-021-00936-6
[4] Morgenstern H. Uses of ecologic analysis in epidemiologic research. Am J Public Health. 1982 Dec;72(12):1336–44.
[5] Beral V, Chilvers C, Fraser P. On the estimation of relative risk from vital statistical data. J Epidemiol Community Health [Internet]. 1979 Jun;33(2):159–62. Available from: https://pubmed.ncbi.nlm.nih.gov/490098
[6] Silcocks PB, Murphy M. Relative risk estimation from vital statistical data: validation, a pitfall and an alternative method. J Epidemiol Community Health. 1987 Mar;41(1):59–62.
[7] Schwartz S. The fallacy of the ecological fallacy: the potential misuse of a concept and the consequences. Am J Public Health [Internet]. 1994 May;84(5):819–24. Available from: https://pubmed.ncbi.nlm.nih.gov/8179055
[8] Loney T, Nagelkerke NJ. The individualistic fallacy, ecological studies and instrumental variables: a causal interpretation. Emerg Themes Epidemiol [Internet]. 2014 Nov 19;11:18. Available from: https://pubmed.ncbi.nlm.nih.gov/25745504
[9] Celentano DD, Szklo M. Gordis Epidemiology. 2019.
[10] Monleón A, Gomez C. Probability and Statistics I. 1st ed. Barcelona: University of Barcelona; 2017.
[11] La Moncloa. Estado de Alarma [Internet]. [cited 2022 Mar 22]. Available from: https://www.lamoncloa.gob.es/covid-19/Paginas/estado-de-alarma.aspx
[12] Kephart JL, Delclòs-Alió X, Rodríguez DA, Sarmiento OL, Barrientos-Gutiérrez T, Ramirez-Zea M, et al. The effect of population mobility on COVID-19 incidence in 314 Latin American cities: a longitudinal ecological study with mobile phone location data. Lancet Digit Health [Internet]. 2021 Nov;3(11):e716–22. Available from: https://pubmed.ncbi.nlm.nih.gov/34456179
[13] Oh J, Lee H-Y, Khuong QL, Markuns JF, Bullen C, Barrios OEA, et al. Mobility restrictions were associated with reductions in COVID-19 incidence early in the pandemic: evidence from a real-time evaluation in 34 countries. Sci Rep [Internet]. 2021 Jul 2;11(1):13717. Available from: https://pubmed.ncbi.nlm.nih.gov/34215764
[14] Nouvellet P, Bhatia S, Cori A, Ainslie KEC, Baguelin M, Bhatt S, et al. Reduction in mobility and COVID-19 transmission. Nat Commun [Internet]. 2021;12(1):1090. Available from: https://doi.org/10.1038/s41467-021-21358-2
[15] Pluchino A, Biondo AE, Giuffrida N, Inturri G, Latora V, Le Moli R, et al. A novel methodology for epidemic risk assessment of COVID-19 outbreak. Sci Rep [Internet]. 2021 Mar 5;11(1):5304. Available from: https://pubmed.ncbi.nlm.nih.gov/33674627
[16] Bray I, Gibson A, White J. Coronavirus disease 2019 mortality: a multivariate ecological analysis in relation to ethnicity, population density, obesity, deprivation and pollution. Public Health [Internet]. 2020 Aug;185:261–3. Available from: https://pubmed.ncbi.nlm.nih.gov/32693249
[17] Ganasegeran K, Jamil MF, Ch'ng AS, Looi I, Peariasamy KM. Influence of population density for COVID-19 spread in Malaysia: an ecological study. Int J Environ Res Public Health. 2021;18.
[18] Diao Y, Kodera S, Anzai D, Gomez-Tames J, Rashed EA, Hirata A. Influence of population density, temperature, and absolute humidity on spread and decay durations of COVID-19: a comparative study of scenarios in China, England, Germany, and Japan. One Health [Internet]. 2021;12:100203. Available from: https://www.sciencedirect.com/science/article/pii/S2352771420303049
[19] Niedzwiedz CL, O'Donnell CA, Jani BD, Demou E, Ho FK, Celis-Morales C, et al. Ethnic and socioeconomic differences in SARS-CoV-2 infection: prospective cohort study using UK Biobank. BMC Med [Internet]. 2020;18(1):160. Available from: https://doi.org/10.1186/s12916-020-01640-8
[20] Whittle RS, Diaz-Artiles A. An ecological study of socioeconomic predictors in detection of COVID-19 cases across neighborhoods in New York City. BMC Med [Internet]. 2020 Sep 4;18(1):271. Available from: https://pubmed.ncbi.nlm.nih.gov/32883276
[21] Yoshikawa Y, Kawachi I. Association of socioeconomic characteristics with disparities in COVID-19 outcomes in Japan. JAMA Netw Open [Internet]. 2021 Jul 14;4(7):e2117060. Available from: https://doi.org/10.1001/jamanetworkopen.2021.17060
[22] Baena-Díez JM, Barroso M, Cordeiro-Coelho SI, Díaz JL, Grau M. Impact of COVID-19 outbreak by income: hitting hardest the most deprived. J Public Health (Oxf) [Internet]. 2020 Nov 23;42(4):698–703. Available from: https://doi.org/10.1093/pubmed/fdaa136 |
Proposed Mitigated Negative Declaration
Project: Mattley Meadow Restoration Project
Lead Agency: Upper Mokelumne River Water Authority (UMRWA)
Availability of Documents: The Initial Study for this Proposed Mitigated Negative Declaration is available for review at:
http://www.umrwa.org/docs.html
Questions or comments regarding this Proposed Mitigated Negative Declaration (MND) and Initial Study may be addressed to:
Richard Sykes
Executive Officer
firstname.lastname@example.org
(510)390-4035
Project Location: The Mattley Meadow project is located in Calaveras County in the headwaters of Mattley Creek, tributary to the North Fork Mokelumne River on public lands managed by the U.S. Forest Service, Stanislaus National Forest, Calaveras Ranger District, and private lands owned by Stan Dell’Orto in T70N, R17E, Sections 8 and 17 MDBM (Figure 1).
Figure 1. Vicinity Map for Mattley Meadow Restoration Project (Vicinity and Project Area Map showing Mattley Meadow and Mattley Creek Meadow; prepared by Plumas Corporation; scale 1:24,000; 2/25/2015)
Project Description: USFS and project stakeholders propose to:
- Restore the natural hydrologic functions of the Mattley Meadow complex using a partial or near complete channel fill restoration technique (i.e. plug and pond method) in Mattley Meadow and Mattley Creek Meadow.
- Revegetate and stabilize all disturbed areas within the meadow, including access routes, borrow pond margins, and gully fill/plugs.
- Reroute a 0.1-mile segment of motorized trail 17EV16 that crosses Mattley Creek Meadow around the meadow.
- Reconstruct range fencing on the north property boundary and east edge of Mattley Meadow and construct temporary fencing around the restored area in Mattley Creek Meadow to restrict and manage livestock grazing.
Findings: An Initial Study was prepared to assess the proposed project’s potential effects on the environment and the significance of those impacts. Based on the Initial Study, UMRWA has determined that the proposed project would not have a significant impact on the environment because mitigation measures would be implemented to reduce impacts to less-than-significant levels. This conclusion is supported by the following findings:
1. The proposed project would have no impact on:
- Agricultural and forest resources.
- Energy.
- Land use and planning.
- Mineral resources.
- Noise.
- Paleontology.
- Population and housing.
- Public services.
- Recreation.
- Transportation and traffic.
- Utilities and service systems.
- Wildfire.
2. The proposed project would result in a less-than-significant impact on:
- Aesthetics.
- Greenhouse gas emissions.
3. Mitigation measures have been adopted to reduce potentially significant impacts to less-than-significant levels on:
- Air quality.
- Biological resources.
- Cultural resources.
- Geology and soils.
- Hazards and hazardous materials.
- Hydrology and water quality.
- Tribal cultural resources.
Mitigation Measures
The following mitigation measures will be implemented by the U.S. Forest Service and implementing partners to avoid, minimize, and mitigate environmental impacts resulting from implementation of the proposed project. Implementation of these mitigation measures would reduce the environmental impacts of the proposed project to a less-than-significant level. A Mitigation Monitoring and Reporting Program for these measures is included in the Initial Study.
Air Quality
- Construction fill and cut areas would be watered as necessary to prevent visible emissions from extending more than 100 feet beyond the active work areas unless the area is inaccessible to watering vehicles due to slope conditions or other safety factors.
- Disturbed surface areas would be watered in sufficient quantity and frequency to suppress dust and maintain a stabilized surface.
- At least 80 percent of all inactive disturbed surface areas would be watered on a daily basis when there is evidence of wind driven fugitive dust, excluding any areas which are inaccessible due to excessive slope or other safety conditions.
- All unpaved roads used for any vehicular traffic would be watered at least once every two hours of active operations.
Biological Resources – Wildlife Species
- The project activities will conform to the conservation measures and the terms and conditions in the Biological Opinion (USFWS, 04/29/2020) and the Lake and Streambed Alteration Agreement (CDFW; to be obtained prior to implementation, application in process), both of which are incorporated into this document by reference.
- Precautions to minimize turbidity/siltation shall be taken into account during project planning and implementation. This shall require the placement of silt fencing or sediment barrier cloth along the boundary of the project area so that silt and/or other deleterious materials do not pass to adjacent or downstream reaches. Passage of sediment beyond the sediment barrier(s) is prohibited; if any sediment barrier fails to retain sediment, corrective measures shall be taken. The sediment barrier(s) shall be maintained in good operating condition throughout the construction period, and the entire stretch of barrier shall be monitored daily, prior to commencement of construction activities, to ensure wildlife species have not become trapped or displaced by the barrier. All sediment contained along the barrier shall be removed and disposed of where it will not re-enter a watercourse. All non-biodegradable silt barriers (such as plastic silt fencing) shall be removed after the disturbed areas have been stabilized with erosion control vegetation. If CDFW determines that turbidity/siltation levels resulting from project-related activities constitute a threat to aquatic life, the activities associated with the turbidity/siltation shall be halted until effective CDFW-approved control devices are installed or abatement procedures are initiated.
- Prior to commencement of construction, grading, vegetation removal, equipment staging, or other project-related activities, a focused survey for sensitive species (such as, but not limited to, fish, plants, reptiles, and amphibians) listed under the California Endangered Species Act (CESA) or the federal Endangered Species Act (ESA) shall be conducted within a 200-foot radius of the project area by a Designated Biologist (i.e., a Forest Service- or USFWS- and CDFW-approved biologist educated in and familiar with all life stages of local fish, plants, and amphibians). The survey shall be conducted within three (3) days prior to the beginning of project-related activities and prior to beginning work on a daily basis.
- If any CESA or ESA listed species are encountered during the conduct of project activity, including maintenance and restoration activities, work shall be suspended, the USFWS and CDFW notified, and conservation measures shall be developed in agreement with respective regulatory authorities prior to initiating the activity. Work may not re-initiate until respective regulatory authorities (USFWS and CDFW) have been consulted and avoidance measures implemented.
**Terrestrial Wildlife**
- The Stanislaus NF District Biologist will conduct pre-construction surveys for California spotted owl and northern goshawk in August, at least two weeks prior to project construction, to determine presence and status of these species within the project area. If California spotted owl or northern goshawk nesting is detected, a limited operating period (LOP) for the detected species may be observed through September 15, when nesting activities are complete. The LOP may not be necessary depending on where the nest/reproductive activity is taking place, in relation to project activities, and will be assessed by the biologist to protect reproduction as necessary. If deemed necessary, the LOP would restrict project activities no more than 0.25 mile from the located nesting/reproductive activity center. Project construction outside the 0.25-mile buffer may continue during the specified LOP.
- If construction is scheduled during the bird breeding season (February 15th to August 31st), a Designated Biologist (i.e., a Forest Service- or USFWS- and CDFW-approved biologist) shall conduct a breeding bird survey no more than 15 days prior to the start of construction. All active bird nests will be marked following the survey to avoid destruction by equipment. If nesting raptors or migratory birds are identified within the area, a non-disturbance buffer and any other restrictions will be determined, before project activities commence, through consultation with the CDFW following completion of the survey.
**Aquatic Wildlife**
- During restoration work within Mattley Meadow, a Forest Service- or USFWS- and CDFW-approved biologist must be on site during all activities. The biologist will survey the immediate work area for listed amphibians before commencement of daily work and following work stoppages exceeding one hour.
- Maintain an 82-foot limited operating area around the western channel in Mattley Meadow occupied by the Sierra Nevada yellow-legged frog (SNYLF), where mechanical operation for conifer removal is prohibited.
- If Sierra Nevada yellow-legged frogs are detected within the work area, the following procedures will be followed. Each Sierra Nevada yellow-legged frog or Yosemite toad encounter shall be treated on a case-by-case basis, but the general procedure is as follows: (1) leave the non-injured animal alone if it is not in danger; or (2) move the animal to a nearby safe location if it is in danger. These two actions are further described below:
- When a Sierra Nevada yellow-legged frog or Yosemite toad is encountered within the project site, the first priority is to stop all activities in the surrounding area that may have the potential to result in the harassment, injury, or death of the individual. Then, the situation shall be assessed by a Forest Service- or USFWS-approved biologist in order to select a course of action that will minimize adverse effects to the individual.
- Individuals of the three listed species shall be captured and moved by hand only when it is necessary to prevent harassment, injury, or death. A Forest Service- or USFWS-approved biologist shall inspect the animal and the area to evaluate the necessity of fencing, signage, or other measures to protect the animal. If suitable habitat is located immediately adjacent to the capture location, then the preferred option is relocation to that site. An individual shall not be moved outside of the radius it would have traveled on its own.
- Only Forest Service- or USFWS-approved biologists may capture the three listed amphibians. Nets or bare hands may be used to capture the animals. Soaps, oils, creams, lotions, repellents, or solvents of any sort cannot be used on hands within two hours before and during periods when the biologist is capturing and relocating individuals. If the animal is held for any length of time in captivity, they shall be kept in a cool, dark, moist environment with proper airflow, such as a clean and disinfected bucket or plastic container with a damp sponge. Containers used for holding or transporting shall not contain any standing water, or objects (except sponges), or chemicals.
- Existing waterholes and other aquatic sites, including ponds, lakes, and streams used for water drafting, would be surveyed for state and federal aquatic TES species, and flow levels would be recorded prior to use. In the event state and/or federal TES species are found to occur at drafting sites, those sites will not be used, and future surveys would be conducted by an aquatic specialist to determine the presence of potential populations.
- The use of low velocity water pumps and screening devices for pumps (per S&G 110) will be utilized during drafting for project treatments to prevent mortality of eggs, tadpoles, juveniles, and adult SNYLF. A drafting box measuring 2 feet on all sides covered in a maximum of 0.25-inch screening is required.
- Mechanical operation would be prohibited on days where >0.5 inches of rain are predicted and within 24 hours of such rain events.
**Biological Resources — Plant Species**
- Any new occurrences of sensitive, rare, or other listed plants identified within the project area would be flagged and avoided when necessary.
- All off-road equipment would be cleaned to ensure it is free of soil, seeds, vegetative matter, or other debris that could contain seeds before entering the project area.
- Infestations of invasive plants that are discovered during project implementation would be documented and locations mapped. New sites would be reported to the Forest Service botanist.
- Onsite sand, gravel, rock, or organic matter would be used where possible.
- Any seed used for restoration or erosion control would be native species known to occur in the meadow complex purchased from a reputable local native seed supplier.
**Cultural Resources**
- Four cultural sites in the project area will be flagged with a buffer of at least ten meters prior to project implementation. All contractors will be informed of these locations, and no ground-disturbing activities will occur within the flagged areas. The flagging will be removed after project implementation.
**Geology and Soils**
- Construction would occur during the low-flow period, which coincides with the most favorable moisture conditions at the depth of borrow site excavation. The subsurface soil material excavated is used to plug the channel incision; this material requires enough moisture to allow compaction to the background condition of the adjacent native soil. (The purpose of compaction is to preclude subsidence of the plug material during saturated conditions; subsidence can lead to the initiation of erosion on the plugs.) Utilization of onsite fill material allows the best match of soil types at the least cost, and material too wet to efficiently transport and work would be avoided. The subsurface (compacted) portions of the plug are constructed using the 'layer lift' method, which entails spreading each delivered bucket load of material in a thin veneer over the general area of the plug. This repeated action, with occasional re-cutting of the working surface, allows for efficient wheel compaction without supplemental equipment.
- Topsoil, and any organic material, in the area of excavation will be removed to a depth of approximately one foot and stockpiled adjacent to the plugs. When the plugs have been constructed to the design elevation, the plug surface will be cross-ripped to a depth of 12" to restore a deep infiltration capacity. Stockpiled topsoil with associated organics and native seed bank will be spread across the plug with a low ground-pressure track loader. The final pass with equipment is to dress and roughen the topsoil surface for microclimate roughness and to fully incorporate the topsoil with the surface of the subsoil.
- Equipment travel into the project area will be restricted to existing open or closed OHV roads and recent timber harvest skid trails and landings. During construction, routes from the borrow sites to plug areas with compaction resulting from construction will be scarified perpendicular to expected surface water flow and dressed with scattered organic material.
- Staging areas and temporary haul routes used during the project will be minimized to lessen soil compaction and disturbance to the greatest extent possible. After construction, they will be sub-soiled, perpendicular to surface flow directions, to the full depth of compaction to restore soil porosity. Areas with residual meadow sod will only be lightly scarified to preserve sod integrity. The emphasis is on the least soil disruption while loosening the soil, since extensive mixing or plowing can have a negative effect on soil microorganisms. This technique has been successful in loosening the soil, restoring soil porosity, providing a high infiltration capacity, and thereby reducing cumulative watershed effects.
- The project will require re-vegetation. Access routes are expected to have residual sod, and thus not require seeding, but may receive mulching and possibly seed, depending on the condition of the sod. Revegetation will consist of the following measures:
  - All desirable plant material that would be excavated or buried in plugs, such as sod mats and willow wads, would be removed and transplanted to plugs, pond margins, and key locations in the remnant channel. Locations of transplants are prioritized according to the need for maximum soil protection in bare areas and areas of potentially high stress. Sod would be placed with heavy equipment and could be secured using live willow stakes. Willow wads also would be excavated and replanted using heavy equipment.
  - Following project completion in the fall, purchased native seed would be dispersed into plugs, around ponds, and in other heavily disturbed areas.
  - All revegetation areas would be monitored for three years following project completion. Successful revegetation would consist of 70% survival of willow cuttings and transplanted sod and willow wads; seeded areas would have at least 50% cover of native vegetation. Any areas that do not meet the survival or cover criteria would be reseeded or replanted.
- Erosion control would be accomplished using locally collected materials (wood chips, duff, pine needles, etc.). Straw would not be used.
- Meadow restoration projects include rest from grazing in disturbed areas for up to three years after construction in order to allow the newly planted vegetation to become established. The project area would be fenced to protect disturbed areas from livestock for 2-3 years. Off-site water may also be developed to lessen livestock impacts on riparian areas after grazing is re-established in the project area.
**Hazards and Hazardous Materials**
- Equipment will be re-fueled and serviced at the designated staging area, which is outside of the riparian area and meadow. No fuel will be stored on-site. In the event of an accidental spill, hazmat materials for quick on-site clean-up will be kept at the project sites during all construction activities, and in each piece of equipment.
- For fire prevention, a trash pump and/or water truck will be on-site at all times.
**Hydrology and Water Quality**
*Erosion Control Plan (BMP 2.13 Erosion Control Plans)*
- The erosion control plan will consist of the BMPs incorporated into the project design criteria as well as any additional measures required by regulating agencies as part of the project permitting process (e.g., 404/401 permits, Streambed Alteration Agreement, etc.)
- Implementation of BMPs will be documented in a BMP checklist that will be prepared prior to project implementation.
- Construction would be supervised on-site by at least one person who has worked on at least one previous partial fill (pond and plug) meadow floodplain restoration project.
*Meadow Restoration (BMP 1.19 Streamcourse and Aquatic Protection; BMP 7.1 Watershed Restoration)*
- Required permits would be obtained including the 404 permit from the U.S. Army Corps of Engineers, 401 Permit from the Central Valley Regional Water Quality Control Board, and a 1600 Lake and Streambed Alteration Agreement from the California Department of Fish and Wildlife.
- Construction activities in Mattley Meadow(s) would occur during the time of year when the flow of Mattley Creek is at its lowest. This typically occurs between August 1 and October 30. Anticipated implementation is September 1-30, 2021.
- Equipment access would be on existing and temporary routes. Temporary routes would be restored at the end of project implementation.
- Erosion of disturbed areas would be reduced utilizing one or more of the following techniques: placement of large and small woody debris; soil scarification; scattering of fine organic debris (such as wood straw or chips, pine needles, etc.); other practices as needed or required by permits.
- To promote revegetation, topsoil would be removed and stockpiled during pond excavation and then used to top dress the completed plugs. Live plant material such as sod mats and willows excavated during construction may be transplanted to plugs or other areas. Locally collected seed, plant stakes, or live plants may be used where needed.
- Grazing would be excluded from restoration areas using temporary fencing until the site has sufficiently revegetated and stabilized, generally a minimum of 2 – 3 years.
**Equipment Refueling and Servicing (BMP 2.11 Equipment Refueling and Servicing; BMP 7.4 Forest and Hazardous Substance Spill Prevention Control and Countermeasure Plan; BMP 1.19 Streamcourse and Aquatic Protection)**
- Allow equipment refueling and servicing only at approved locations, which are well away from waterbodies. Servicing and refueling activities would be located a minimum of 100 feet away from the meadow edge. Site specific locations for equipment fueling would be identified prior to or during project implementation. A non-porous mat or equivalent would be used for the refueling at the staging area.
- Report spills and initiate appropriate clean-up action in accordance with applicable State and Federal laws, rules and regulations. A Spill Prevention Control and Countermeasure (SPCC) plan would be implemented when a total oil product at a site exceeds 1,320 gallons or any single container exceeds 660 gallons. The Forest has a SPCC spill plan designed to guide the emergency response to spills during construction.
- Clean equipment used for instream work prior to entering the water body: Remove external oil, grease, dirt and mud from the equipment and repair leaks prior to arriving at the project site. Inspect all equipment before unloading at site. Inspect equipment daily for leaks or accumulations of grease, and correct identified problems before entering streams or areas that drain directly to waterbodies. Remove all dirt and plant parts to ensure that noxious weeds and aquatic invasive species are not brought to the site.
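The SPCC thresholds cited above lend themselves to a simple applicability check. The sketch below is illustrative only (the function name and structure are ours, not part of the Forest's plan); the governing regulation is 40 CFR part 112, which should be consulted for actual applicability details.

```python
def spcc_plan_required(container_gallons):
    """Check whether the SPCC thresholds cited in the text are exceeded.

    An SPCC plan is triggered when the total oil product at a site
    exceeds 1,320 gallons, or when any single container exceeds
    660 gallons. `container_gallons` is a list of per-container volumes.
    """
    total = sum(container_gallons)
    return total > 1320 or any(c > 660 for c in container_gallons)
```

For example, three 500-gallon tanks (1,500 gallons total) would trigger the plan even though no single container exceeds 660 gallons.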
**Water Sources (BMP 2.5 Water Source Development and Utilization)**
- Use of water sources would be in accordance with the conditions (e.g., minimum instream flows, etc.) specified in BMP 2.5 (Water Source Development and Utilization). Water may be needed to assist in construction of structures. Approved drafting sites designated by the District hydrologist would be utilized.
**Monitoring (BMP 7.6 Water Quality Monitoring)**
- Visual and photo point monitoring of the meadow restoration area would be conducted for several years after implementation to ensure restoration actions are functioning as intended and meeting project objectives. BMP effectiveness monitoring using the national protocol may also be conducted. Corrective actions consisting of any of the tools and techniques as described for the proposed action may be implemented where needed.
- Implement all monitoring and reporting required by terms of the 401, 404, and 1600 permits.
**Tribal Cultural Resources**
- All cultural sites in the vicinity of the project area will be flagged with a buffer of at least ten meters prior to project implementation. All contractors will be informed of site locations, and no ground disturbing activities will occur within the flagged areas. The flagging will be removed post project implementation.
- The following mitigation measure is intended to address inadvertent discoveries made by construction personnel, agencies, or consultants at the work site when no archaeological or tribal monitor is present during ground disturbing activities.
- If potential tribal cultural resources (TCRs) or archaeological resources are discovered during ground disturbing construction activities, all work shall cease within 100 feet (or an appropriate distance based on the apparent distribution of the TCR) of the find. A qualified cultural resources specialist meeting the Secretary of Interior’s Standards and Qualifications for Archaeology, as well as Native American Representatives from traditionally and culturally affiliated Native American Tribes will assess the significance of the find. To avoid or minimize adverse impacts when tribal cultural resources, archaeological resources, or other cultural resources are discovered, Native American Representatives may make recommendations for further evaluation and treatment as necessary. Culturally appropriate treatment may include, but is not limited to, processing materials for reburial, minimizing handling of cultural objects, leaving objects in place within the landscape, or returning objects to a location within the Project area where they will not be subject to future impacts. Recommendations of the treatment of a TCR will be documented in the project record. For any recommendations made by traditionally and culturally affiliated Native American Tribes that are not implemented, a justification for why the recommendation was not followed will be provided in the project record.
- If articulated or disarticulated human remains are discovered during ground disturbing construction activities, all work shall cease within 100 feet of the find, and ground disturbing activities shall not resume until the requirements of Health and Safety Code section 7050.5 and, if applicable, Public Resources Code section 5097.98 are met.
**Monitoring & Reporting**
Monitoring is a means to determine if conditions in Mattley Meadow are meeting or moving toward the desired conditions. Extensive surveys have been conducted to document the existing conditions within the meadow and stream channel(s). Additional monitoring would take place immediately after the project is implemented and annually for two years to document the effectiveness of the project. This monitoring would be conducted by Calaveras Ranger District staff and project partners, and includes: ground water, surface water, sediment transport, planted vegetation success or mortality, wetland condition (CRAM), noxious weed presence, the integrity of the restoration, and the presence of new headcuts (see Table 1 for details).
During construction, Plumas Corporation and SNF staff would be on-site continuously and responsible for ensuring that Best Management Practices are followed, mitigation measures are implemented, and water quality leaving the project area is sampled (in the event of surface water during construction). Once the project is completed, a report on construction is sent to the funding agency, as well as to the permitting agencies (Regional Water Quality Control Board and US Army Corps of Engineers). The report will certify compliance with mitigation measures.
**Project Monitoring**
The Mattley Meadow Restoration Project is expected to benefit multiple resources by restoring the hydrological and ecological functions of the meadow floodplain system. The purpose of project monitoring is to measure project effectiveness on water quality, timing of flows, and enhancement of wildlife and aquatic habitats. Monitoring parameters and methods that would be utilized are outlined in **Table 1**.
**Table 1. Project Effectiveness Monitoring of the Proposed Action**
| Monitoring Parameter | Method | Responsible Party |
|----------------------|------------------------------------------------------------------------|---------------------------------------------------------|
| Water Temperature | Water temperature data loggers installed above and below project area May-Sept* | Plumas Corporation** |
| Aquatic Habitat | California Rapid Assessment Method (CRAM) | Plumas Corporation |
| Groundwater | 6 groundwater wells (approximately 6 to 12 ft in depth) made of 3/4” galvanized perforated pipe, measured monthly* | Plumas Corporation**; USFS as time allows |
| Stream Flow | Staff gage and pressure transducer installed at the bottom of project area; monthly* manual calibration flow measurements; quarterly* collection of oxygen isotope samples and measurement of electrical conductivity (EC) from inflows, springs, and wells | Plumas Corporation** |
| Sediment Supply | Channel cross-section surveys; CRAM | Plumas Corporation |
| Meadow Vegetation | All revegetation areas would be monitored for three years following project completion. Monitoring will quantify willow survival and percent cover of native meadow vegetation. | USFS |
| Sierra Nevada yellow-legged frog Population | Existing SNYLF population in the untreated “West” channel would be monitored annually, as well as the remnant channel and borrow ponds in the restored area of Mattley Meadow for potential SNYLF dispersal. | USFS |
*As access permits
**Plumas Corporation has secured funding for monitoring through 2020. Additionally, Plumas Corporation is working with the ACCG so that this group can continue monitoring outside of the existing funding window.
MANDATORY FINDINGS OF SIGNIFICANCE
- No substantial evidence exists that the proposed project would have a significant adverse effect on the environment.
- The project would not substantially degrade the quality of the environment, significantly reduce the habitat for fish and wildlife species, result in fish or wildlife populations below a self-sustaining level, reduce the number or restrict the range of a special-status species, or eliminate important examples of California history or prehistory.
- The project would not have environmental effects that would cause substantial direct or indirect adverse effects on humans.
- The project would not have environmental effects that are individually limited but cumulatively considerable.
As the UMRWA decision-making body for this project, I have reviewed and considered the information contained in the Final Mitigated Negative Declaration, which includes the Initial Study, Proposed Mitigated Negative Declaration, and comments received during the public review process, prior to approval of the project.
In accordance with Section 21082.1 of the California Environmental Quality Act (CEQA), I find that UMRWA has independently reviewed and analyzed the Initial Study and Proposed Mitigated Negative Declaration for the proposed project and that the Initial Study and Proposed Mitigated Negative Declaration reflect UMRWA’s independent judgment and analysis. I find that although the proposed project could have a significant effect on the environment, there would not be a significant effect in this case because revisions to the project have been made by the project proponents, USFS and implementing partners, as described in the Proposed Mitigated Negative Declaration.
Therefore, on the basis of the whole record before UMRWA, I find that there is no substantial evidence that the project will have a significant effect on the environment. I therefore adopt this Mitigated Negative Declaration pursuant to CEQA Guidelines Section 15074.
_________________________________________ _________________________
Richard Sykes Date
Executive Officer
Upper Mokelumne River Watershed Authority
AN APPROACH FOR VISUALIZATION OF THE INTERACTION BETWEEN COLLAGEN AND ELASTIN IN LOADED HUMAN AORTIC TISSUES
A. Pukaluk\textsuperscript{1}, H. Wolinski\textsuperscript{2,3}, C. Viertler\textsuperscript{4}, P. Regitnig\textsuperscript{4}, G.A. Holzapfel\textsuperscript{1,5}, G. Sommer\textsuperscript{1}
\textsuperscript{1}Institute of Biomechanics, Graz University of Technology, Austria
\textsuperscript{2}Institute of Molecular Biosciences, University of Graz, Austria
\textsuperscript{3}Field of Excellence BioHealth – University of Graz, Austria
\textsuperscript{4}Institute of Pathology, Medical University of Graz, Austria
\textsuperscript{5}Department of Structural Engineering, NTNU, Norway
firstname.lastname@example.org
Abstract. Knowledge of the interaction between the constituents of loaded aortic tissues is crucial to expand our understanding of load-bearing mechanisms in the aorta. We have therefore developed a procedure that enables simultaneous multi-photon microscopy imaging of collagen and elastin in human aortic tissue during the biaxial extension test. The microscopy images obtained were verified against the results of histological staining. The mechanical response was also compared with findings from previously performed biaxial extension tests. The proposed pipeline has proven successful and has great potential for the structural analysis of human aortic tissue.
Keywords: Human aorta, collagen, elastin, biaxial extension test, multi-photon microscopy
Introduction
The healthy aortic wall consists of three layers, namely the intima, media and adventitia [1]. Each of the layers is characterized by its own structure and function. From a mechanical point of view, the main roles are played by the media, which governs the aortic response to loading, and the adventitia, which prevents the aorta from overstretching and possible rupture. Both media and adventitia owe their passive mechanical properties mainly to two proteins, namely collagen and elastin. Although the arrangement of these proteins in the aortic layers in the unloaded state has already been described [1,2], little is known about the changes caused by loading. Therefore, this study proposes a method for the efficient visualization of collagen and elastin in loaded aortic tissue.
Methods
The developed procedure was applied to one medial and one adventitial specimen from a non-atherosclerotic and non-aneurysmatic human abdominal aorta (52 yrs old, female). The aorta was received within 24 h of death and frozen at -20°C.
Sample preparation. The aortic tube was thawed at 4°C prior to preparation for testing and imaging. During preparation, all steps were carried out at room temperature and the samples were kept moist with phosphate buffered saline (PBS) at pH 7.4. Loose connective tissue was removed, and the intact aortic tube was cut open in the longitudinal direction. The intimal and adventitial layers were then carefully dissected from the media [3], and square samples measuring 20×20 mm were cut in order to obtain medial and adventitial patches. In addition, adjacent rectangular patches of dimensions of about 4×10 mm were cut for histological examinations. Particular care was taken to ensure that the edges of squares and rectangles match the anatomical longitudinal and circumferential directions of the aorta. The mean thickness of each sample was measured optically [3]. Each square sample was then pierced by four sets of hooks connected by sutures. A set of five hooks was used on each side [4].
Histology. Aortic specimens were embedded in paraffin and cut at 4 µm with the microtome Microm HM 335 (Microm, Walldorf/Baden, Germany). Next, the sections were stained with Picrosirius Red (PSR) to highlight fibrillar collagen and Elastica van Gieson (EvG) to highlight elastin fibers [5] to verify multi-photon microscopy images.
Multi-photon microscopy. The imaging took place at the IMB-Graz Optical Imaging Resource with a tunable picosecond laser (picoEmerald; APE, Berlin, Germany), which was integrated into a Leica SP5 confocal microscope (Leica Microsystems, Mannheim, Germany). The laser was tuned to 880 nm to induce both the second harmonic generation (SHG) signal from collagen and the two-photon excited (TPE) autofluorescence signal from elastin. A two-channel, non-descanned detector (NDD) in epi-mode was used to detect SHG and TPE signals simultaneously (SP 680 nm barrier filter, i.e., excitation light filter; BP 460/50 nm for SHG signal; BP 525/50 nm for TPE signal; beamsplitter RSP 495 for two-channel separation of SHG and TPE signals). Z-stacks were acquired with the HCX IRAPO L 25x NA 0.95 water immersion objective with a large working distance of 1.5 mm for imaging the deep tissue and a sampling interval of 0.6×0.6×5.0 µm.
As a compromise between image quality and acquisition time, four-fold line averaging was used to reduce image noise. A coverglass and water as the immersion medium could not be used with samples mounted on the biaxial test device, since the coverglass could not be fixed horizontally and the sample quickly soaked up water. Alternatively, an aqueous eye gel Lac® Ophtal® Gel (Dr. Winzer Pharma, Berlin, Germany) was used [6], and the lens was dipped directly into the gel.
**Biaxial extension test.** In order to carry out the planar biaxial extension test and the multi-photon imaging simultaneously, a biaxial testing device was constructed, which could be placed on the microscope stage, based on the design described in [7], but limited by the geometrical and environmental requirements of the microscope. The device integrates four high precision linear positioners SLC-2640 (SmarAct, Oldenburg, Germany) with the maximum travel range of 35 mm and 1 nm resolution while the maximum velocity is limited to 20 mm/s and the maximum blocking force to 3.5 N. Each positioner carries a bracket with an assembled load cell KM10z 25N (ME-Meßsysteme, Hennigsdorf, Germany) characterized by a maximum permissible force of 25 N and 1% accuracy class. The design of the device allows displacement and force measurements in two perpendicular directions (Fig. 1) with one set of sensors on each side.
The stretch-driven testing protocol was implemented in LabVIEW (National Instruments, Austin, USA). All samples were loaded equibiaxially and quasi-statically at a speed of 3 mm/min. First, a sample was subjected to a pre-load of 10 mN, which defined the reference configuration at a stretch of 1. The pre-load was followed by cycles of preconditioning to obtain a reproducible response. A z-stack series of images was then taken in the center of the sample. After imaging was completed, the sample was stretched to 1.02 and imaged again. The experiment was repeated in 0.02 stretch increments until a stretch of 1.40 was reached.
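The imaging schedule described above can be written out as a short script. This is a sketch of the protocol as stated in the text, not the authors' actual LabVIEW code, and the function name is ours:

```python
def stretch_protocol(start=1.00, stop=1.40, step=0.02):
    """Target stretches at which z-stacks are acquired: the reference
    configuration (stretch 1.00, after the 10 mN pre-load) and every
    0.02 equibiaxial increment up to the final stretch of 1.40."""
    n = round((stop - start) / step)
    return [round(start + i * step, 2) for i in range(n + 1)]
```

This yields 21 imaging points for the full protocol; for the adventitia, which had to be stopped at a stretch of 1.28, the run would end at the fifteenth point.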

**Results**
**Histology.** For the media, the histological staining showed crimpy collagen fibers embedded in a network-like arrangement of elastin fibers (Fig. 2). In contrast, the adventitia showed smooth, wavy collagen bundles accompanied by separate, either curly or straight, elastin fibers.

**Multi-photon microscopy.** The emission signal transmitted through the BP 460/50 nm and BP 525/50 nm filters was color-coded in green and red, respectively (Fig. 3). The red-colored channel, which was expected to reflect elastin, captured the network-like fibrillary structure in the media and individual fibers in the adventitia.

Both images resembled the elastin shown by the EvG staining (Fig. 2). For the media, however, the green-colored channel contained not only the curly fibers corresponding to the histological analysis, but also the network-like structure of elastin. The spectral crosstalk of the SHG signal from medial collagen and the TPE signal from elastin was observed as a yellow color in the merged images (Fig. 3). For the adventitia, the green-colored channel contained fiber bundles that were comparable to adventitial collagen as stained by EvG and PSR.
**Biaxial extension test.** The experiment was successfully carried out up to a stretch of 1.40 for the media, but had to be stopped at a stretch of 1.28 for the adventitia (Fig. 4) due to overload of the linear positioners. In addition, tissue relaxation was observed as a decrease in Cauchy stress during imaging. Nevertheless, the characteristic mechanical response of both layers was recorded. The adventitia showed a stiffer response and more pronounced anisotropy than the media. In addition, the adventitia responded more stiffly in the longitudinal direction, whereas the media responded more stiffly in the circumferential direction.
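For a thin square specimen in equibiaxial extension, the Cauchy stress reported in this kind of test is typically computed from the measured force, the undeformed cross-section, and the stretch, assuming incompressibility. A minimal sketch under these assumptions (not the authors' actual processing code; the function name is ours):

```python
def cauchy_stress(force, width0, thickness0, stretch):
    """Cauchy stress (Pa) in one loading direction of an equibiaxial test.

    Assuming incompressibility with lambda_1 = lambda_2 = stretch, the
    through-thickness stretch is 1/stretch**2, so the deformed
    cross-section perpendicular to the loading direction is
    (stretch * width0) * (thickness0 / stretch**2)
        = width0 * thickness0 / stretch,
    which gives sigma = force * stretch / (width0 * thickness0).
    All inputs in SI units (N, m).
    """
    return force * stretch / (width0 * thickness0)
```

The stretch factor in the numerator is why Cauchy (true) stress grows faster than the engineering stress at large stretches.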

**Discussion**
**Multi-photon microscopy.** In the course of this study, the importance of correctly setting up and validating multi-photon microscopy was demonstrated. Although filters with an equivalent transmission range have been used in other collagen examinations [8-11], this setting turned out to be unsuitable for imaging the untreated human aortic media without further processing, e.g., image subtraction. An SHG signal from collagen can be induced by laser excitation wavelengths in the range from 730 to 940 nm [12,13]. The excitation wavelength commonly used is around 800 nm [10,14,15], as described by Zoumi et al. [12], or around 880 nm [8,9,11,16], which corresponds to near 900 nm, as reported by Chen et al. [17]. For this study, the excitation wavelength of 880 nm was chosen based on our previous studies on the human abdominal aorta [8,9], as the signal was observed to be optimal for this tissue in terms of emission intensity. Differences in the optimal values of the excitation and emission wavelengths can be caused by the sensitivity of the collagen SHG signal to the biochemical properties of the solution in which the fibrils are located. Although it proved to be insensitive to the pH value within the physiological range, it changes dramatically with the ionic strength of the solution [18]. The spectral crosstalk of medial collagen and elastin identified during this study was also observed for the human thoracic aortic media imaged by Koch et al. [19], who excited the tissue with a laser wavelength of 830 nm and used 400±50 nm and 525±25 nm bandpass filters to capture collagen and elastin, respectively. Interestingly, no prominent spectral crosstalk was reported by Phillippi et al. [20] when examining the human aortic media with the same settings. This discrepancy can be caused by the use of different gains for the distinct channels. In addition, van Zandvoort et al.
[21] observed a spectral crosstalk of elastin TPE in the carotid arteries of mice using 410-490 nm bandpass emission filter. The presence of a TPE signal for elastin in this shorter wavelength range (410-490 nm) can be caused by a relatively higher intensity of this TPE signal compared to the SHG signal of collagen [22].
**Biaxial extension test.** During the test it was not possible to provide a physiologically similar environment (immersion with PBS at 37°C), resulting in noticeable drying of the tissue borders, which can affect the mechanical response. In addition, the Cauchy stress-stretch curves are affected by relaxation phenomena during imaging. Despite these limitations, our results are comparable with other studies. Similar to our study, Niestrawska et al. [8] reported higher mean values of the Cauchy stresses in the circumferential than in the longitudinal direction for the medial layer of the human abdominal aorta. For the adventitia, too, a stiffer response in the longitudinal direction compared to the circumferential direction was reported in previous studies [8,23].
**Further implications.** The presented novel combination of multi-photon microscopy and biaxial extension tests provides an insight into the microstructure of the human aortic layers, which are exposed to increased equibiaxial stretch. The visualized collagen and elastin can be further analyzed and quantified in order to obtain important structural parameters such as orientation, dispersion, thickness and waviness of the fibers. Available material models are not yet able to take into account all of the structural parameters mentioned above. Therefore, combined microstructural and biomechanical data, as provided in the study, are essential to develop and calibrate novel material models to better reproduce and predict the mechanical behavior of aortic tissues in health and disease.
**Acknowledgements**
We would like to thank A. Donnerer from the Institute of Pathology, Medical University Graz, for his valuable support during tissue harvesting. Special thanks go to M. Triehlhaider for his work on the biaxial testing device. We would also like to thank the Austrian Science Funds (FWF) for financial support with the grant no. P30260.
References
[1] Holzapfel, G.A. and Ogden, R.W.: Biomechanical relevance of the microstructure in artery walls with a focus on passive and active components, *Am. J. Physiol. Heart Circ. Physiol.*, vol. 315, pp. H540-H549, May 2018
[2] Sherifova, S. and Holzapfel, G.A.: Biochemomechanics of the thoracic aorta in health and disease, *Prog. Biomed. Eng.*, vol. 2, pp. 032002, Jul. 2020
[3] Sommer, G., Gasser, T.C. et al.: Dissection properties of the human aortic media: an experimental study, *J. Biomech. Eng.*, vol. 130, pp. 021007, Apr. 2008
[4] Eilaghi, A., Flanagan, J.G. et al.: Strain uniformity in biaxial specimens is highly sensitive to attachment details, *J. Biomech. Eng.*, vol. 131, pp. 0910031-0910037, Sep. 2009
[5] Weisbecker, H., Viertler, C. et al.: The role of elastin and collagen in the softening behavior of the human thoracic aortic media, *J. Biomech.*, vol. 46, pp. 1859-1865, Apr. 2013
[6] Bancelin, S., Lynch, B. et al.: Ex vivo multiscale quantitation of skin biomechanics in wild-type and genetically-modified mice using multiphoton microscopy, *Sci. Rep.*, vol. 5, pp. 17635, Dec. 2015
[7] Sommer, G., Haspinger, D.C. et al.: Quantification of shear deformations and corresponding stresses in the biaxially tested human myocardium, *Ann. Biomed. Eng.*, vol. 43, pp. 2234-2348, Oct. 2015
[8] Niestrawska, J.A., Viertler, C. et al.: Microstructure and mechanics of healthy and aneurysmatic abdominal aortas: experimental analysis and modeling, *J. R. Soc. Interface*, vol. 13, pp. 20160620, Nov. 2016
[9] Schriefl, A.J., Wolinski, H. et al.: An automated approach for three-dimensional quantification of fibrillar structures in optically cleared soft biological tissues, *J. R. Soc. Interface*, vol. 10, pp. 20120760, Dec. 2012
[10] Chow, M., Turcotte, R. et al.: Arterial extracellular matrix: a mechanobiological study of the contributions and interactions of elastin and collagen, *Biophys. J.*, vol. 106, pp. 2684-2692, Jun. 2014
[11] Krasny, W., Morin, C. et al.: A comprehensive study of layer-specific morphological changes in the microstructure of carotid arteries under uniaxial load, *Acta Biomater.*, vol. 57, pp. 342-351, May 2017
[12] Zoumi, A., Yeh, A., and Tromberg, B.J.: Imaging cells and extracellular matrix in vivo by using second-harmonic generation and two-photon excited fluorescence, *Proc. Natl. Acad. Sci. USA*, vol. 99, pp. 11014-11019, Aug. 2002
[13] Green, N.H., Delaine-Smith, R.M. et al.: A new mode of contrast in biological second harmonic generation microscopy, *Sci. Rep.*, vol. 7, pp. 13331, Oct. 2017
[14] Sugita, S. and Matsumoto, T.: Multiphoton microscopy observations of 3D elastin and collagen fiber microstructure changes during pressurization in aortic media, *Biomech. Model. Mechanobiol.*, vol. 16, pp. 763–773, Nov. 2017
[15] Timmins, L.H., Wu, Q. et al.: Structural inhomogeneity and fiber orientation in the inner arterial media, *Am. J. Physiol. Heart Circ. Physiol.*, vol. 298, pp. 1537-1545, Feb. 2010
[16] Di Giuseppe, M., Alotta, G. et al.: Identification of circumferential regional heterogeneity of ascending thoracic aneurysmal aorta by biaxial mechanical testing, *J. Mol. Cell Cardiol.*, vol. 130, pp. 205-215, Apr. 2019
[17] Chen, X., Nadiarynkh, O. et al.: Second harmonic generation microscopy for quantitative analysis of collagen fibrillar structure, *Nat. Protoc.*, vol. 7, pp. 654-669, Mar. 2012
[18] Williams, R.M., Zipfel, W.R. and Webb, W.W.: Interpreting second-harmonic generation images of collagen I fibrils, *Biophys. J.*, vol. 88, pp. 1377-1386, Feb. 2005
[19] Koch, R.G., Tsamis, A. et al.: A custom image-based analysis tool for quantifying elastin and collagen micro-architecture in the wall of the human aorta from multi-photon microscopy, *J. Biomech.*, vol. 47, pp. 935-943, Mar. 2014
[20] Phillippi, J.A., Green, B.R. et al.: Mechanism of aortic medial matrix remodeling is distinct in patients with bicuspid aortic valve, *J. Thorac. Cardiovasc. Surg.*, vol. 147, pp. 1056-1064, Mar. 2014
[21] van Zandvoort, M., Engels, W. et al.: Two-photon microscopy for imaging of the (atherosclerotic) vascular wall: a proof of concept study, *J. Vasc. Res.*, vol. 41, pp. 54-63, Jan. 2004
[22] Zoumi, A., Lu, X. et al.: Imaging coronary artery microstructure using second-harmonic and two-photon fluorescence microscopy, *Biophys. J.*, vol. 87, pp. 2778-2786, Oct. 2004
[23] Li, H., Mattson, J.M. and Zhang, Y.: Integrating structural heterogeneity, fiber orientation, and recruitment in multiscale ECM mechanics, *J. Mech. Behav. Biomed. Mater.*, vol. 92, pp. 1-10, Apr. 2019 |
LET'S BE FRANK
PLANS BOOK BY TEAM 401
Wienerschnitzel
NSAC NATIONAL STUDENT ADVERTISING COMPETITION
LET’S BE FRANK, HOT DOGS ARE IN TROUBLE
For generations, hot dogs have been a social staple in America. Through baseball games, backyard barbecues and campouts, hot dogs have helped shape American culture. They’ve served as a symbol of social unity. However, national hot dog sales have been steadily declining since 2014, and this trend is projected to continue.
Hot dogs’ beloved cousin, the brat, has consistently flourished in market sales and has never experienced a purchasing low like its family member. Meat consumption has increased nationally, with chicken leading category sales. This has pushed the once mighty hot dog to the back of consumers’ minds. Hot dogs need help, so who’s their ultimate rival? It’s easy to point fingers at brats and burgers, but a closer look suggests a different villain – negative perceptions. Americans have set aside their old friend based on the ideas that they’re absurdly unhealthy, filled with mystery meat and too basic.
Perceptions lead to stigmas, and stigmas guide social acceptability. It’s time to elevate the perception of hot dogs.
No one buys into the hot dog stigma more than trend-following MILLENNIAL WOMEN. Hot dogs don’t fit into the social mold of picture-worthy foods they proudly share on Instagram. In their world of wilted kale salads and tiny avocado toasts, they don’t have social permission to embrace eating hot dogs.
Women have the food purchasing power in their households. They control 72 percent of household spending, yet they make up only 52 percent of all hot dog sales. Millennial women have the potential to buy and consume more hot dogs than any other target market. With Wienerschnitzel’s voice leading the way, it’s time to tell these women to be authentic to themselves and what they love.
RESEARCH
THE CHALLENGE
Elevate the national perception of the hot dog.
CAMPAIGN OBJECTIVES
This 12-month campaign will:
+ Increase primary demand of hot dogs by 4 percent.
+ Increase overall hot dog consumption by 4 percent, which is the equivalent of roughly 800 million hot dogs.
+ Increase positive social sentiment from 28 percent → 40 percent.
+ Decrease negative social sentiment from 20 percent → 15 percent.
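The first two objectives imply a baseline market size: if a 4 percent increase corresponds to roughly 800 million hot dogs, annual national consumption is about 20 billion. A quick sanity check using only the figures stated in the objectives above:

```python
increase_pct = 0.04                 # "increase ... by 4 percent"
increase_units = 800_000_000        # "roughly 800 million hot dogs"

# Back out the baseline the objectives assume.
implied_baseline = increase_units / increase_pct
print(f"Implied annual hot dog consumption: {implied_baseline / 1e9:.0f} billion")
```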
THE CURRENT SITUATION
Since 2014, hot dog sales have been slowly declining. However, the rest of the meat industry has experienced upward growth, with chicken leading sales. Poultry accounted for 44 percent of total U.S. meat consumption in 2018, followed by beef with 23 percent. Though it may seem like other animal-based proteins are hot dogs’ biggest competitors, their fiercest rival is the negative perception consumers have of hot dogs. Research revealed people believe hot dogs are absurdly unhealthy, filled with mystery meat and too basic to be made into memorable meals. Because of this negative perception, hot dogs have been shoved to the back of people’s minds. Even with hot dogs’ declining sales, Wienerschnitzel has seen a 4-5 percent increase in yearly store revenue. It’s been beating the odds, making it the ideal voice to help change the perception of hot dogs.
METHODS
130 SECONDARY SOURCES
24 ONE-ON-ONE INTERVIEWS
91 GROCERY STORE INTERCEPTS
1,009 SURVEY RESPONDENTS
15 FOCUS GROUP PARTICIPANTS
AGENCY 401 HAD SOME QUESTIONS TO ASK
+ Who’s impacted the most by hot dogs’ perception?
+ What builds trust in brands?
+ What leads to sharing experiences?
+ Where are hot dogs commonly enjoyed?
+ Where are new, trendy products found?
IN AN ARRAY OF FOOD CHOICES, I THINK HOT DOGS WOULD COME LAST.
BRITTANY, FOCUS GROUP PARTICIPANT
PROJECTED HOT DOG CONSUMPTION
2018: 243.99 MILLION CONSUMERS
2020: 237.41 MILLION CONSUMERS
SOURCE: WIENERSCHNITZEL CASE STUDY
The challenge of elevating the perception of hot dogs triggered a deep investigation into the inner workings of the trend process.
Three groups contribute to this process: trendsetters, trend followers and mainstreamers. Trendsetters create the trends. Their creative ideas are the perfect way to trigger social desirability among trend followers. The trend followers then adopt these trends into their own lives. From there, the trend reaches a mainstream audience and hits its peak. **Trendsetters create trends, but their desire to continuously move on to “the next new thing” leads to a lack of loyalty needed to make trends flow into the mainstream.**
**ENTER TREND FOLLOWERS.**
Open-minded, curious and loyal, they’re the perfect group to adopt trends into their lives. Trend followers are constantly on the lookout for new things to help curate their social image. Only after trend followers adopt a trend does it reach the mainstream.
**Who are trend followers? They’re the go-getting, coffee-drinking millennial women we see around us every day.**
Trend followers know the influence of a trend and understand the power of perceptions. The negative perception of hot dogs has led to a hot dog stigma, and trend followers know it, so they hide the fact that they enjoy eating hot dogs. **They eat hot dogs – 52 percent have hot dogs in their homes** – but conversations proved their thoughts and their actions were at a disconnect. It’s clear they have the potential to change the perception of hot dogs if they’re given the go-ahead. **Trend followers have the power to bring positive sentiments toward hot dogs front-of-mind for almost all women 25-34.**
Even though the meat industry is seeing a steady rise in consumption, hot dogs are projected to experience the opposite. Women like hot dogs, but because of the negative perception, they aren’t proud to say they eat hot dogs. This leads to a purchasing disparity in a market that has great potential.
Personal authenticity is important to 98.6 percent of women surveyed. These women want to be true to themselves and what they love. One focus group participant said, “It’s important to be your own person. You need to be true to yourself.”
Forty-one percent of women in the target claim they share positive experiences with others. They share the things they’re proud of through social media and in person.
The target audience eats hot dogs at social gatherings like cookouts and sporting events. Additionally, focus group participants said they don’t think hot dogs can be made into a complete meal at home.
Trend followers long for experiences made up of moments that fuel their creativity. They love changing things up and paving a new path, but they need to see a trendsetter doing it first to give them permission.
**KEY INSIGHT**
TREND FOLLOWERS’ AUTHENTICITY DRIVES THEIR PRIDE.
STRATEGY
TARGET
Trend followers are women who enjoy eating hot dogs, but they don’t currently advocate for them. They play a key role in elevating a fad to a trend. They have a sense of pride about the things they discover and share those discoveries among friends. Trend followers are vocal discoverers whose authenticity drives their pride.
TONE
+ **Playful**, defined by discovery on one’s own terms
+ **You-centric**, revolving around the target and its experiences
+ **Honest** in showing – not telling – trend followers the perceptions of hot dogs
These appeals to discovery, individuality and authenticity will create a drive for trend followers to share these ideas and spread the message.
TAKEAWAY
Authentically eating hot dogs, simply because one enjoys them, is something to be proud of.
SUPPORT
Wienerschnitzel is showing that being true to oneself is more valuable than being true to society’s standards, and hot dogs taste good too.
POSITIONING STATEMENT
For trend followers, hot dogs create a sense of pride. Millennial women are going against societal expectations and eating hot dogs simply because they like them.
There are a few things to know about trend followers:
+ Trend followers **crave discovery**. They want to come across things organically rather than having new ideas and messages shouted at them – they want to be a part of the conversation.
+ These women also **can’t be fooled**. They know when they’re being told something and when they’re being sold something.
+ Trend followers enjoy hot dogs, but they **feel ostracized** by the current perceptions.
With these facts in mind, Wienerschnitzel encourages trend followers to be loud about what they’re proud of: **HOT DOGS**.
This campaign is a voice of permission that moves the target to rediscover a food they enjoy and **FIND IT**. This campaign inspires them to be authentic and **OWN IT**.
Finally, it allows them to be a part of the campaign and **SHARE IT**. This campaign calls trend followers to embrace themselves and the hot dogs they love.
Simply put...
THERE’S SOMEBODY YOU SHOULD MEET
Say hello to Vera Frank.
She is the social media influencer hot dogs deserve. Vera’s social account oozes authenticity, but she is a fictional character, an extension of the creative efforts and prevalent in all activations. So, if authenticity is essential, why create a fictional character? Every trend follower needs a trendsetter to follow, and no existing social influencer showcases an unabashed love for hot dogs. Team 401 created an embodiment of that passion and of being honest with oneself.
Vera Frank is the voice of permission in all creative executions who tells trend followers, “Don’t hide your pride.”
SERVING THE MESSAGE
Starting a trend requires three stages: discovering, embracing and sharing. To change the perception of hot dogs, all three steps need to be addressed. By creating a layered-messaging system that allows trend followers to seamlessly jump in at any time, each step of the trend process will be activated. **FIND IT** invokes discovery. **OWN IT** embraces authenticity. **SHARE IT** spreads pride.
**VERA’S INSTA**
Social media is a trendsetter’s best friend – especially Instagram. Vera is the quintessential authentic influencer. Her profile is a flawless example of how authenticity drives pride, and pride fuels sharing. She’s proud of what she shares, and she shares often.
Vera’s active on her social media and posts at least once a day. Her favorite thing to post? Hot dogs, fashion and fashion inspired by hot dogs.
**MICROSITE**
Trend followers are online, and so is this campaign. VeraFrank.com is a campaign microsite that is the home for all creative production. It will also be a hub for information and locations of PR events. The site will host a photostream of social posts from users who join Vera in sharing their pride. The site is linked in Vera Frank’s Instagram profile, so as audiences discover her, they can find, follow or catch up on the campaign.
FIND IT
FIND IT will get the campaign rolling. What’s most important to trend followers? Personalized experiences, which are this phase’s strong suit. Trend followers aren’t the go-with-the-flow type; they’d much rather discover things for themselves.
30-SECOND TV
“MEETING AN OLD FRIEND”
Nothing’s more authentic than meeting with old friends. This 30-second traditional TV spot associates familiarity and rediscovery with hot dogs. A trend-following woman is preparing to meet an old friend – Vera Frank – who has brought along a couple of other old friends: hot dogs.
Watch this spot HERE.
20-SECOND PRE-ROLL
“ALL YOU SEE IS WHAT YOU WANT”
Sometimes all you see is what you really want. This 20-second pre-roll spot shows just that. Vera Frank stands alongside a trend follower showing her that embracing her authentic enjoyment over social pressure is something to be proud of.
Watch this spot HERE.
PRINT
How do trend followers overcome crushing social pressure? By being true to themselves. “Sometimes All You See” depicts exactly that: a trend-following woman seeing her honest thoughts reflected back at her. Keeping with the playful tone, this interactive print ad peels away to reveal Vera Frank’s Instagram and links to the microsite.
SPOTIFY
Music and discovery? They go hand in hand. Playlists like Discover Weekly will be “brought to you by hot dogs, served to you by Wienerschnitzel,” encouraging trend followers to go find them. A sponsored audio ad will run for free users.
Listen to it HERE.
OUT OF HOME
Vera Frank is more than just a hot dog Instagrammer. The “It’s a Vera Frank” billboard shows that she is a lifestyle icon who isn’t afraid to let her love for hot dogs show – even in her fashion. This billboard will show that trend followers can find hot dogs in all parts of their lives. The mystery of Vera Frank will drive discovery for Vera’s social page and the campaign microsite.
PR BRAND INTEGRATION
It’s the delicious Food Network show we all know and love, “Chopped.” A sponsored episode of the show will air during the FIND IT phase of the campaign, featuring hot dogs as the main ingredient in the mystery basket. Contestants will develop a recipe using the hot dog and other ingredients provided by the kitchen. After the episode airs, all recipes will be uploaded to the Food Network website for viewers to try at home. Reruns will occur once a month throughout the duration of the campaign, keeping the recipes fresh in the minds of trend followers and reaching new viewers with each airing.
OWN IT
The second phase of the campaign calls trend followers to take their discoveries and own them. The key to trend making is embracing. **OWN IT** encourages trend followers to make discovery a prominent part of their lives.
PRE-ROLL/ONLINE TV
“OWN IT”
Social pressures can be stifling. That’s why two trend-following women are pretending to enjoy their smoothie and salad. That is until they see the model for authenticity – Vera Frank – eating a hot dog and owning it. This ad is also expandable to a 30-second ad for online streaming services.
Watch this ad HERE.
INSTA INFLUENCERS
Instagram is a millennial woman’s best friend. When it comes to finding new things, influencers show trend followers what’s worth discovering. Their latest posts? All about hot dogs.
PR MUSIC FESTIVAL TENT
Music and film festivals have become a millennial woman’s paradise. From community interaction to over-the-top exhibits, festivals are a great way to make an impression on trend followers. In fact, 81 percent of millennials attend music festivals specifically to engage with a like-minded community. A hot-dog-centric experience – *The Haute Hideaway* – will be at these five events: SXSW, Hangout, Bonnaroo, Lollapalooza and Austin City Limits. The inside of the tent will be upscale while the outside walls are transparent, displaying a shameless exhibit of one’s pride. It will feature photo backdrops encouraging people to share their experiences online. Attendees will be treated to a curated menu of gourmet hot dogs and crafted cocktails, driving momentum during a time when hot dog sales are already high.
A hot dog cannon will be present at the same events in which the Haute Hideaway is placed. Between performances, hot dogs will be launched into the audience, giving free dogs to attendees and offering a pick-me-up to those anticipating their favorite artists. It will entertain a captive audience during a time when no other performances are happening.
SHARE IT
Nothing shows pride like sharing. As the final phase of the campaign, the ultimate goal of SHARE IT is to encourage trend followers to share their love for hot dogs across social platforms. This will keep hot dogs relevant, even into their traditional off-season: the winter months.
Vogue, stylish, chic. These words have never described the hot dog – until now. To kick off the SHARE IT phase of the campaign around mid-November, residents of Portland, Denver, Kansas City, Atlanta and Miami will begin to see boxes being constructed in high-traffic city centers. Digital countdown clocks will encourage pedestrians to return at a specific time without disclosing what is happening. Across the country, curtains concealing the boxes will drop simultaneously to reveal a hot-dog-inspired fashion show. This tactic is designed to earn media coverage both locally and nationally as the events unfold. The show will feature the Vera Frank fashion line to highlight pride with hot dog accessories.
OUT OF HOME
The sightseeing wall will allow viewers to see the world the way Vera Frank does: full of hot dogs. Murals will show hot dogs becoming a part of familiar landmarks. In a playful tone, it will call trend followers to take photos and share them. The wall will be placed in high traffic areas of millennial magnet cities, and will help hot-dog-loving trend followers share how they see the world.
INSTA INFLUENCERS
Let’s get real. While Vera Frank can show these women how authenticity and pride are linked, influencers prove it. Whether they are trend followers’ favorite blogger or a TV personality, influencers have a major say in what their followers adopt into their own lives.
**Macro-influencers will post twice** during the campaign. They will be the vessel showing trend followers how to embrace pride.
**Micro-influencers will post more frequently with 5 posts** throughout the campaign. Having a variety of posts will cause an increase in social media interactions.
MACRO-INFLUENCERS (5 TOTAL)
- ANTONI POROWSKI @ANTONI
- KRISTEN BELL @KRISTENANNIEBELL
- TIFFANY HADDISH @TiffanyHaddish
MICRO-INFLUENCERS (20 TOTAL)
- SABRINA @DINNERTHENDESSERT
- ALISON ROMAN @ALISONEROMAN
- JEREMY JACOBOWITZ @BRUNCHBOYS
INSTA GIVEAWAYS
Giveaways – some of the most interactive social media posts – allow micro-influencers to engage with their followers. This move is designed to place hot dogs in an entirely different context. After liking, following and commenting, the audience will be entered to win one of the following packages: **Treat Yo’ Self**, Oktoberfest, Summer BBQ or Wine & Dine. Over the course of the **OWN IT** and **SHARE IT** phases, there will be a total of 20 giveaways.
OUT OF HOME
Everyone likes to be noticed. The “To The Girl Who…” billboards address and embrace women who have found and are owning their hot dogs, and call others to join the movement. These billboards create a sense of relatability and personality for Vera Frank. The personal and out-of-the-box remarks will cut through standard out-of-home ads to engage the audience and drive them to share Vera’s messages.
MEDIA PLAN
LET’S GET DOWN TO BUSINESS.
A $25 million budget drives a 12-month national campaign starting in March 2020. The campaign will kick off right before summer, placing hot dogs top of mind for consumers and continuing until the end of winter. The budget accounts for media, public relations, social media and production costs.
MEDIA OBJECTIVES
+ Execute a yearlong national advertising campaign using a $25 million budget to reach the target audience of women ages 25-34.
+ Use the most effective media mix to implement the three-stage strategy: FIND IT, OWN IT, SHARE IT.
+ With traditional and digital media, establish a minimum 80 percent reach among trend followers with an average frequency of 12 during the yearlong campaign.
STRATEGY
Remember the trend followers? It’s time to give them the go-ahead to embrace hot dogs. The best way to do this? Digital, traditional, non-traditional and owned media. Using a strategically crafted media schedule, the campaign will consistently reach the target.
Here are the specifics: focusing on influencer marketing will increase trend followers’ awareness, consideration and advocacy for hot dogs. Increased pride and sharing on social media will elevate the perception of hot dogs.
TACTICS
The goal? Meet trend followers where they’re already discovering new things. The campaign’s media buys are influenced by factors in the trend followers’ lifestyles and media usage as defined by Commspoint Media Software, Simmons Research and Kantar Media.
FIND IT: Heavily reach and encourage trend followers to try (or retry) a hot dog.
OWN IT: Inspire them to be proud of eating hot dogs.
SHARE IT: Get them to share positive messages about hot dogs with others on social media.
MEDIA SCHEDULE
TIMING IS EVERYTHING
A March 2020 launch is the best way to build momentum before hot dog sales peak in the summer; the campaign will utilize a pulsing strategy to carry that momentum throughout the year.
FIND IT will run continuously throughout the course of the campaign.
OWN IT will kick off in July when hot dog sales peak, making it a perfect time for the audience to own its dog.
SHARE IT will start in November to continue the buzz about hot dogs during the winter months.
IT’S ALL ABOUT IMPRESSIONS
TOTAL IMPRESSIONS: 958,539,457
DIGITAL MEDIA
ONLINE TV
COST: $4,084,000
IMPRESSIONS: 83,900,504
16.3% OF BUDGET
This advertisement will be seen in both FIND IT and OWN IT. Women are pulling the plug on their cable habits. Currently, 61 percent of trend followers primarily watch TV through streaming services. The target audience will see the video advertisements on Hulu.com, ABC.com, NBC.com, CBS.com and MTVu. The plan will use programmatic scheduling to ensure trend followers are seeing the video ads.
ONLINE AUDIO
COST: $1,654,000
IMPRESSIONS: 48,137,322
6.6% OF BUDGET
Online audio will be heard during the FIND IT phase. Spotify is the dominant player in the music streaming industry. Of Spotify’s 207 million users, 53.6 percent are not Premium subscribers – so they hear advertisements during their listening sessions. This campaign will also sponsor Spotify’s Discover Weekly playlist to reach a heavily engaged audience for both free and Premium subscribers.
VIDEO PRE-ROLL
COST: $2,381,000
IMPRESSIONS: 49,398,872
9.5% OF BUDGET
Video pre-rolls provide opportunities for brand awareness during FIND IT and OWN IT. Viewers find 15-20 second pre-roll advertisements 3.5 times less interruptive than other video advertisements. These videos cannot be skipped and are effective for brand recall and incorporating calls to action. The pre-roll ads will be seen on YouTube, Facebook and Twitter.
PAID SEARCH
COST: $810,000
IMPRESSIONS: 19,509,439
3.2% OF BUDGET
Paid search advertisements are effective for people seeking out new ways to incorporate hot dog recipes into their everyday lives. Ads will be active throughout the campaign on Google, Yahoo and Bing. Keywords like hot dogs, fun recipes, Wienerschnitzel, Vera and Vera Frank will point to the microsite.
MICROSITE
COST: $20,000
VERAFRANK.COM
The microsite will be live throughout the campaign. It will serve as a hub for all content. Shared social posts of the audience’s love for hot dogs will be projected and curated into a nationwide social stream.
PRODUCTION
COST: $280,000
These funds will be used to finance the production of all creative executions. They will also cover any unanticipated expenses.
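The percentage shown beside each digital tactic corresponds to its cost as a share of the $25 million budget. A minimal sketch confirming that reading (the interpretation of the percentages as budget shares is inferred from the listed figures, not labeled in the plans book):

```python
# Verify that each digital tactic's listed percentage equals its cost
# divided by the $25 million campaign budget, rounded to one decimal.
# The budget-share interpretation is an inference from the figures.
TOTAL_BUDGET = 25_000_000

digital_costs = {
    "online_tv": 4_084_000,      # listed as 16.3%
    "online_audio": 1_654_000,   # listed as 6.6%
    "video_preroll": 2_381_000,  # listed as 9.5%
    "paid_search": 810_000,      # listed as 3.2%
}

for tactic, cost in digital_costs.items():
    share = round(100 * cost / TOTAL_BUDGET, 1)
    print(f"{tactic}: {share}% of budget")
```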
TRADITIONAL MEDIA
TELEVISION
COST: $2,249,000
IMPRESSIONS: 16,477,922
Advertisements on cable television will be seen during FIND IT. During weekends, 86.8 percent of trend followers spend over an hour watching television. The advertisement will run frequently on E! Network, Freeform, HGTV’s “Fixer Upper” and “Property Brothers.”
PRINT
COST: $2,646,000
IMPRESSIONS: 32,424,459
Print advertisements will run during the FIND IT phase. Full-page, four-color (CMYK) magazine advertisements will appear in six of the 12 issues of Vogue, Magnolia Journal and Rachael Ray Every Day. The target audience reads these magazines to find new information. Each magazine has high circulation, vast readership among the target and index numbers ranging from 189-248.
OUT OF HOME
COST: $1,885,000
IMPRESSIONS: 57,245,489
A quarter of millennial women are heavily influenced by out-of-home ads. To capitalize on this, billboards will be placed near high-traffic intersections in select top-20 DMA cities. Efforts from the FIND IT phase will generate awareness. Once the SHARE IT phase begins, billboards will encourage engagement.
NON-TRADITIONAL
PUBLIC RELATIONS
COST: $4,273,000
PR tactics will provide personal experiences to trend-following women, giving them the opportunity to find and embrace hot dogs in their own ways. Approximately 34 percent of event attendees say they would post about their experiences on social media. These events will create earned exposure within the target audience.
SOCIAL ADS
COST: $3,291,000
IMPRESSIONS: 431,009,565
Social advertisements will feature weekly boosted social posts from Vera Frank and macro-influencers on Facebook and Instagram. These platforms offer an influential reach – 65.7 percent of trend followers spend more than two hours a day on social media. Social content will increase traffic to the microsite, provide information about PR events and include strong calls to action for the OWN IT and SHARE IT phases of the campaign.
INFLUENCERS
COST: $1,500,000
IMPRESSIONS: 220,435,875
To reach trend followers online, this campaign will utilize both macro- and micro-influencer marketing. Posts will be curated by each influencer and serve as the force behind giving permission to trend followers to share their love of hot dogs. Influencer marketing works because it improves brand advocacy by 94 percent.
EVALUATION
THIS CAMPAIGN WORKS
Team 401 reached out to trend followers in order to evaluate this campaign. One hundred twenty-one women took part in a nationwide online survey and 21 women participated in a local intercept survey. Respondents had the opportunity to view creative executions before answering questions.
35% decrease in negative sentiment toward hot dogs after viewing the content.
“I SHOULD BE PROUD TO EAT HOT DOGS.”
JESSICA
8% increase in those who are “very likely” to purchase hot dogs in the next month.
“PEOPLE SHOULD EAT HOT DOGS WITH PRIDE INSTEAD OF SHAME.”
COURTNEY
47% said they feel more confident sharing that they eat hot dogs.
“HOT DOGS ARE COOL AND MODERN.”
OLIVIA
CAMPAIGN OBJECTIVES
+ Increase primary demand of hot dogs by 4 percent.
+ Increase overall hot dog consumption by 4 percent, which is the equivalent of roughly 800 million hot dogs.
+ Increase positive social sentiment from 28 percent to 40 percent.
+ Decrease negative social sentiment from 20 percent to 15 percent.
MEASUREMENT
+ Hot dog sales
+ Surveys
+ Social listening using Digimind
POTENTIAL RESULTS
+ Increase in the intent to purchase hot dogs
+ Increase in the overall consumption of hot dogs
+ More women taking pride in the act of eating hot dogs
+ An overall elevated perception of the hot dog
Well, that’s a wrap, and what a journey it’s been. We would like to thank Wienerschnitzel and the American Advertising Federation for this opportunity. We’re not only grateful for the chance to compete, but also for the chance to send an empowering message to women.
SINCERELY,
TEAM 401
BLOOD FLOW CONTROL SYSTEM AND METHODS FOR IN-VIVO IMAGING AND OTHER APPLICATIONS
Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Priority: 18.10.2013 CH 17872013
Date of publication of application: 24.08.2016 Bulletin 2016/34
Proprietor: Gutzeit, Andreas
8044 Zürich (CH)
Inventor: Gutzeit, Andreas
8044 Zürich (CH)
Representative: Charrier Rapp & Liebau
Patentanwälte PartG mbB
Fuggerstrasse 20
86150 Augsburg (DE)
References cited:
EP-A1- 1 938 751 WO-A1-2013/110929
WO-A2-01/74247 RU-A- 2009 118 031
US-A1- 2003 062 041
• SEBASTIAN LEY ET AL.: "MRI Measurement of the Hemodynamics of the Pulmonary and Systemic Arterial Circulation: Influence of Breathing Maneuvers", AMERICAN JOURNAL OF ROENTGENOLOGY, vol. 187, no. 2, 1 August 2006 (2006-08-01), pages 439-444, XP055080600, ISSN: 0361-803X, DOI: 10.2214/AJR.04.1738
• KOWALLICK, J. T. ET AL.: "Real-time phase-contrast flow MRI of the ascending aorta and superior vena cava as a function of intrathoracic pressure (Valsalva manoeuvre)", BR. J. RADIOL., vol. 87, 20140401, 16 September 2014 (2014-09-16), pages 1-7, XP009181735,
Note: Within nine months of the publication of the mention of the grant of the European patent in the European Patent Bulletin, any person may give notice to the European Patent Office of opposition to that patent, in accordance with the Implementing Regulations. Notice of opposition shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).
Description
FIELD OF THE INVENTION
[0001] The present invention is defined in the claims. The disclosure relates to blood flow control systems, devices and methods, in particular to an imaging system for the human body, such as x-ray and related tomographic imaging systems.
BACKGROUND OF THE INVENTION
[0002] Images of the interior of the human body are a long-established tool for providing graphic information in form of pictures, prints and screen displays for a subsequent interpretation by skilled practitioners.
[0003] For many purposes detection of blood flow related conditions is an important part of such images. In order to improve the detection of blood flow conditions it is known that injection of a contrast medium into the blood stream can add information.
[0004] A well-known example of such methods is computer tomography (CT) angiography, which is widely accepted as a standard method for the examination of patients with suspected pulmonary embolism and other vascular and parenchymal diseases. The advantages of CT are obvious: it is widely available, the method is rapid, and it is highly sensitive to nodules, emboli or clots in the blood stream.
[0005] To increase the quality of the images generated by the CT scanner, it is further known that administration of a contrast agent during the scanning process enhances the vascular compartment and other fluids in the body. The agent is usually administered via venous access in the upper extremity, such as via the back of the hand or via an elbow vein. Alternatively, it is also known to inject contrast material in the lower extremities. It is known that the contrast-enhanced blood flows through the superior vena cava (SVC) into the right atrium, while at the same time a volume of non-contrasted blood reaches the right atrium from the inferior vena cava (IVC). Evidently, the proportion of non-contrasted blood of the IVC in relation to the contrast-enhanced SVC influences the dilution of contrast medium in the right atrium/ventricle, left atrium/ventricle and in the pulmonary artery (PA) and all subsequent arteries (e.g. coronary, carotid and brain arteries, and more distant arteries), in an effect known as transient interruption of the contrast bolus. This dilution potentially influences the diagnostic performance and quality of the entire investigation.
[0006] Several studies have been published on the effect of ventilatory activity on the blood flow as listed in the list of references below.
[0007] US-B 6 631 716 suggests setting a defined volume of the lung despite the respiration of a patient. No coordination of inhaling or exhaling with taking an MRI or CT scan is described, and a contrast substance is not mentioned.
[0008] A method for improving lung delivery of pharmaceutical aerosols is disclosed in WO 01/74247 A2, wherein a real-time imaging technique is used to investigate the effect of airway structures on the administration of respiratory drugs delivered by oral inhalation.
[0009] In European patent application EP 1 938 751 A1, an X-ray photography apparatus is disclosed which comprises an aspiration flow amount measuring section for measuring an aspiration flow amount of an examinee.
[0010] In RU 2009 118 031 A, a device for increasing air pressure in the larynx is disclosed. The device is intended for the diagnosis of diseases of the larynx and hypopharynx when carrying out computer tomography. It contains a hollow tube for exhaling air and a manometer with a scale and a removable mouth piece, wherein the tube for exhaling air is provided with a metal plate which comes into contact with the needle of the manometer when the intrathoracic pressure reaches a predefined value.
[0011] Document WO 2013/110929 A1 (1 August 2013) discloses a method according to the preamble of claim 1.
SUMMARY OF THE DISCLOSURE
[0012] In view of the above, it is seen as an object of the disclosure to provide a specific dedicated device and its use, a scanning system and methods with improved and standardized flow accuracy and enhancement in the control of blood flow, dilution and enhancement properties for imaging of contrast-enhanced blood flow (perfusion, first-pass enhancement, vascular supply of tumors, lesions and various tissues), particularly in relation to the vascular flow (perfusion, first-pass enhancement, arterial enhancement, improved detection of thromboembolic material within blood vessels, vascular space and supply of lesions, tumors and normal tissue) through the pulmonary artery or other arteries and veins as well as other vessels distal to the heart.
[0013] Hence, according to an aspect of the disclosure, there is provided a method of controlling and/or standardizing the distribution of a substance in the human body comprising the steps of applying a respiratory resistance device to the respiratory system of the body, and injecting the substance into the body and controlling or standardizing the distribution of the substance in the body through the selection of respiratory states characterized by a controlled interaction between the respiratory system of the body and the respiratory resistance device.
[0014] In another aspect, there is provided a method of acquiring in-vivo a series of images of interior parts of the human body, using an imaging system and including the steps of positioning a body relatively to the imaging system, applying a respiratory resistance device to the respiratory system of the body, and performing the image acquisition step during an inhalation, inspiration or suction phase, during which the body exercises suction against a resistance as provided by the respiratory resistance device. Alternatively or in addition, the image acquisition step is performed during the exhalation phase.
[0015] The imaging system can be a scanner using an x-ray imaging method, a scanner using magnetic resonance imaging or ultrasound imaging method including for example scanners for angiography, CT scanners, MR and positron emission based variants such as PET/CT or SPECT/CT, PET/MRI or ultrasound scanners.
[0016] The respiratory resistance device includes preferably an inner volume with an opening or openings in direction towards the physiological openings (nose, mouth) of the respiratory system of the body and essentially no or only small openings or leaks towards the environment. The dimensions of the volume and the openings are selected such that a normal untrained patient can achieve an underpressure (in the case of suction or inspiration against resistance) or an overpressure (in the case of exhalation against resistance or Valsalva) in the inner volume of the device and, preferably, maintain such pressure for the duration of the image acquisition, e.g. preferably between 1 and 60 seconds and preferably between 5 and 45 seconds and preferably between 5 and 30 seconds. The preferred pressure range for such an underpressure is -1 up to -80 mmHg and preferably up to -60 mmHg and preferably up to -40 mmHg, more preferably -8 to -20 mmHg. For overpressure a preferred range is +1 to +80 mmHg, more preferably +10 to +30 mmHg with the pressure 0 mmHg being gauged to equal atmospheric pressure.
[0017] In a preferred embodiment, the respiratory resistance device includes a replaceable and disposable mouth piece to connect the inner volume of the device with the respiratory system of the body. The mouth piece can be for example a tube or a modified tube, e.g., with an elliptical or round cross-section or with a specifically designed end for ease of use when applied to the mouth. However, in cases where it is preferred to include all openings of the respiratory system of the body, the mouth piece can also be shaped as a mask.
[0018] It is preferred that a mouth piece fits closely and thus tightly with the resistance device. A mouth piece may also fit with defined spaces for the exit or entry of air between mouth piece and resistance device. A mouth piece may as well be formed integral with the resistance device.
[0019] In a further preferred embodiment, the respiratory resistance device includes or is coupled to a sensor for measuring a parameter indicative of the pressure inside the inner volume of the device. The measurement can be displayed in a numerical form or as acoustic or optical signals or symbols, preferably indicating in operation whether the inhaling/inspiration/suction or exhaling/expiration/valsalva, respectively, is to be increased or decreased in intensity to achieve an optimal and/or steady-state pressure.
[0020] The respiratory resistance device is best operated in parallel to and in conjunction with the image acquisition of the image acquisition system, and preferably also in parallel and in conjunction with an injection system for injecting a contrast medium or other diagnostic substance into a venous vessel of the body. The disclosed device can however also be used without injection of a supplementary contrast agent. When performed with contrast agent administration, injection into the upper or lower extremity is used in the case of an inhaling or suction action, and injection into vessels of the lower extremity in the case of an exhaling or Valsalva action. The timings of these two or three parallel operations are chosen such that all operations are concurrently effective (well coordinated outside and inside the body) during the actual image acquisition or any other administration step.
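The timing requirement of paragraph [0020] can be illustrated with a minimal scheduling sketch; the `Interval` type, the function name and all timing values are illustrative assumptions and do not appear in the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interval:
    """A time window in seconds (e.g. breathing maneuver, injection, scan)."""
    start: float
    end: float

def concurrent_window(*intervals: Interval) -> Optional[Interval]:
    """Return the window during which all operations overlap, or None if they never do."""
    start = max(iv.start for iv in intervals)
    end = min(iv.end for iv in intervals)
    return Interval(start, end) if start < end else None

# Hypothetical timings: suction maneuver held for 0-30 s, contrast injection
# running 2-12 s, image acquisition 5-15 s; all three overlap from 5 s to 12 s.
window = concurrent_window(Interval(0, 30), Interval(2, 12), Interval(5, 15))
```

In practice the operator would choose injection and scan start times so that this overlap window covers the whole acquisition.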
[0021] In a variant the respiratory resistance device and the image acquisition device are linked. The link can be implemented in form of a data communication link or in form of a partial or full incorporation of the elements of the respiratory resistance device into the image acquisition system and/or injection system.
[0022] Further aspects of the disclosure include the respiratory resistance device, a combination of respiratory resistance device and the image acquisition system, preferably in combination with an injection system, and any images acquired by the use of the above methods and/or devices or combination of devices and scanning systems.
[0023] The method of the invention is particularly useful in improving the enhancement and image acquisition related to various steps of angiography of the pulmonary arteries or other arteries and veins in the rest of the body (perfusion, first-pass vascular enhancement, vascular supply of tumors, lesions and various tissues, detection of thromboembolic material).
[0024] The disclosed subject-matter can be further used in methods and devices for administration, preferably intravenous, of a substance in order to control or standardize the distribution and/or concentration of such a substance in the body.
[0025] The respiratory resistance device can be used in general to influence via defined respiratory states the distribution and/or standardization of blood supply either from the upper, superior vena cava or lower, inferior vena cava according to the respective requirement of any medical or technical conditions such as the task to increase blood supply from the respective vessel to the right atrium of the heart or enhance the concentration of an injected substance in the blood flow in the pulmonary arteries or in vessels beyond the pulmonary arteries. This can be extended to applications such as drug injection through the upper or lower peripheral veins, invasive procedures, surgery or any blood supply related indication.
[0026] The methods, the devices and systems and their use are in particular able to control and standardize blood flow so as to achieve a high contrast density within arteries and/or veins, such as pulmonary vessels, brain vessels, vessels of visceral organs or vessels of the extremities or other vessels within a human or animal body. Standardized blood flow increases the contrast density in the above vessels, increasing the quality of images taken with imaging systems such as those mentioned. On the other hand, the methods, devices and systems and their use may make it possible to reduce the amount of contrast substances.
[0027] The above and other aspects of the present disclosure together with further advantageous embodiments and applications of the invention are described in further details in the following description and figures.
BRIEF DESCRIPTION OF THE FIGURES
[0028]
FIG. 1A is a schematic cross-section of a respiratory restriction device in accordance with the invention;
FIG. 1B is a schematic cross-section of a variant of the respiratory restriction device of FIG. 1A;
FIG. 1C shows a schematic cross-section of another simplified respiratory restriction device in accordance with the invention;
FIG. 2 illustrates schematically different respiratory states during an image acquisition;
FIG. 3 is a graph of test results indicating mixing ratios between flow from the vena cava superior vs flow from the vena cava inferior depending on respiratory states;
FIG. 4 illustrates steps in accordance with the invention.
DETAILED DESCRIPTION
[0029] An exemplary respiratory resistance device 10 is shown in FIG. 1A. The device has a main body 11 of resilient material such as Teflon® or stainless steel or other similar materials. The main body provides a cap and holder for a disposable mouthpiece 12. The mouth piece and the main body are connected to each other by a simple form fitting attachment so that the mouth piece can be easily attached and removed from the main body by a straight insertion and extraction movement, preferably without involving a twist or use of a tool. Any similar form or attachment method might be suitable.
[0030] The mouth piece 12 has an essentially tubular, hollow shape with a proximate opening 121 adapted for insertion into a patient's mouth and a distal opening 122 providing a flow connection into the interior of the main body 11.
[0031] It should be however clear that materials, dimensions and shapes of the main body 11 and the mouthpiece can vary widely while still maintaining the function of providing resistance against free breathing. For example, it is possible to shape the proximate opening more ergonomically or give the cross-section a more elliptical circumference. Such and similar modifications can, however, be regarded as being well within the scope of an ordinarily skilled person.
[0032] Further mounted onto the main body 11 is a pressure sensitive device 13, which can be for example a piezoresistive transducer integrated with processing circuits onto a silicon substrate. Such sensors are commercially available for example as MPXV7002 from Freescale Semiconductor Inc.
[0033] The sensor 13 is connected to a control signal generator 14. The control signal can be a numeric display of the pressure in the interior of the main body as shown. However, the control signal can alternatively or in addition be an acoustic signal or an optical signal selected according to predefined pressure thresholds or ranges. The respiratory resistance device 10 of FIG. 1A can also omit the pressure sensitive device 13 and will still work in this very simple form.
[0034] Thus the control signal generator 14 can give a patient or an operator of a scanning or injection apparatus feedback on the ventilatory activity or respiratory state of the patient during the image acquisition by the scanner or during a controlled injection of a substance. The respiratory device, the methods connected therewith and its use are able to control and standardize blood flow within a patient's veins and arteries during CT or MRI or other diagnostic procedures. In particular it can be indicated whether or not a patient is in the desired ventilatory activity or respiratory state, or whether the patient's breathing should be adapted or even changed to reach the desired state, e.g. in the case of inhalation/suction whether the patient should inhale/suck more strongly, less strongly, or hold steady. It is for example possible to use a programmable microcontroller (not shown) as part of the control signal generator 14 so as to control a display or color coded lights depending on the parameter as measured by the sensor 13, as feedback to patient and/or operator.
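As a minimal sketch of the feedback logic described above, a microcontroller in the control signal generator 14 might classify the measured pressure against a target suction range; the function name and the exact thresholds (taken from the preferred -8 to -20 mmHg window of paragraph [0016]) are illustrative assumptions, not part of the disclosure:

```python
def breathing_feedback(pressure_mmHg, target_low=-20.0, target_high=-8.0):
    """Classify a measured chamber pressure against a target suction range.

    Negative values mean underpressure (inhalation against resistance).
    The default range follows the preferred -8 to -20 mmHg window of the
    description; both bounds are parameters, not fixed by the device.
    """
    if pressure_mmHg > target_high:
        return "inhale more strongly"   # suction too weak
    if pressure_mmHg < target_low:
        return "inhale less strongly"   # suction too strong
    return "hold steady"                # within the desired range
```

The returned string could drive the numeric display, color-coded lights or the simplified symbols of FIG. 1B.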
[0035] Optionally the sensor 13 can be connected to a synchronizing element 15 that is also linked to the image acquisition system. The link can be for example a wired, a wireless or an optical link for data transmission. Such an element can be used to combine information from the ventilatory or breathing activity of the patient (device) with the images acquired by any image acquisition system. This would enable a manual or automated selection of images acquired during the desired state of ventilatory activity even where this activity is fluctuating (around the desired state) during the scan. For example the synchronizing element can include a display of pressure values along with the date and temporal information of the image acquisition. Corresponding time stamps may be included on the acquired image.
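The selection of images acquired during the desired respiratory state, as enabled by the synchronizing element 15, could be sketched as follows; the data layout and function name are assumptions for illustration only:

```python
def select_images(image_log, pressure_log, target_low, target_high):
    """Return the ids of images whose nearest-in-time pressure sample
    lies within [target_low, target_high] mmHg.

    image_log:    (timestamp_s, image_id) pairs
    pressure_log: (timestamp_s, pressure_mmHg) pairs
    """
    selected = []
    for t_img, image_id in image_log:
        # pick the pressure sample recorded closest to the image time stamp
        _, p = min(pressure_log, key=lambda sample: abs(sample[0] - t_img))
        if target_low <= p <= target_high:
            selected.append(image_id)
    return selected

# Hypothetical logs: three images, pressure sampled once per second (mmHg).
images = [(1.0, "img1"), (2.0, "img2"), (3.0, "img3")]
pressures = [(1.0, -12.0), (2.0, -4.0), (3.0, -15.0)]
kept = select_images(images, pressures, target_low=-20.0, target_high=-8.0)
```

Here the second image falls outside the desired underpressure window and would be discarded or marked, while the other two are retained.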
[0036] In the example of FIG. 1B the main body 11 includes a small opening 111 to the exterior to allow for a limited air flow into or from the interior and hence into or out of the patient's respiratory system. The dimensions of the opening 111 are in such a case selected so as to
provide sufficient air flow resistance or restriction to prevent normal (abdominal) breathing. Small openings allowing a controlled air flow can be advantageous in order to achieve a controlled and steady state inflow of air or other respiratory gases (oxygen, xenon or other). Such an opening 111 or multiple openings may alternatively or additionally be present on the mouth piece or may be formed by the connection means of mouth piece and main body.
[0037] The control signal generator 14 of the example of FIG. 1B is designed as an optical indicator showing a patient in simplified symbols whether to increase or decrease the breathing efforts.
[0038] However, it is worth noting that the respiratory resistance device does not necessarily require any electronic components or any sensors to perform the function of an air flow resistance or restriction. If, for example, a simpler, more cost efficient device is required, the main body 11 can be embodied or replaced, respectively, by a simple cap over the opening 122 of the mouth piece as shown in FIG. 1C. If parts of the cap are designed as flexible or moveable, then the ventilatory activity can be monitored by the movement or deformation of such parts. A thin membrane in the cap or elsewhere along the tube would for example bulge in or out depending on the pressure generated by the patient during in- or exhaling as indicated in FIG. 1C by the dashed lines. Other examples can include a movable object or column of liquid placed in a tube and moving in dependence of the ventilatory activity of the patient. Such variants would still be sufficient to implement examples of the present disclosure.
[0039] The tube or mouth piece can be adapted for use with nasal openings or with both mouth and nose. In the latter cases, it is advantageous to use a mask type connector as mouth piece between the main body 11 of the respiratory resistance device 10 and the respiratory system of the patient instead of a tubular connector. The mask would be typically designed (e.g. with an elastic lip at its circumference) to provide sufficient air tightness to still function as a resistance against free breathing. It is further worth noting that the respiratory resistance device is not intended to provide breathing assistance during the scan as may be applied to support breathing for patients with significant respiratory failures. Thus, the known breathing masks connected to breathing support elements such as bellows or gas supply are not understood as respiratory resistance device within the meaning of the present invention.
[0040] It is further contemplated to integrate the respiratory resistance device 10 into an image acquisition system used to acquire images of the interior of the patient’s body. In such a variant at least part of the main body 11, in particular the sensor 13, the control signal generator 14 and/or the synchronizing element 15 and related circuitry would be located within the housing of the image acquisition system and for example connected to the mouth piece by means of an elongated, essentially airtight flexible tube. Such an integration has the advantage of reducing the number of separate parts in an area which best contains only essential equipment.
[0041] In some applications, the respiratory resistance device 10 is operated typically simultaneously with the operation of the image acquisition system. The image acquisition system can be a computer tomography (CT) scanner, a magnetic resonance imaging (MRI) device, an angiography system, a PET/CT or PET/MRI scanner, any ultrasound imager or other similar imaging devices.
[0042] In such applications the patient is positioned within the image acquisition system with the respiratory resistance device applied to either mouth and/or nose. To enhance the contrast of any images acquired, a contrast medium, for example iodine based contrast fluid, ultrasound contrast agent or Gadolinium based contrast material, is injected through a venous vessel of the patient. The respiratory resistance device, the methods and systems may be operated together with the injection system for injecting the contrast enhancing substance.
[0043] Details of a method of acquiring in-vivo images of the interior of a human or animal body in accordance with an example of the present invention are described in the following making reference to FIG. 2.
[0044] In FIG. 2 there is shown a patient 20 being positioned horizontally within the tunnel of a scanner 21, which can be for example a CT scanner or an MRI scanner. A respiratory resistance device 10 in accordance with an example of the disclosure is placed on the mouth of the patient 20. An injection system for administering a contrast fluid is connected to a venous vessel of the patient but not shown as such systems are well known in the state of the art.
[0045] The three panels of FIG. 2 illustrate three different respiratory states of the patient as can be registered by the respiratory resistance device 10. The enlarged detail shows a simplified representation of the human heart together with the blood flow through the vena cava superior SVC (entering the right atrium from above) and through the vena cava inferior IVC (entering the right atrium from below).
[0046] The respiratory states are characterized in the figure by arrows indicating predominant direction of air or blood flow or diaphragm movements including movements of the lung, respectively, on the one hand and by the meter 14 readings as displayed on the other.
[0047] The upper panel represents the basic conditions under which, for example, PA images are presently acquired. It is characterized herein as free breathing with no respiratory resistance device 10 in place. The air is moved into and out of the respiratory system of the human body 20 as indicated by the arrows in the area of the head. At the same time the thorax moves up and down as indicated by the arrow in the chest region of the patient 20. The breathing is typically accompanied by movement of the diaphragm as indicated by the arrows in the abdominal region of the patient 20. A flow or pressure measurement 14 shows a swing to and fro between positive and negative values, representing inflow (suction) or outflow (Valsalva) of air, or a swing between under- and overpressure as would be measured when using the respiratory resistance device during this state of free breathing.
[0048] The respective blood flows through the IVC and SVC are as normal, indicated by the two arrows of equal line thickness in the enlarged view. No change or contrast enhancement is expected in this respiratory state.
[0049] In the middle panel a respiratory state characterized as Valsalva maneuver is illustrated. In this state the patient breathes into the closed or flow restricted inner volume of the respiratory resistance device 10. The arrows in the head region indicate the direction in which the air flow is directed. The thorax moves inwards and the diaphragm upwards towards the thorax. The sensor registers this Valsalva state as overpressure typically in the range of 1 to 100 mbar for an untrained patient attempting to maintain a constant pressure for the period of the scan between 1 and 60 seconds, preferably between 5 and 45 seconds.
[0050] Again a contrast agent or any type of dye can be injected into the patient's body 20 shortly before and/or during the Valsalva state. A change from normal in the respective flows through the IVC and SVC can be observed as indicated by the arrow in the IVC being thicker than the respective arrow in the SVC. This indicates that the Valsalva state favors the venous blood flow from the extremities of the lower body. This provides an indication that by administering a contrast medium into a venous access in a lower extremity during the image acquisition step an improved and/or more stable contrast enhancement can be achieved.
[0051] To achieve this enhancement it can be necessary to maintain the Valsalva state during the scan acquisition and even during the injection or, conversely, to interrupt the scanning process during periods in which the patient exits the Valsalva state, or to discard or mark images obtained outside the optimal Valsalva state. For such operations the monitoring as provided by the respiratory resistance device is advantageous.
[0052] In the lower panel of FIG. 2 a respiratory state is illustrated characterized as breathing against resistance or anti-Valsalva maneuver. In this state the patient sucks air from the closed or flow restricted inner volume of the respiratory resistance device 10. Again the arrows in the head region indicate the direction in which the air flow is directed. The thorax moves outwards and the diaphragm downwards towards the lower body. The sensor 13 registers this state as underpressure, typically in the range of -1 to -60 mmHg for an untrained patient attempting to maintain a constant pressure for the period of the scan between 1 and 60 seconds, preferably between 5 and 45 seconds.
[0053] Again a contrast fluid or another substance can be injected into the patient's body 20 shortly before and/or during the anti-Valsalva (suction against resistance) state. A change from normal in the respective flows through the IVC and SVC can be observed as indicated by the arrow in the SVC being thicker than the respective arrow in the IVC. This indicates that the anti-Valsalva state favors the venous blood flow from the extremities of the upper body. This provides an indication that by administering the contrast medium into a venous access in an upper extremity or a lower extremity during the image acquisition step an improved and/or more stable contrast enhancement can be achieved. To achieve this enhancement it can be necessary to maintain the anti-Valsalva state for the duration of the scan or, conversely, to interrupt the scanning process during periods in which the patient exits the anti-Valsalva state, or to discard or mark images obtained outside the anti-Valsalva state. Again, the presence or absence of such states is enabled and monitored by the respiratory resistance device 10.
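Verifying that a Valsalva or anti-Valsalva state was maintained for the whole scan, as discussed in the two preceding paragraphs, amounts to checking a pressure trace for a sufficiently long in-range run. This minimal sketch assumes regularly sampled, time-sorted readings; the function name and sample values are invented for illustration:

```python
def maneuver_held(trace, low, high, required_s):
    """Return True if the pressure stayed within [low, high] mmHg for at
    least required_s consecutive seconds.

    trace: (timestamp_s, pressure_mmHg) samples, sorted by time.
    """
    run_start = None
    for t, p in trace:
        if low <= p <= high:
            if run_start is None:
                run_start = t          # a new in-range run begins
            if t - run_start >= required_s:
                return True            # held long enough for the scan
        else:
            run_start = None           # patient left the target range
    return False

# Hypothetical suction trace, one sample per second, steady at -12 mmHg:
steady = [(float(t), -12.0) for t in range(8)]
held = maneuver_held(steady, low=-20.0, high=-8.0, required_s=5.0)
```

An operator (or the synchronizing element 15) could use such a check to decide whether to accept, repeat or mark a given acquisition.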
[0054] Test results using various standardized breathing states or maneuvers and flow-sensitive MR phase contrast techniques in the SVC and IVC, imaged in the supine position on a 1.5 Tesla MRI unit (Achieva 1.5 T, Philips Healthcare, Best, The Netherlands), are shown in FIG. 3. An 8-channel torso coil (Philips Healthcare) covering the entire chest allowed the regular acquisition of two sets of heart-triggered dynamic phase contrast (PC) images (TR 50 msec and TE 4 msec; slice thickness 8 mm, flip angle 15°, velocity encoding 100 cm/s; voxel size 1.9 x 2.5) in the axial section of the SVC and IVC.
[0055] In order to guarantee standardized and reproducible breathing, an MR-compatible respiratory resistance device was used for controlling and monitoring the respiratory pressure and blood flow during the entire maneuver. Besides the newly defined breathing method "suction against resistance", previously defined techniques such as the Valsalva maneuver, apnea after end of inspiration, apnea after end of expiration and free breathing were also investigated, allowing comparison with known studies (see references).
[0056] The capital letters in FIG. 3 indicate the respiratory state or the interaction with the respiratory resistance device, where used. IVC/SVC ratios for stroke volumes (white boxes) and flux (grey boxes) are shown for free breathing (A), end of inspiration position with breath hold (B), end of expiration position with breath hold (C), Valsalva maneuver at +10 mm Hg (D), Valsalva maneuver at +20 mm Hg (E), Valsalva maneuver at +30 mm Hg (F), suction maneuver at -10 mm Hg (G), and a similar suction maneuver at -20 mm Hg (H). Boxes show the median and the 25th and 75th percentiles; whiskers show minimum and maximum values. The optimal ratio is achieved in the suction mode with thoracic underpressure, but the standard deviations are higher, demonstrating more unstable conditions. Other states such as the Valsalva maneuver can be considered too, but show a much reduced effect under these circumstances.
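The box-plot summary of FIG. 3 (median with 25th and 75th percentiles of per-subject IVC/SVC ratios) can be reproduced for any breathing state with the Python `statistics` module; the flux values below are invented for illustration and are not the study data:

```python
from statistics import quantiles

def summarize_ratios(ivc_flux, svc_flux):
    """Per-subject IVC/SVC flux ratios with their 25th/50th/75th percentiles,
    mirroring one box of the FIG. 3 box plot."""
    ratios = [i / s for i, s in zip(ivc_flux, svc_flux)]
    q1, med, q3 = quantiles(ratios, n=4)  # 25th, 50th, 75th percentiles
    return ratios, (q1, med, q3)

# Hypothetical flux values (arbitrary units) for five subjects in one state:
ratios, (q1, med, q3) = summarize_ratios([60, 55, 70, 65, 50],
                                         [30, 25, 35, 32, 20])
```

The same summary computed per breathing state (A through H) would yield the boxes of the figure.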
[0057] It should be noted that the method and respiratory device as described in the example using an MRI scanner above may work equally well or even better in connection with a CT scanner or other imaging or diagnostic techniques.
[0058] The steps performed on a patient are summarized in the flow chart of FIG. 4. However, it should be noted that the sequence of steps as shown in FIG. 4 is not indicative of a specific temporal order of such steps, as most of the steps are best undertaken simultaneously to achieve the best results.
[0059] It should be noted that the above methods and devices can be used in any method requiring control or standardization of the mixing of the flow of blood from the IVC and SVC, and can be effective even in the blood circulation beyond the pulmonary arteries and the lungs, e.g., into the peripheral organs and body parts. Such a control and standardization can enable for example the improved performance of first pass measurements or perfusion, particularly for tumors or other vessels and tissues, or the distribution of drugs or dyes into the body, particularly where such drugs or dyes are administered intravenously.
[0060] When used with a contrast medium suited for ultrasound acquisition system, such as gas bubbles, the above methods and devices can also be applied to image acquisitions using an ultrasound scanner.
[0061] While there are shown and described presently preferred embodiments of the disclosure in accordance with the invention, it is to be understood that the invention is not limited thereto but may be otherwise variously embodied and practised within the scope of the following claims.
Claims
1. A method of acquiring in-vivo an image of interior parts of the human body (20) or an image based quantification of blood flow conditions, using an imaging system (21) and comprising the steps of positioning the body (20) relatively to the imaging system (21), applying a respiratory resistance device (10) to the respiratory system of the body (20), and performing an image acquisition step during an inhalation phase, during which the body (20) provides suction against a resistance as provided by the respiratory resistance device (10), and/or performing an image acquisition step during an exhalation phase, during which the body provides exhalation against a resistance as provided by the respiratory resistance device (10), characterised respectively by either a contrast fluid or dye being administered into a preestablished venous access in an upper or a lower extremity of the body (20) before and/or during the inhalation phase or by a contrast fluid or dye being administered into a preestablished venous access in a lower extremity of the body (20) before and/or during the exhalation phase.
2. The method of claim 1, wherein the image acquisition is performed while the inhalation reduces the pressure in the respiratory resistance device (10), in particular to a pressure in a range of -1 up to -80 mmHg and preferably up to -60 mmHg and preferably up to -40 mmHg, and preferably -1 to -20 mmHg.
3. The method according to any of the preceding claims, wherein the inhalation is maintained during at least 1 second and preferably over a period of between 1 and 60 seconds and preferably wherein the inhalation is maintained during at least 5 seconds and preferably over a period of between 5 and 45 seconds and preferably over a period of between 5 and 30 seconds.
4. The method according to any of the preceding claims, performed during the performance of an imaging method of the body (20), in particular computer tomographic (CT) scanning, ultrasound or magnetic resonance scanning (MRI), or during angiography, perfusion, first pass measurements of the pulmonary arteries (PA) or other blood containing vessels distal to the vena cava and/or the heart.
5. The method according to any of the preceding claims, wherein the image acquisition is performed while the exhalation increases the pressure in the respiratory resistance device (10), in particular to a pressure in a range of +1 mmHg to +80 mmHg and preferably to a pressure in the range +10 mmHg to +40 mmHg.
6. The method according to any of the preceding claims, wherein the exhalation is maintained during at least 1 second and preferably over a period of between 1 and 60 seconds and preferably wherein the exhalation is maintained during at least 5 seconds and preferably over a period of between 5 and 45 seconds and preferably over a period of between 5 and 30 seconds.
7. The method according to any of the preceding claims, wherein a parameter related to a pressure generated through inhalation or through exhalation is monitored during the image acquisition and the monitored parameter is used to generate a control signal indicative of deviation from an optimal inhalation or exhalation state, respectively.
8. The method according to any of the preceding claims, wherein the respiratory resistance device (10) comprises a main body (11) with one or more openings (121, 122) to connect in use with the respiratory system of the human body (20), and a closed inner volume or an inner volume with one or more constrictions (111) blocking partially the flow of air into or out of the respiratory system of the body (20) during an inhalation phase or exhalation phase, respectively.
9. The method according to claim 8, wherein the one or more constrictions (111) are sufficiently small to enable the generation of an under/over pressure in the inner volume of the device (10) under normal inhaling/exhaling conditions of the human respiratory system.
10. The method according to claim 8 or 9, wherein the respiratory resistance device (10) has a replaceable subpart (12) which comprises or consists of a mouth piece (12) and provides the one or more openings (121, 122).
11. The method according to any of claims 8 to 10, wherein the main body (11) of the respiratory resistance device (10) comprises a sensor (13) for measuring a parameter related to the pressure in the inner volume.
12. The method according to any of claims 8 to 11, wherein the respiratory resistance device (10) further comprises a control signal generator (14) for generating a control signal indicative of a deviation from a desired respiratory state or from a preset pressure value or range of pressure values.
13. The method according to claim 12, wherein the control signal generator (14) includes an indicator indicating whether inhalation/exhalation is too weak and/or too strong.
14. The method according to any of claims 8 to 13, wherein the respiratory resistance device (10) is used to influence via defined respiratory states the distribution and/or standardization of blood supply either from the upper, superior vena cava or lower, inferior vena cava according to the respective requirement to increase blood supply from the respective vessel to the right atrium of the heart and/or to enhance the concentration of a substance in the blood flow in the pulmonary arteries or in vessels beyond the pulmonary arteries.
13. Verfahren aus Anspruch 12, worin der Kontrollsignalerzeuger (14) eine Anzeige enthält, die anzeigt, ob die Ein-/Ausatmung zu schwach und/oder zu stark ist.
14. Verfahren nach einem der Ansprüche 8 bis 13, worin das Atmungs-Widerstands-Gerät (10) verwendet wird, um über definierte Atmungszustände die Verteilung und/oder Standardisierung des Blutflusses entweder von der oberen Hohlvene oder der unteren Hohlvene entsprechend der jeweiligen Anforderung, die Blutzufuhr des entsprechenden Gefäßes zum rechten Herzvorhof zu erhöhen, und/oder um die Konzentration einer Substanz im Blutfluß der Pulmonararterie oder in den Gefäßen außerhalb der Pulmonararterie, zu beeinflussen.
FIG. 1A, FIG. 1B, FIG. 1C, FIG. 2, FIG. 3
[Figure pages; only the labels are recoverable: breathing methods "Inspiration/expiration", "Valsalva", "Suction"; axes "Breathing method" and "Ratio IVC/SVC"; flowchart steps: place and align body within the image acquisition system; connect contrast fluid injection system; administer contrast fluid; apply respiratory restriction device to the body; let the body assume the desired respiratory state (in-/exhalation against restriction); acquire image.]
Mob vs. Rotational Grazing: Impact on Forage Use and Artemisia absinthium
Heidi Reed, Alexander Smart, David E. Clay, Michelle Ohrtman and Sharon A. Clay
*South Dakota State University*
**Recommended Citation**
Heidi Reed, Alexander Smart, David E. Clay, Michelle Ohrtman and Sharon A. Clay (2018). Mob vs. Rotational Grazing: Impact on Forage Use and Artemisia absinthium. In: Ricardo Loiola Edvan and Edson Mauro Santos (Eds.), Forage Groups. IntechOpen. DOI: 10.5772/intechopen.79085. Available from: https://www.intechopen.com/books/forage-groups/mob-vs-rotational-grazing-impact-on-forage-use-and-artemisia-absinthium
Mob vs. Rotational Grazing: Impact on Forage Use and *Artemisia absinthium*
Heidi Reed, Alexander Smart, David E. Clay, Michelle Ohrtman and Sharon A. Clay
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/intechopen.79085
Abstract
Short-duration (≤24 h), high stocking density grazing systems (e.g., mob grazing) mimic historic prairie grazing patterns of American bison (*Bison bison*) and should minimize selective grazing. We compared mob [125 cow-calf pairs on either 0.65 ha for 12 h or 1.3 ha for 24 h] vs. rotational [25 cow-calf pairs on 8.1 ha for 20 days starting in mid-May, with or without 2,4-D application prior to grazing; and 15 days starting mid-April (no herbicide)] grazing systems based on forage utilization and impact on *Artemisia absinthium* (absinth wormwood) in a tall grass pasture of eastern South Dakota. Grass height and density, and *Artemisia absinthium* patch volume, were quantified pre- and post-grazing at sampling points along multiple transects. Mob grazing achieved >75% forage utilization, whereas rotational grazing averaged 50% (all consumption). Within a grazing season, three grazing treatments suppressed *Artemisia absinthium* patches, with rotation/spray (100% decrease) > mob (65 ± 10% decrease) > mid-May rotation (41 ± 16% decrease), whereas *Artemisia absinthium* patches in the mid-April rotation followed by summer rest dramatically increased in size. *Artemisia absinthium* patches <19,000 cm$^3$ were browsed, whereas larger patches were trampled in mob-grazed areas but avoided in rotational grazing. All *Artemisia absinthium* patches had regrowth the year following any grazing event.
Keywords: cattle, forage utilization, mob grazing, rotational grazing, weed management
1. Introduction
Grazing lands are managed to optimize forage and animal productivity, and minimize adverse impacts to soil and the surrounding environment. The annual economic impact of
all weedy species in U.S. grazing lands is greater than that of all other pests combined [1], and has been estimated at $1 billion for forage loss and $5 billion for control costs [2]. Weed infestations cause a variety of problems in grazing lands. Weeds can reduce forage quality and quantity; displace native plants and animals; reduce animal fertility or weight gains; cause toxicity, sometimes resulting in fatalities; reduce meat and/or hide quality; increase management costs; and reduce land values [3, 4]. Tactics for weed management in pastures and grazing lands vary with the type of weed, the livestock species, and the applicability of other methods (e.g., mowing, biocontrol, herbicide treatment) [5, 6].
Livestock can help manage weeds by grazing or trampling and can improve pasture condition and the competitiveness of desirable plants by increasing soil nutrients through manure and urine deposition [3]. Weed species and stage of growth, livestock species, and stocking rate and duration all influence grazing effectiveness on weeds [3, 7]. Unfortunately, cattle (*Bos taurus*), the grazing livestock of choice in the Northern Great Plains (NGP), selectively consume forage in dung-free areas [8, 9] and avoid weeds for a wide variety of reasons [10]. Cattle herds are not managed specifically for weed control for several reasons. First, cattle are expensive to raise and replace and, even with premium prices, the economic margin is narrow [7]. Second, weeds may not be as palatable as grasses, and lower consumption may reduce weight gains [7]; if weeds are high in alkaloids, problems with reproduction and/or toxicity can occur [11].
Rotational grazing often uses a ‘take half, leave half’ forage philosophy to maintain healthy, vigorous plant communities [12, 13]. Mob grazing has been promoted to mimic the world’s historic grassland ecosystems [14], with herds of large animals intensively grazing areas and moving often. The definition of mob grazing is subjective, but it typically involves extremely high stocking rates (100 head or more per ha) for short periods of time (moving every 12 or 24 h) [15] followed by recovery periods of 6–12 months. The goal of mob grazing is to have every plant within the enclosure eaten [16] or trampled [17], limiting selectivity or avoidance of specific species [9] and providing a more homogeneous grazing treatment. Barnes et al. [16] reported that grazing homogeneity correlated with paddock size: pastures ranging from 1 to 8 ha were grazed nearly uniformly, even if the same stocking rate per ha was used on larger areas.
Grazing impact for weed management is maximized when the target weed is most palatable, is the only forage option, or is made more palatable to livestock in some way (e.g., salt or sugar treatment) [7], and the desired vegetation is at its least vulnerable phenotypic stage [1]. High animal densities maximize trampling, which incorporates plant litter, manure, and urine into soil, increasing organic carbon and soil nutrients [17]. The combination of eating, trampling, and long rest periods is expected to increase productivity of more desirable forage [3, 18].
Mob grazing has been adopted by ranchers in Texas, SE Colorado, central Nebraska, Missouri, and other areas [19] where vegetative regrowth can occur quickly due to warm conditions, and high rainfall or irrigation capabilities. Under dryland conditions of the NGP, timing mob grazing to fit within the vegetative and environmental constraints of the area is difficult as growing seasons are short, and pastures often experience summer drought. McCartney and Bittman [20] reported on a mob grazing study that used 7–14 heifers ha\(^{-1}\) (dependent on seasonal timing) on about 0.3 ha paddocks at different intensities (light, grazed twice a year; to intense, grazed five times a year) in northeastern Saskatchewan. They observed positive
[a decline in smooth brome (*Bromus inermis*)], negative [a decline of intermediate wheatgrass (*Elytrigia repens*) and an increase in bluegrass (*Poa* sp.) species], or no effects on specific species [e.g., green needlegrass (*Nassella viridula*)] over 4 years. These findings suggest that intensive grazing benefits are related to plant species, stocking density, and grazing timing, all of which can be manipulated for maximum impact [21]. Ranchers interested in using mob grazing for increased productivity and harvest efficiency would benefit from on-farm research that examines the relationship among stocking densities and timing on vegetative utilization and the impact on locally invasive weed species.
There have been few comparisons in the NGP between mob grazing and other, more conventional, grazing systems. Fundamental problems in grazing research often include small enclosure sizes and animal numbers, which provide data that are difficult to scale to commercial operations [16]. Because of the expense, the need for many animals, and the labor and time involved in moving cattle frequently, this study was managed by an eastern South Dakota rancher who incorporates both rotational and mob grazing techniques into his cattle operation. *Artemisia absinthium* was selected as a model invasive plant because it is a non-native perennial forb that cattle typically avoid, owing to the woody stems of older plants and to unpalatability caused by the production of secondary compounds and essential oils [22]. The objectives of this study were to (1) quantify forage present before grazing (pre-graze) to estimate forage utilization post-graze in mob and rotational grazing systems, (2) determine the in-season impact of grazing system on *Artemisia absinthium* suppression under mob and rotational grazing, and (3) examine the recovery of *Artemisia absinthium* patches a year after mob grazing.
### 2. Grazing impacts to forage utilization and *Artemisia absinthium*
#### 2.1. Experimental site
The effects of rotational and mob grazing stocking densities on *Artemisia absinthium* and surrounding forage utilization were compared at an eastern South Dakota rangeland location in tall grass prairie habitat near Hayti (44.66°N, 99.22°W) in 2013 and 2014 [23]. The dominant soil series of the rotationally grazed pasture were Poinsett-Waubay silty clay loams (Calcic Hapludolls/Pachic Hapludolls); Buse-Poinsett complex (Typic Calciudolls/Calcic Hapludolls); and Poinsett-Buse-Waubay complex (Calcic Hapludolls/Typic Calciudolls/Pachic Hapludolls) [https://soilseries.sc.egov.usda.gov/osdname.aspx]. Mob grazing pasture soils were similar to those of the rotational pasture, with the addition of the Barnes-Buse loam complex (Calcic Hapludolls/Typic Calciudolls). The plant communities in these pastures were a mix of cool-season native and invasive grasses, warm-season grasses, and broadleaf species (Table 1).
#### 2.2. Weather
Growing degree days (GDD) were calculated to provide a reference for plant development between sampling dates and years. The GDD calculation, \[\mathrm{GDD} = \sum \left[ \frac{T_{\max} + T_{\min}}{2} - T_{\text{base}} \right],\] where \(T_{\max}\) and \(T_{\min}\) are the maximum and minimum daily temperatures, used a base temperature of 0°C because the species present were predominantly cool-season; GDD accumulation started on January 1 of each year.
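The GDD accumulation above is straightforward to sketch in code. This is a minimal illustration, not taken from the chapter; truncating days whose mean temperature falls below the base to a zero contribution is a common convention that is assumed here:

```python
def accumulate_gdd(daily_temps, t_base=0.0):
    """Cumulative growing degree days.

    daily_temps: list of (t_max, t_min) tuples in deg C, starting January 1.
    Days whose mean temperature is below t_base contribute 0 (assumed convention).
    """
    total = 0.0
    for t_max, t_min in daily_temps:
        total += max(((t_max + t_min) / 2.0) - t_base, 0.0)
    return total

# Example: three days of (max, min) temperatures -> 6 + 10 + 0 = 16.0 GDD
print(accumulate_gdd([(10.0, 2.0), (15.0, 5.0), (-1.0, -7.0)]))
```

With a 0°C base, warm days contribute their daily mean temperature directly, which is why season totals reach the thousands reported below.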
| Common name (mob-grazed sites) | Scientific name | Common name (rotational sites) | Scientific name |
|---|---|---|---|
| Big bluestem | *Andropogon gerardii* | Western wheatgrass | *Pascopyrum smithii* |
| Sweet clover | *Melilotus officinalis* | Absinth wormwood | *Artemisia absinthium* |
| Alfalfa | *Medicago sativa* | Smooth brome | *Bromus inermis* |
| Red clover | *Trifolium pratense* | Kentucky bluegrass | *Poa pratensis* |
| Kentucky bluegrass | *Poa pratensis* | | |
| Dandelion | *Taraxacum officinale* | | |
| Absinth wormwood | *Artemisia absinthium* | | |
| Western wheatgrass | *Pascopyrum smithii* | | |
| Smooth brome | *Bromus inermis* | | |
**Table 1.** Plant species in the mob-grazed and rotational grazed sites at Hayti, SD in 2013 and 2014.
Precipitation (accumulated from January 1) was also determined. The rotational pre-graze samples in 2013 were taken on June 13, with 641 GDD and 243 mm of precipitation (www.noaa.gov), values similar to the 30-year (1980–2010) average. Post-grazing samples were taken July 22, with 1540 GDD and 343 mm total precipitation. In 2014, the spring assessment was taken May 13 (after the early spring grazing), with 262 GDD and 65.5 mm of precipitation. The fall assessment was taken September 16, with 2603 GDD and 370 mm total rainfall. Rotational grazing was done much earlier in 2014 because the rancher was concerned about the low precipitation (nearly 60% below average) during the 2013 fall and winter.
GDD accumulations for mob-grazed areas in 2013 were 1801 (August 6) and 1855 (August 9) for pre- and post-graze samples, respectively. Precipitation totaled 376 mm before and after mob grazing. In 2014, GDDs were 1693 pre-graze (July 29) and 1817 post-graze (August 4) and precipitation for pre-graze and post-graze totaled 230 and 270 mm, respectively.
### 2.3. Grazing treatments
Stocking treatments (rotation vs. mob) were repeated, although cattle densities and time of grazing differed between the 2 years due to feeding needs and differences in forage growth due to low rainfall in 2014 (**Table 2**). Rotational grazing was conducted in 8 ha pastures with 25 cow-calf pairs (1560 kg ha\(^{-1}\)). In 2013, in one paddock, the cow-calf pairs were allowed to graze for 14 days starting June 13 (referred to as ‘rotation’). In a separate paddock, generic 2,4-D ester at 1.1 kg ha\(^{-1}\) [24] was applied 1 day before the start of grazing on June 13 with a grazing duration of 14 days (referred to as ‘spray/rotation’). In 2014, a different pasture was grazed by 25 cow/calf pairs for 15 days, starting April 27 and ending May 11 (referred to as ‘early spring grazed/summer rest’).
In 2013, mob grazing was conducted for 12 h in a 0.65-ha paddock on August 8, using 125 cow-calf pairs (stocking rate of 223,250 kg ha\(^{-1}\) day\(^{-1}\)) (**Figure 1**). In 2014, a different 1.3-ha area was mob grazed on July 30 for 24 h with 125 cow-calf pairs (stocking rate of 53,580 kg ha\(^{-1}\) day\(^{-1}\)).
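These stocking rates can be reconstructed from herd size, paddock area, and grazing duration. A minimal sketch, assuming a combined cow-calf pair mass of about 580 kg; the chapter does not report per-pair mass, so that figure is an illustrative assumption chosen to roughly reproduce the 2013 value:

```python
def stocking_rate(pairs, pair_mass_kg, area_ha, duration_days):
    """Animal live mass per hectare per day of grazing (kg/ha/day)."""
    return pairs * pair_mass_kg / area_ha / duration_days

# 2013 mob event: 125 pairs on 0.65 ha for 12 h (0.5 day), ~580 kg per pair (assumed)
print(round(stocking_rate(125, 580, 0.65, 0.5)))  # ~223,000 kg/ha/day, near the reported 223,250
```

The same function applied to the 2014 event (1.3 ha, 24 h) gives roughly the order of magnitude of the reported 53,580 kg ha⁻¹ day⁻¹, the difference reflecting the unknown true pair mass.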
| Year | Grazing system | Stocking density (kg ha\(^{-1}\); per day for mob) | Grazing duration | Pre-graze sampling date | Pre-graze forage biomass (kg ha\(^{-1}\)) | Post-graze sampling date | Post-graze biomass (kg ha\(^{-1}\)) | Forage use efficiency (%) | Forage utilization (%) |
|------|-------------------------|------------------|-----------------|-------------------------|----------------|--------------------------|---------|----------------------|--------------------|
| 2013 | Mob | 223,250 | 12 h | 6-Aug | 2910\(^a\) | 9-Aug | 570\(^b\) | 62 | 80 |
| 2013 | Rotation | 1560 | 20 days | 13-Jun | 2600\(^a\) | 22-Jul | 1190\(^b\) | 45 | 45 |
| 2013 | Rotation/spray | 1560 | 20 days | 13-Jun | 4530\(^a\) | 22-Jul | 2528\(^b\) | 57 | 57 |
| 2014 | Mob | 53,580 | 24 h | 29-Jul | 4640\(^a\) | 4-Aug | 1170\(^b\) | 34 | 75 |
| 2014 | Rotation/summer ungrazed | 1560 | 15 days | | ~1700\(^a\) | 13-May | 870 | | |
\(^1\)Vegetation biomass was estimated using the cool-season mixed pasture grazing stick method: (vegetation height in cm − 7.6 cm) × 79, to estimate kilograms per ha.
\(^2\)Trampled vegetation was any plant with a stem oriented at less than a 45° angle from the soil surface.
\(^3\)Forage efficiency (consumption only) was calculated as: [(pre-graze vegetation − (standing + trampled))/(pre-graze vegetation)] × 100.
\(^4\)Forage utilization (consumption + trampling) was calculated as: [(pre-graze vegetation − standing vegetation after grazing)/(pre-graze vegetation)] × 100.
\(^5\)Values with different letters within the same row for the pre graze vegetative biomass compared with post-graze standing or trampled (mob) or total vegetative biomass (rotational) differed at P < 0.0001.
\(^6\)Samples were not taken pre-graze in this treatment but estimated from the leave half/take half grazing system.
\(^7\)Forage in autumn following the grazing treatment in the spring.
Table 2. Mob and rotational grazing stocking density, grazing duration, sampling dates, forage biomass pre- and post-graze, and forage efficiency and utilization by year.
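The efficiency and utilization definitions in the table footnotes can be expressed directly. A minimal sketch (the function and variable names are mine), applied to the 2013 mob event with the reported ~20% trampling:

```python
def forage_metrics(pre, standing_post, trampled):
    """Return (harvest efficiency %, utilization %) from kg/ha biomass values.

    Efficiency counts only consumption; utilization adds trampling losses.
    """
    efficiency = (pre - (standing_post + trampled)) / pre * 100
    utilization = (pre - standing_post) / pre * 100
    return efficiency, utilization

# 2013 mob event: 2910 kg/ha pre-graze, 570 kg/ha left standing,
# with ~20% of pre-graze biomass (582 kg/ha) trampled (assumed from the text)
eff, util = forage_metrics(2910, 570, 582)
print(round(eff), round(util))  # ~60% efficiency, 80% utilization (chapter reports 62 and 80)
```

The small gap between the computed 60% and the reported 62% efficiency reflects rounding in the "about 20% trampled" figure used as input here.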
Figure 1. A mob grazing herd waiting for the next pasture.
2.4. Forage amounts and utilization
Eight 50-m long transects were established in each paddock for vegetative production evaluation. Sampling points were placed every 5 m along each transect, with GPS coordinates (Garmin etrex 20, Garmin, Ltd., Schaffhausen, Switzerland) recorded so that resampling occurred at the same points pre-grazing and post-grazing. At the sampling points, pre-graze measurements (in 2013: rotational graze and spray/rotational graze, 13 June; mob graze, 6 August; in 2014: mob graze, 29 July) included vegetation height using a grazing stick [25], and ocular estimates of basal cover of living vegetation, litter cover, and bare ground (0–100%) in a 1 m$^2$ area around the point. In 2013, vegetation in a 0.25 m$^2$ area was clipped to within 1 cm of the soil surface and bagged (n = 30). Litter under the vegetation was also collected. Samples were weighed, dried at 38°C to constant weight, and the dry weights of vegetative biomass and litter per unit area were calculated. The biomass values and grazing stick estimates were compared at each sampled point.
A few days after grazing (in 2013, rotational graze and spray rotational/graze—22 July; mob graze—9 August; in 2014, mob graze—4 August), the same transects and sampling points were reestablished for post-grazing measurements. Vegetation height was measured using the grazing stick, and percent trampled vegetation (e.g., new litter; defined as living vegetation oriented less than 45° from the soil surface) was estimated in the same areas as pre-graze sampling.
In 2014 due to the producer’s needs, cattle grazed the designated rotational pasture in April and then this pasture was untouched for the remainder of the season (summer rest). Unfortunately, due to the early timing of the grazing in the second year, no pre-grazing measurements were taken for this pasture. Measurements occurred on 13 May, after the early season grazing was completed, and then resampled on 16 September (designated as regrowth after early spring grazed/summer rest). In addition, the transects which were sampled in 2013 were reestablished and vegetative height was quantified in May 2014 to examine recovery after grazing.
2.5. *Artemisia absinthium* measurements
Another three 50-m transects were established in each pasture, with vegetative height measured pre- and post-graze every 2.5 m along the transects. *Artemisia absinthium* patches (individual plants if small, or a patch if large) were selected and tagged near the base of the plant/patch every 5 m along these transect lines in each treatment (2013: rotation, spray/rotation, and mob graze; 2014: rotation/summer rest and mob graze). Pre- and post-grazing grass height and *Artemisia absinthium* patch volume (height and two perpendicular widths) were measured at the same time as the forage measurements in 2013. In late May of 2014, *Artemisia absinthium* patches measured in the 2013 experimental pastures were inspected for recovery and shoot regrowth. In the rotation/summer rest treatment, *Artemisia absinthium* measurements were taken in May 2014, just after grazing, and again in September (the summer rest measurement).
2.6. Statistical analysis
Data analyses were performed using JMP®, Version 5.0.1, (SAS Institute Inc.). Forage amounts pre-graze were based on clipped biomass measurements and compared with the grazing stick method. The grazing stick equation, based on plant height, was:
\[
\text{Estimated biomass (kg ha}^{-1}\text{)} = [\text{plant height (cm)} - 7.6 \text{ cm}] \times 79
\]
(1)
This equation estimates biomass for a cool-season mixed grass pasture [26, 27]. The 7.6 cm offset accounts for basal stems and leaves that grazing animals would not eat. Two-tailed, two-sample homoscedastic t-tests were used to compare clipped forage biomass with the grazing stick estimates; the two methods were found to be statistically similar.
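Equation (1) is easy to apply directly. A minimal sketch; clamping heights at or below the 7.6 cm stubble allowance to zero biomass is my addition, not stated in the chapter:

```python
def grazing_stick_biomass(height_cm):
    """Estimated forage biomass (kg/ha) for a cool-season mixed pasture,
    per Eq. (1): (height - 7.6 cm residual stubble) x 79.

    Heights at or below the 7.6 cm stubble allowance return 0 (assumption).
    """
    return max(height_cm - 7.6, 0.0) * 79

print(round(grazing_stick_biomass(42.0)))  # (42 - 7.6) x 79 = 2718 kg/ha
```

A measured height of about 42 cm reproduces the ~2720 kg ha⁻¹ grazing-stick estimate reported for the 2013 mob pasture below.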
Forage biomass and *Artemisia absinthium* volume were compared pre- and post-grazing and forage utilization (consumption plus trampling) was determined by examining new litter and remaining biomass at each transect point. These data were analyzed using one-tailed (post-graze < pre-graze) matched pair t-tests. Due to timing and treatment differences among rotational treatments, data were analyzed by treatment and year. Treatment differences are reported at a significance level of \( P \leq 0.10 \).
Binomial analysis of *Artemisia absinthium* patch volume data (yes = less volume post grazing; no = same or greater volume) using the equation:
\[
[p \pm t_{(0.1)} \sqrt{p (1 - p)/n}]
\]
(2)
was used to examine the influence of each treatment on *Artemisia absinthium* patches [28]. In the mob grazing treatments, *Artemisia absinthium* data were combined across years. To better understand the relationship between weed patch size and grazing system impact, *Artemisia absinthium* patches were separated into two volume classes (<19,000 cm\(^3\) and >19,000 cm\(^3\)). In Myer [23], four volume classes were originally designated, but these were combined into two because results were similar within the smaller and larger size classes.
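Equation (2) is the standard approximate confidence interval for a binomial proportion. A sketch, substituting the normal critical value for \(t_{(0.1)}\) (a close approximation at moderate n); the counts below are illustrative, not the study's:

```python
import math
from statistics import NormalDist

def shrink_proportion_ci(successes, n, conf=0.90):
    """Two-sided CI for the proportion of patches that shrank after grazing:
    p +/- z * sqrt(p(1-p)/n). Uses the normal critical value in place of the
    t value in Eq. (2), which is close for the sample sizes involved here.
    """
    p = successes / n
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Illustrative: 13 of 20 tagged patches smaller after grazing
lo, hi = shrink_proportion_ci(13, 20)
print(f"{lo:.2f} to {hi:.2f}")
```

If the interval excludes 0.5, the treatment shrank patches more often than chance, which is how such intervals support the treatment comparisons reported below.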
3. Measured impacts of grazing systems
3.1. Forage utilization
3.1.1. Mob grazing
Pre-graze forage coverage averaged 85% (grass and forb) in 2013 and neared 100% in 2014. In 2013, pre-graze forage biomass was estimated at 2910 (±816) kg ha\(^{-1}\) by clipping and 2720 kg ha\(^{-1}\) by the grazing stick; the two estimates were statistically similar. In 2014, pre-graze biomass averaged 4640 kg ha\(^{-1}\) by clipping, while the grazing stick method estimated 3980 kg ha\(^{-1}\); these estimates were also statistically similar. The discrepancy between direct biomass sampling and the grazing stick can be partially explained by sampling method: forage was cut to within 1 cm of the soil surface, whereas the grazing stick calculation subtracts 7.6 cm from forage height to account for unconsumed stubble. Although the clipping method provided excellent data, the process was labor intensive and slow, requiring preweighing, drying, and postweighing. In addition, after mob grazing there was no biomass left to clip. The grazing stick method provided a reasonable estimate of available forage.
In 2013, mob grazing forage utilization was about 80% (Table 2; Figure 2), with a harvest efficiency (percent of pre-graze biomass consumed) of 62% (~1800 kg ha\(^{-1}\)). The remaining 20% of the vegetation was trampled. In 2014, the same stocking rate (125 cow-calf pairs) was used, but the area was twice as large, pre-graze biomass was about 1.5 times greater, and grazing time was doubled from 12 to 24 h. Forage utilization in 2014 was 75%, similar to 2013. The amount consumed was 1600 kg ha\(^{-1}\), similar to 2013, but due to the greater starting biomass, the harvest efficiency was 34% and the trampled amount was 40%.
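As a worked check of these definitions (utilization = consumed + trampled, each expressed as a fraction of pre-graze biomass), the 2014 figures can be recomputed. This is an illustrative sketch, not part of the study's analysis:

```python
def grazing_summary(pre_kg_ha: float, consumed_kg_ha: float, utilization: float):
    """Harvest efficiency = consumed / pre-graze biomass;
    trampled fraction = utilization - harvest efficiency."""
    efficiency = consumed_kg_ha / pre_kg_ha
    trampled = utilization - efficiency
    return efficiency, trampled

# 2014 mob grazing: 4640 kg/ha pre-graze, ~1600 kg/ha consumed, 75% utilization.
eff, tramp = grazing_summary(4640, 1600, 0.75)
print(f"harvest efficiency {eff:.0%}, trampled {tramp:.0%}")
# efficiency comes out at 34%; trampled at ~41%, close to the reported 40%
```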
#### 3.1.2. Rotational grazing
In 2013, pre-graze forage amount averaged 2600 kg ha\(^{-1}\) and post-graze was 1190 kg ha\(^{-1}\) (Table 2). Both harvest efficiency (amount consumed) and utilization (amount consumed + trampled) were 45%, as new trampled litter was not observed. In the rotational/spray treatment, pre- and post-graze forage was 4530 and 2610 kg ha\(^{-1}\), respectively, which indicated that forage consumption neared 57%. As in the rotational area, there was little newly trampled litter.

Figure 2. Pre-graze forage and post-graze results: the impact of mob grazing.
The 2014 rotational pasture was grazed in April, which allowed recovery during the summer and fall of 2014. Forage after grazing was 870 kg ha\(^{-1}\). The rancher follows the ‘take half, leave half’ utilization recommendation [12, 13], so a reasonable pre-graze forage estimate would have been about 1300 kg ha\(^{-1}\). Grass forage increased in height from 11 cm (May) to 23 cm (September) (\(P < 0.001\)), with fall forage biomass estimated at 2090 kg ha\(^{-1}\).
### 3.2. Grazing impact on *Artemisia absinthium*
Before grazing, the relationship between *Artemisia absinthium* patch volume and dry biomass was established. In mid-June of 2013, 30 *Artemisia absinthium* patches were measured for volume, then clipped and their dry biomass determined. Regression of biomass (expressed as log(biomass + 1)) on plant volume (expressed as log(volume + 1)) for these 30 patches yielded the equation \(\log (\text{biomass}+1) = 1.35 \log (\text{volume} + 1) - 5.89\) [23], implying that biomass increases directly with patch volume. This regression fit the data very well (\(r^2 = 0.90; P < 0.001\)) and was intended to express differences in *Artemisia absinthium* biomass pre- and post-grazing. However, trampling in the mob-grazed areas dramatically increased *Artemisia absinthium* plant volume as the shoots spread apart (Figure 3); because the samplings were within a few days of each other, biomass could not have increased as the equation would suggest. Therefore, data are presented and discussed in terms of plant volume, rather than biomass.
#### 3.2.1. Mob grazing
Matched-pair analysis of 2013 and 2014 combined indicated that about 65% of the *Artemisia absinthium* patches had less volume after mob grazing (Table 3). In 2013 the decrease averaged 75%, whereas in 2014 the decrease was about 20%. In 2014, grass surrounding the *Artemisia absinthium* patches had 60% of the forage consumed. Therefore, it appeared that cattle were grazing close to, if not directly on, the *Artemisia absinthium* plants. The remaining patches increased in volume by 120% in 2013 and 154% in 2014. This volume increase at first does not seem correct, as pre- and post-grazing samples were taken within days of each other each year. However, the volume increase was due to an increase in patch width (**Figure 3**), and was attributed to trampling.

**Table 3.** Effect of grazing system on *Artemisia absinthium* average patch volume.

| Year | Grazing system | Pre-graze ave. vol. (cm³) | # Decrease/total | Ave. vol. of remaining (cm³) | % Control<sup>a</sup> |
|------------|-------------------------|---------------------------|------------------|------------------------------|-----------------------|
| 2013/2014 | Mob | 66,500 | 39/60 | 25,650 | 65 (10) |
| | | <19,000 | 28/38 | | 73 (9) |
| | | >19,000 | 11/22 | | 50 (NS) |
| 2013 | Rotation | 12,380 | 12/29 | 8830 | 41 (16) |
| | Spray/rotation | 16,500 | 26/27 | 0 | 100 (1) |
| 2014 | Summer recovery | 2850 | 1/28 | | 3 (NS) |

Shown are the average pre-graze volume, the number of patches (out of the initial number) that decreased in volume post-graze, and the average volume of the patches remaining. Patches in the mob-grazed pastures were separated into those with an initial volume < or > 19,000 cm³, and the number that decreased in volume is presented for each class. <sup>a</sup>Numbers in parentheses are confidence intervals based on binomial testing of the number of patches that showed a decrease over the total number with t = 0.1.
#### 3.2.2. Rotational grazing
In 2013, 41% of the *Artemisia absinthium* patches in the rotational paddocks had a 30% decrease in volume, and the remaining patches had similar volume pre- and post-grazing. Post-graze forage height of plants near the *Artemisia absinthium* patches averaged 15 cm (33%) shorter than pre-grazing measurements ($P < 0.001$), which indicates that *Artemisia absinthium* may have been consumed. In the 2013 spray/rotation pasture, nearly 100% of the *Artemisia absinthium* patches decreased in volume by 100% after grazing (**Table 3**). Grass surrounding the *Artemisia absinthium* patches was 51% shorter ($P < 0.001$) post-grazing, which strongly suggests that plants in the sprayed patches were consumed with forage.
In 2014, with no grazing pressure during the summer season, only 1 (3%) of the *Artemisia absinthium* patches decreased in volume. The remainder had a volume increase of 5000% from May (average volume = 2850 cm³) to September (average volume = 151,200 cm³). In addition, the average height increased from 15 (May) to 86 cm (September). Because there was no trampling and an increase in shoot height, this increase can be attributed to plant growth.
#### 3.2.3. Influence of initial *Artemisia absinthium* patch volume on grazing system impact
Initial *Artemisia absinthium* patch volume in the rotation and rotation/spray areas did not influence final volume. All *Artemisia absinthium* size categories in the rotationally grazed areas had about 50% of the patches increase and 50% decrease in volume. All *Artemisia absinthium* patches in the rotation/spray treatment were reduced to near 0, irrespective of initial plant volume. Initial *Artemisia absinthium* patch volume in mob-grazed areas influenced final *Artemisia absinthium* volume.
When data were combined over both years, 28 of 38 *Artemisia absinthium* patches <19,000 cm$^3$ decreased in volume with reductions ranging from 43 to 84% (data not shown). The other 10 patches in this category increased in volume by about 50%. There were 22 *Artemisia absinthium* patches >19,000 cm$^3$. Of these, 11 patches had a slight decrease in volume. The volume increase in the other 11 patches averaged 150%. Based on the height of the surrounding forage and *Artemisia absinthium* plant condition, it appears that *Artemisia absinthium* patches <19,000 cm$^3$ were consumed with forage, whereas plants in the larger patches were trampled and not browsed.
*Artemisia absinthium* plant height also was used to evaluate treatment effects. The average height of 30 *Artemisia absinthium* plants was similar before (average 39 cm) and after (average 37 cm) rotational grazing in 2013, which may be considered avoidance. The initial height [tall (>33 cm) vs. short (<33 cm)] did not influence rotational grazing impact. Before grazing, *Artemisia absinthium* plant height in the mob-grazed treatment averaged 58 cm. After mob grazing, 75% (±9) of the *Artemisia absinthium* plants were 37% shorter, with no plants increasing in height. Even plants that were very tall (>97 cm) were reduced in height by about 50%. These data were consistent with either trampling or consumption. We concluded that animals in the rotation pasture had enough area and forage to selectively avoid *Artemisia absinthium* plants. Spraying 2,4-D followed by rotational grazing (spray/rotation treatment), however, resulted in a height reduction in 96% of the *Artemisia absinthium* plants (from 54 to 9 cm).
In 2014, all tagged patches in the 2013 pastures were reevaluated to determine if patches and plants in the patches were still present and the amount of regrowth. Plants in the treated patches of the rotation/spray treatment, which provided excellent control of *Artemisia absinthium* in 2013, had less volume than those originally measured in 2013, but *Artemisia absinthium* plants were still present at the same location as the original patches (data not shown). Rotational grazing, with a 2,4-D application just prior to grazing, helped manage *Artemisia absinthium* plants in the same growing season as the herbicide application as they were no longer visible just after grazing. However, this treatment did not eliminate this perennial weed, as plants regrew the year after this treatment. Plants in the mob-grazed and rotational grazed areas were also present and had no observed injury.
## 4. Discussion
Rotational grazing for 20 days with 25 cow/calf pairs in 8 ha produced forage consumption comparable to mob grazing with 125 cow/calf pairs for 12 or 24 h in 0.65 or 1.3 ha, respectively. There were other differences between the systems, most notably the vegetative growth stage of forage, which was more mature during mob grazing. Trampled vegetation was observed in the mob grazing areas but not in the rotational grazing treatments. However, claims about building soil at rates of cm per year, or significantly increasing N and C content (which was measured and reported in Myer [23]), as often discussed in popular press articles [15, 19], could not be substantiated in this study. Nevertheless, trampled litter and manure patches (measured as manure patches along the transects and reported in Myer [23]) were greater post-mob grazing compared with both pre-mob and post-rotational grazing.
McCartney and Bittman [20] and others [29–31] suggest that timing and grazing capacity for optimal forage utilization and weed control, with minimal harm to desired species, require thoughtful management to improve or maintain rangeland health. Our results show that mob grazing (225,000 or 50,000 kg of cattle ha\(^{-1}\) day\(^{-1}\)) could reduce biomass of *Artemisia absinthium*, a less palatable species, in a pasture. In mob-grazed treatments, *Artemisia absinthium* plants appeared to be consumed if they were small and, most likely, still had herbaceous rather than woody stems. Mob grazing offers the additional benefit of trampling, which reduced *Artemisia absinthium* height, although not necessarily the volume, especially of larger plants. Effectiveness of mob grazing depends on the plant species present, stocking density, and timing [14, 16, 20]. Grazing weeds should be avoided after seed set to minimize seed dispersal, as some weed seeds remain viable or even increase in germination after ingestion and passage through the digestive tract of livestock [32, 33]. We did not find literature that specifically addresses changes in *Artemisia absinthium* seed viability after animal ingestion; however, *Artemisia absinthium* seeds mature in late August or September [34], after the grazing events of our study, so this question was not investigated. If grazing an infested pasture must be delayed until a species is past its most palatable stage, or if a weed has inherently low palatability, higher stocking rates, as seen in this study and other studies [7], improved suppression.
Mob grazing with cattle has been proposed as a grazing system to increase forage use efficiency and help in landscape restoration [14] and is likened to the grazing patterns of the native plains bison. Kohl et al. [35] reported that bison and cattle differ in grazing, standing, bedding, and moving behaviors, with bison moving 50 to 99% faster and foraging over up to double the land area of cattle during the same duration. This is the precedent for the frequent moves when mob grazing cattle. In addition, cattle, when not pressured, tend to select high plant biomass, whereas bison tend to select intermediate plant biomass [35]. Regardless of the inherent differences between these two species, when managed correctly, mob grazing with cattle can diversify grazing time, with frequent moves and long rest periods [30]. However, if managed incorrectly, high intensity grazing systems can increase weed infestations [31]. For example, over 3 years under medium grazing intensity (grazed five times year\(^{-1}\) with 6 cm of vegetation remaining after each grazing event), weeds increased by about 4 plants m\(^{-2}\), whereas under high intensity (grazed seven times year\(^{-1}\) until soil surface exposure), weed densities increased by 51 plants m\(^{-2}\) [36]. Hart et al. [37] reported that stocking rates that alter grazing frequency and defoliation intensity, rather than grazing system, have greater potential to impact species composition. Plant diversity and complex mixtures of forage species are integral to healthy ecosystems and consistent yields [38, 39]. However, mob grazing, if repeatedly used in the same area and at the same seasonal timing, could decrease plant species diversity and richness and change functional plant traits (e.g., tall vs. short), while improving productivity of the remaining plants [40].
The animal of choice for grazing also can influence grazing results. Goats (*Capra aegagrus hircus*) and sheep (*Ovis aries*) [7, 41] are often suggested to control brush and other undesirable vegetation, as they are more efficient at foraging and have a faster growth rate than cattle. However, there are numerous disadvantages to using goats and sheep, which include: poor return on investment due to low per capita consumption of their meat products in the US and low wool prices; limited genetic improvement in milk or meat production; high predation rates compared with cattle; difficulty in fencing confinement; and susceptibility to internal
parasites, which discourages multiple species grazing [41–43]. Cattle are, by far, the grazing animals of choice in South Dakota (1.8 million cattle vs. 260,000 sheep) [44] and across the Northern Great Plains of the US.
Herbicide applications are reported to be the most effective methods for *Artemisia absinthium* control [22, 45–47]. There are numerous reports of the enhanced effectiveness of combining weed control strategies for weed suppression in grazing lands [1, 7, 47–49]. In this study, using 2,4-D ester herbicide in combination with grazing helped remove *Artemisia absinthium* growth for the first growing season. Some herbicides affect the palatability of certain plants, encouraging livestock to eat plants they would normally avoid, including poisonous plants [50]. However, precautions must be taken when spraying 2,4-D [24] because this herbicide can cause plants to accumulate excess nitrate, become more palatable, and result in nitrate poisoning of livestock [51]. There are a few grazing restrictions for 2,4-D ester [24]. For example, meat animals may be grazed immediately after application but not within 7 days of slaughter, whereas dairy animals may not graze within 7 days post-application.
## 5. Conclusions
Healthy rangelands grow more grass, which aids in *Artemisia absinthium* control by preventing infestations and providing competition to newly establishing plants. Grass density can be optimized by managing livestock to minimize overgrazing through rotational grazing or avoiding heavy, early season grazing [22]. Based on the *Artemisia absinthium* size increase in the 2014 recovery area after the early spring rotational grazing and summer rest, it appears that rotational grazing later in the growing season (as in 2013) achieved better suppression of *Artemisia absinthium* patches, although cattle did not necessarily consume *Artemisia absinthium*.
Once present, our study showed that grazing provided temporary reductions to *Artemisia absinthium* patches, with greater reductions in the mob-grazed and rotational/spray treatments than the rotational grazed treatment. Shoots of smaller plants and those in smaller patches appeared to be consumed in both mob grazing and rotational grazing when 2,4-D ester was applied. However, even the most decimated plants had shoots the following season. Once pastures are infested, long-term management plans are needed to keep *Artemisia absinthium* in check.
We found that mob grazing with cattle for 12 or 24 h in pastures where *Artemisia absinthium* was present did indeed improve control of smaller plants (as measured by plant volume) with concomitant high forage utilization. Rotational grazing at lower stocking rates for 20 days (late May through mid-June), when combined with 2,4-D application, also suppressed *Artemisia absinthium* for that growing season. Early (mid-April) rotational grazing with a summer rest resulted in much larger *Artemisia absinthium* plants and patches in the fall. We could not verify the statements that mob grazing would result in (1) an increase of two or more cm of soil per year or (2) a species composition change due to the intense grazing, two positive benefits of mob grazing often discussed in trade journal articles [15, 19, 52]. In addition, we did not assess the impact of mob grazing on animal performance, although in a single one-time grazing situation, a change in this parameter would not be
expected. Long term management plans are needed for *Artemisia absinthium*, as all *Artemisia absinthium* patches observed after the first grazing season produced shoots the year following grazing, regardless of the amount of grazing or trampling damage that was sustained.
**Acknowledgements**
Thanks to Mr. R. Smith, Hayti, SD for providing access to land and cattle. Funding support for this study was provided by the South Dakota Agricultural Experiment Station and USDA, NRCS Conservation Innovation Grant (CIG) 3FH560 “Demonstrating Mob Grazing Impacts in the Northern Great Plains on Grazing Land Efficiency, Botanical Composition, Soil Quality, and Ranch Economics.”
**Nomenclature**
Cattle *Bos taurus* L.
Absinth wormwood *Artemisia absinthium* L.
**Author details**
Heidi Reed\(^1*\), Alexander Smart\(^2\), David E. Clay\(^1\), Michelle Ohrtman\(^1\) and Sharon A. Clay\(^1\)
*Address all correspondence to: email@example.com
1 South Dakota State University Department of Agronomy, Horticulture and Plant Science, Brookings, South Dakota, USA
2 South Dakota State University Department of Natural Resource Management, Brookings, South Dakota, USA
**References**
[1] DiTomaso JM. Invasive weeds in rangelands: Species, impacts, and management. Weed Science. 2000;48:255-265
[2] Pimentel D, Zuniga R, Morrison D. Update on the environmental and economic costs associated with alien-invasive species in the United States. Ecological Economics. 2005;52:273-288
[3] Popay I, Field R. Grazing animals as weed control agents. Weed Technology. 1996;10:217-231
[4] Plant Protection Act. 7 U.S.C. 7701 et seq. 114 stat. 438 [Internet]. 2000. Available from: http://www.aphis.usda.gov/plant_health/plant_pest_info/weeds/downloads/PPAText.pdf [Accessed: April 13, 2018]
[5] Ohlenbusch PD, Towne G. Rangeland Weed Management. KState Bulletin, MF-1020 [Internet]. 1991. Available from: www.bookstore.ksre.ksu.edu/pubs/mf1020.pdf [Accessed: April 13, 2018]
[6] Sellers BA, Ferrell JA. Weed Management in Pastures and Rangeland—2017. University of Florida IFAS Extension [Internet]. 2017. Available from: SS-AGR-08.edis.ifas.ufl.edu/wg006 [Accessed: June 30, 2017]
[7] Frost RA, Launchbaugh KL. Prescription grazing for rangeland weed management: A new look at an old tool. Rangelands. 2003;25:43-47
[8] Holechek JL. Comparative contribution of grasses, forbs, and shrubs to the nutrition of range ungulates. Rangelands. 1984;6:261-263
[9] Bailey DW, Brown JR. Rotational grazing systems and livestock grazing behavior in shrub-dominated semi-arid and arid rangelands. Rangeland Ecology Management. 2011;64:1-9
[10] Senft RL, Rittenhouse LR, Woodmansee DRG. Factors influencing patterns of cattle grazing behavior on shortgrass steppe. Journal of Range Management. 1985;38:82-87
[11] Fishel F. Plants Poisonous to Livestock. Columbia, MO: University of Missouri Extension; 2001. p. G-4970
[12] Crider FJ. Root-growth stoppage resulting from defoliation of grass. U.S. Department of Agriculture Technical Bulletin 1102; 1955. 23p
[13] Frost WE, Smith EL, Ogden PR. Utilization guidelines. Rangelands. 1994;6:256-259
[14] Savory A, Butterfield J. A commonsense revolution to restore our environment. In: Holistic Management. 3rd ed. Washington, DC: Island Press; 2016. 552pp. ISBN: 9781610917438
[15] Gordon K. Mob Grazing 101. Hereford World [Internet]. 2011. Available at: www.Herford.org; hereford.org/static/files/0111_MobGrazing.pdf [Accessed: December 15, 2017]
[16] Barnes MK, Norton BE, Maeno M, Malechek JC. Paddock size and stocking density affect spatial heterogeneity of grazing. Rangeland Ecology & Management. 2008;61:380-388
[17] Pleasants AB, Shorten PR, Wake GC. The distribution of urine deposited on a pasture from grazing animals. Journal of Agricultural Science. 2006;145:81-86
[18] Frank DA, McNaughton SJ. Evidence for the promotion of aboveground grassland production by native large herbivores in Yellowstone National Park. Oecologia. 1993;96:157-161
[19] Thomas HS. Ranchers Sing the Praises of Mob Grazing of Cattle. Beef Magazine [Internet] 2012. February 28, 2012. Available from: http://www.beefmagazine.com/pasture-range/ranchers-sing-praises-mob-grazing-cattle [Accessed: March 03, 2018]
[20] McCartney DH, Bittman S. Persistence of cool-season grasses under grazing using the mob-grazing technique. Canadian Journal of Plant Science. 1994;74:723-728
[21] Senft RL. Hierarchical foraging models: Effects of stocking and landscape composition on simulated resource use by cattle. Ecological Modeling. 1989;46:283-303
[22] Maw MG, Thomas AG, Stahevitch A. The biology of Canadian weeds. 66. *Artemisia absinthium* L. Canadian Journal of Plant Science. 1985;65:389-400
[23] Myer H. Mob grazing as a perennial weed management tool in South Dakota grazing-lands [MS thesis]. Brookings, SD: SDSU; 2015, p. 141
[24] Anonymous. 2, 4-D Ester 4. Available from: http://www.cdms.net/LDat/ld_40L006.pdf [Accessed: November 14, 2014]
[25] Harmoney KR, Moore KJ, George JR, Brummer EC, Russell JR. Determination of pasture biomass using four indirect methods. Agronomy Journal. 1997;89:665-672
[26] Barnhart SK. Estimating Available Pasture Forage [Internet]. Iowa State Univ. Extension. 2009. PM 1758. https://store.extension.iastate.edu/Product/Estimating-Available-Pasture-Forage-PDF
[27] Grazing Stick Instruction Manual. The Samuel Roberts Nobel Foundation [Internet]. 2007. Available at: http://www.noble.org/ag/forage/grazingstick/index.html [Accessed: April, 2018]
[28] Steel RGD, Torrie JH. Principles and Procedures of Statistics. 2nd ed. New York: McGraw-Hill, Inc; 1980. 633p. ISBN: 0-07-060926-8
[29] Anderson VJ, Briske DD. Herbivore-induced species replacement in grasslands: Is it driven by herbivory tolerance or avoidance? Ecological Applications. 1995;5:1014-1024
[30] Fuhlendorf SD, Engle DM. Restoring heterogeneity on rangelands: Ecosystem management based on evolutionary grazing patterns. BioScience. 2001;51:625-632
[31] Manske LL. General Description Of Grass Growth And Development And Defoliation Resistance Mechanisms. Range Management Report DREC 98-1022. Dickinson, North Dakota: NDSU Dickinson Research Extension Center; 1998. 12p
[32] Nishida T, Shimizu N, Ishida M, Onoue T, Harashima N. Effect of cattle digestion and of composting heat on weed seeds. Japan Agricultural Research Quarterly. 1998;32:55-60
[33] Rahimi S, Mashhadi HR, Banadaky MD, Mesgaran MB. Variation in Weed Seed Fate Fed to Different Holstein Cattle Groups. PLoS ONE. 2016;11(4):e0154057. https://doi.org/10.1371/journal.pone.0154057
[34] Stevens OA. Flowering dates of weeds in North Dakota. North Dakota Agricultural Experiment Station Bimonthly. Bulletin. 1956;18:209-213
[35] Kohl MT, Krausman PR, Kunkel K, Williams DM. Bison versus cattle: Are they ecologically synonymous? Rangeland Ecology Management. 2013;66:721-731
[36] Harker KN, Baron VS, Chanasyk DS, Naeth MA, Stevenson FC. Grazing intensity effects on weed populations in annual and perennial pasture systems. Weed Science. 2000;48:231-238
[37] Hart RH, Clapp S, Test PS. Grazing strategies, stocking rates, and frequency and intensity of grazing on western wheatgrass and blue grama. Journal of Range Management. 1993;46:122-126
[38] Isbell F, Calcagno V, Hector A, Connolly J, Harpole WS, Reich PB, Scherer-Lorenzen M, Schmid B, Tilman D, van Ruijven J, Weigelt A, Wilsey BJ, Zavaleta ES, Loreau M. High plant diversity is needed to maintain ecosystem services. Nature. 2011;477:199-202
[39] Deak A, Hall MH, Sanderson MA. Grazing schedule effect on forage production and nutritive value of diverse forage mixtures. Agronomy Journal. 2009;101:408-414
[40] Laliberte E, Lambers H, Norton DA, Tylianakis JM, Huston MA. A long-term experimental test of the dynamic equilibrium model of species diversity. Oecologia. 2013;171:439-448
[41] Dabaan ME, Magadlela AM, Bryan WB, Arbogast BL, Prigge EC, Flores G, Skousen JG. Pasture development during brush clearing with sheep and goats. Journal of Range Management. 1997;50:217-221
[42] Sahlu T, Dawson LJ, Gipson TA, Hart SP, Merkel RC, Puchala R, Wang Z, Zeng S, Goetsch AL. ASAS Centennial Paper: Impact of animal science research on United States goat production and predictions for the future. Journal of Animal Science. 2009;87:400-418
[43] Lupton CJ. ASAS Centennial Paper: Impact of animal science research on United States sheep production and predictions for the future. Journal of Animal Science. 2008;86:3252-3274
[44] 2017 State Agriculture Overview, South Dakota [Internet]. Available from: https://www.nass.usda.gov/Quick_Stats/Ag_Overview/stateOverview [Accessed: April 12, 2018]
[45] Carey JH. *Artemisia absinthium*. In: Fire Effects Information System [Internet]. USDA, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory. 1994. Available from: www.invasive.org/weedcd/pdfs/feis/Artemisiaabsinthium.pdf [Accessed: November 22, 2017]
[46] Eckardt N. Element Stewardship Abstract for *Artemisia absinthium*. Arlington, Virginia: The Nature Conservancy. 1987. Available from: www.invasive.org/weedcd/pdfs/tnc-weeds/arteabs.pdf [Accessed: November 14, 2017]
[47] Lym R, Messersmith C, Dexter A. Absinth Wormwood Control [Internet]. North Dakota State University Extension. 2013. W-838
[48] Lacey JR, Sheley RL. Leafy spurge and grass response to picloram and intensive grazing. Journal of Range Management. 1996;49:311-314
[49] Lym RG, Sedivec KK, Kirby DR. Leafy spurge control with angora goats and herbicides. Journal of Range Management. 1997;50:123-128
[50] Moechnig M, Deneke DL, Wrage LJ, Rosenberg M. Weed Control: Pasture and Range [Internet]. South Dakota State University Extension Service. 2013
[51] Holechek JL, Pieper RD, Herbel CH. Range Management: Principles and Practices. 6th ed. Upper Saddle River, NJ: Pearson/Prentice Hall; 2004. 607p. ISBN-13: 978-0130474759
[52] Hancock D. [Internet] Is Mob Grazing as Effective as we Thought? On Pasture. Available at: https://onpasture.com/2017/11/06/is-mob-grazing-as-effective-as-we-thought/2017 [Accessed: April 12, 2018] |
Novel variants broaden the phenotypic spectrum of PLEKHG5-associated neuropathies
Zhongbo Chen\textsuperscript{1,2*}, Reza Maroofian\textsuperscript{2*}, A. Nazlı Başak\textsuperscript{3}, Leena Shingavi\textsuperscript{4}, Mert Karakaya\textsuperscript{5}, Stephanie Efthymiou\textsuperscript{2}, Emil K. Gustavsson\textsuperscript{1}, Leyla Meier\textsuperscript{5}, Kiran Polavarapu\textsuperscript{4,6}, Seena Vengalil\textsuperscript{4}, Veeramani Preethish-Kumar\textsuperscript{4}, Bevinahalli N Nandeesh\textsuperscript{7}, Nalan Gökçe Güneş\textsuperscript{8}, Onur Akan\textsuperscript{9}, Fatma Candan\textsuperscript{10}, Bertold Schrank\textsuperscript{11}, Stephan Zuchner\textsuperscript{12}, David Murphy\textsuperscript{2}, Mahima Kapoor\textsuperscript{2}, Mina Ryten\textsuperscript{1}, Brunhilde Wirth\textsuperscript{5}, Mary M. Reilly\textsuperscript{2}, Atchayaram Nalini\textsuperscript{4}, Henry Houlden\textsuperscript{2*}, Payam Sarraf\textsuperscript{13*}
1. Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, UCL, London, UK
2. Department of Neuromuscular Disease, UCL Queen Square Institute of Neurology, UCL, London, UK
This article has been accepted for publication and undergone full peer review but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record. Please cite this article as doi: 10.1111/ENE.14649
3. Koç University, School of Medicine, Neurodegeneration Research Laboratory, KUTTAM-NDAL, Istanbul, Turkey
4. Department of Neurology, National Institute of Mental Health and Neurosciences (NIMHANS), Bengaluru, India
5. Institute of Human Genetics, Center for Molecular Medicine and Center for Rare Diseases, University Hospital Cologne, University of Cologne, Cologne, Germany
6. Children's Hospital of Eastern Ontario Research Institute; Division of Neurology, Department of Medicine, The Ottawa Hospital; Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
7. Department of Neuropathology, National Institute of Mental Health and Neurosciences (NIMHANS) Bengaluru, India
8. University of Health Sciences, Ankara Training and Research Hospital, Neurology Dept. Ankara, Turkey
9. Okmeydani Training and Research Hospital, Neurology Department, Istanbul, Turkey
10. Medeniyet University, Göztepe Training and Research Hospital, Neurology Department, Istanbul, Turkey
11. DKD Helios Kliniken, Department of Neurology, Wiesbaden, Germany
12. Department of Human Genetics and Hussman Institute for Human Genomics, University of Miami Miller School of Medicine, Miami, FL, USA
13. Department of Neuromuscular Diseases, Iranian Centre of Neurological Research, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
Correspondence to:
Professor Henry Houlden: email@example.com
Dr Payam Sarraf: firstname.lastname@example.org
*These authors contributed equally
Short running title: PLEKHG5-associated neuropathy
Keywords: spinal muscular atrophy, Charcot-Marie-Tooth disease, genotype-phenotype association, peripheral nerve disease, hereditary sensory and motor neuropathy, hereditary motor neuropathy
TOTAL WORD COUNT = 3496
Abstract
Background: Pathogenic variants in *PLEKHG5* have been reported, to date, to be causative in three unrelated families with autosomal recessive intermediate Charcot-Marie-Tooth disease (CMT) and in one consanguineous family with spinal muscular atrophy (SMA). *PLEKHG5* is known to be expressed in the human peripheral nervous system, and previous studies have shown its function in axon terminal autophagy of synaptic vesicles, lending support to its proposed pathogenic mechanism. Despite this, knowledge of the clinical and genetic spectrum of disease remains limited.
Methods: We leverage the diagnostic utility of exome and genome sequencing and describe novel biallelic variants in *PLEKHG5* in thirteen individuals from nine unrelated families originating from four different countries. We compare our phenotypic and genotypic findings with a comprehensive review of cases previously described in the literature.
Results: We found that patients presented with variable disease severity at different ages of onset (8 to 25 years). In our cases, weakness usually started proximally and progressed distally, and could be associated with intermediate slow conduction velocities and minor clinical sensory involvement. We report three novel nonsense and four novel missense pathogenic variants associated with *PLEKHG5*-related neuropathies, which are phenotypically SMA or intermediate CMT.
Conclusions: *PLEKHG5*-associated neuropathies should therefore be considered an important differential in non-5q SMAs, even in the presence of mild sensory impairment, and *PLEKHG5* a candidate causative gene for a wide range of hereditary neuropathies. We present this series of cases to further the understanding of the phenotypic and molecular spectrum of *PLEKHG5*-associated diseases.
ABSTRACT WORD COUNT = 247
Graphical/brief abstract
We describe novel biallelic variants in *PLEKHG5* in thirteen individuals from nine unrelated families, extending the phenotypic and molecular spectrum of *PLEKHG5*-associated disease. *PLEKHG5* pathogenic variants should be considered in motor-predominant hereditary neuropathies, which usually start proximally and may be associated with minor sensory involvement.
Background
Biallelic pathogenic variants in pleckstrin homology and RhoGEF-domain-containing G5 (*PLEKHG5*) have been linked to one family with distal spinal muscular atrophy type 4 (DSMA4) (MIM 611067)\(^1\) and to intermediate Charcot-Marie-Tooth disease (CMT) (MIM 615376) in three unrelated families\(^2,3\). Immunohistochemical analysis of a sural nerve biopsy from a CMT patient revealed low PLEKHG5 levels\(^3\). In spinal muscular atrophy (SMA), a homozygous mutation in the pleckstrin homology (PH) domain affected the nuclear factor-κB (NF-κB) transduction pathway\(^1\). Impaired synaptic vesicle autophagy is seen in PLEKHG5-depleted cultured motor neurons, which show defective axon growth\(^4\).
In this study, we report seven distinct variants in thirteen individuals from nine unrelated families presenting between 8 and 25 years of age. Most patients presented with proximal weakness, in some cases associated with intermediately slowed conduction velocities and minor clinical sensory involvement. These cases extend the spectrum of PLEKHG5-associated diseases and, when compared to those previously reported in the literature (Figure 1A, Table 1), further characterise the clinical heterogeneity of PLEKHG5-associated neuropathies.
Methods
All participants provided informed written consent for participation. We provide detailed clinical descriptions of patients presenting to tertiary neurology units in Iran, Turkey, Germany and India. All patients underwent examination by a neurologist specialising in peripheral nerve disease and neurophysiological assessment by a local specialist. Where indicated, findings from brain and limb MRI as well as neuropathologist-reported muscle biopsy are presented. For genetic analyses, DNA was extracted from peripheral blood with consent. Some individuals underwent *SMN1* and *SMN2* testing through multiplex ligation-dependent probe amplification (MLPA) or quantitative molecular analysis using real-time PCR under consensus criteria\(^5\). Other individuals had targeted neuropathy panel sequencing (*SMN1*, *LMNA*, *MFN2*, *MPZ*, *GJB1*, *PMP22*, *SH3TC2*, *GDAP1*, *NEFL*, *DNAJB2*, *HINT1*). Exome sequencing and interpretation were carried out as described previously\(^6-8\). Candidate variants were confirmed by Sanger sequencing. *In silico* predictions of pathogenicity were carried out using Sorting Intolerant from Tolerant (SIFT)\(^9\), Prediction of Functional Effect of Human nsSNPs (PolyPhen)\(^10\), Combined Annotation Dependent Depletion (CADD) scores\(^11\) (Table 2), and conservation information from five species. Allele frequencies were interrogated through the Genome Aggregation Database (gnomAD) v2.1\(^12\); Iranome, a catalogue of variants from whole exome sequencing of 800 Iranian individuals\(^13\); the Varbank Platform (1,657 sets of
exomes, Cologne Centre for Genomics); RD-Connect GPAP Platform (3,549 individual exomes) and the UCL Queen Square Institute of Neurology (QS IoN) inhouse database comprising exome sequencing data of 15,000 individuals.
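The variant-triage logic described in the Methods (retain only variants that are rare across population databases and predicted deleterious *in silico*) can be sketched as follows. This is an illustrative sketch only, not the authors' actual pipeline; the variant records, field names and thresholds are hypothetical examples.

```python
# Illustrative triage of candidate variants by population allele frequency
# and CADD score. Thresholds and records are hypothetical examples.
MAX_MAF = 1e-4        # rare across gnomAD / Iranome / in-house databases
MIN_CADD_PHRED = 20   # top 1% of predicted-deleterious substitutions

candidates = [
    {"variant": "chr1:g.6534000del", "gnomad_af": 4.4e-6, "cadd": 35.0},
    {"variant": "chr1:g.6535000A>C", "gnomad_af": 2.0e-3, "cadd": 22.0},
]

def passes_triage(v):
    # Keep a variant only if it is both rare and predicted deleterious.
    return v["gnomad_af"] <= MAX_MAF and v["cadd"] >= MIN_CADD_PHRED

kept = [v["variant"] for v in candidates if passes_triage(v)]
print(kept)  # only the rare, high-CADD variant survives
```

In practice, surviving candidates would then be confirmed by Sanger sequencing and tested for segregation with disease, as described above.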
Results
Family 1: Case 1
Patient 1 is a 34-year-old, right-handed Persian man, born to healthy consanguineous parents (first cousins) who had three other healthy children (Figure 1B). He presented with difficulty lifting both arms with cramps aged 19 years, followed by difficulty climbing stairs. Six years after initial symptom onset, both feet dragged on walking. At 31 years, he experienced numbness and paraesthesia affecting his hands and more severe numbness in his feet. He had an otherwise normal antecedent history. There was no family history of note. Examination revealed a symmetrical proximal limb weakness with milder distal weakness. There was left forearm and arm wasting (Figure 2A) with bilateral scapular winging (Figure 2B) and absent tendon reflexes. There was loss of light-touch, pinprick and vibration sensation to the medial malleoli and preserved arm sensation. There were no spine, hand or foot deformities. Bedside cognitive and cranial nerve examinations were normal. He had a waddling and high-stepping gait. Nerve conduction studies (NCS) aged 34 years (Supplementary Table 1) showed a predominant motor neuropathy with slowed motor nerve conduction velocities (MNCV) and normal compound muscle action potentials (CMAP) in the arms and legs. Sensory nerve conduction velocity (SNCV) was moderately slowed but sensory nerve action potentials (SNAP) were preserved. Electromyography (EMG) showed a chronic neurogenic process with some ongoing active denervation in proximal and distal, arm and leg muscles. Serum creatine kinase (CK) was 367 U/L (normal < 195). Leg muscle MRI showed proximal and distal involvement with moderate fatty infiltration of the medial head of gastrocnemius and vastus lateralis, and more mildly in the gluteus muscles (Figure 2C). Brain imaging showed diffuse increased signal in the deep white matter (Figure 2D). 
Taken together, the clinical and neurophysiology results are consistent with a proximal and distal neuropathy with intermediately slowed conduction velocities and minor sensory involvement clinically.
Quantitative PCR found that the patient carried a heterozygous *SMN1* deletion but further targeted sequencing showed no pathogenic variant on the other allele. Exome sequencing revealed a homozygous frameshift variant in *PLEKHG5* [c.79_83del: NM_198681.3 (p.Pro27Ter)]. Segregation analysis by Sanger sequencing showed that the variant co-segregates with disease (Figure 1B). This variant is present in one
of 227,304 alleles in gnomAD (minor allele frequency (MAF) = 4.4E-6)\textsuperscript{12} and has not been described in Iranome\textsuperscript{13} or QS IoN database (Table 2).
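The reported frequency follows directly from the allele counts: one alternate allele observed out of 227,304 total alleles in gnomAD. A quick check (our own illustration, gnomAD-style counts) reproduces the stated MAF:

```python
# Minor allele frequency = alternate allele count / total allele number.
allele_count = 1
allele_number = 227_304
maf = allele_count / allele_number
print(f"MAF = {maf:.1e}")  # MAF = 4.4e-06, matching the reported 4.4E-6
```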
**Family 2: Case 2**
Presenting to the same unit was a 19-year-old, right-handed Persian female who complained of lower limb weakness since age 8 years. She was born to healthy consanguineous parents who were first cousins and had an asymptomatic 12-year-old sister (Figure 1C). There was no family history of note. She initially noticed difficulty climbing stairs. Two years after onset, weakness progressed to both arms. She then experienced a weakened grip and bilateral foot drop. This culminated in her being wheelchair-bound over the last decade. Cranial nerve examination was normal. Bilateral scapular winging was present. Limb weakness affected proximal more than distal muscle groups: there were no antigravity movements in the shoulders or elbows and only a trace of movement at the hips and knees. She had mild weakness in the hands (left more than right) (Figure 2E) and mild ankle weakness. She had prominent lumbar hyperlordosis (Figure 2F, G) without obvious wasting or contractures. Tendon reflexes were absent. There was no evidence of any sensory deficit. Motor and sensory NCS were normal at 8 years (Supplementary Table 1). EMG showed giant motor units with reduced recruitment and no evidence of active denervation, predominantly at proximal sites, consistent with SMA. A repeat EMG aged 19 years showed evidence of a more proximal than distal chronic neurogenic process. NCS showed preserved CMAPs, albeit reduced compared with those observed ten years previously. CK was normal. Muscle MRI showed severe muscle atrophy and fatty replacement, more in the thighs (Figure 2H) than the legs (Figure 2I), reflecting more proximal involvement. MRI brain was normal.
MLPA in *SMN1* showed no abnormalities. Exome sequencing revealed the same homozygous frameshift variant as Case 1 in *PLEKHG5* [c.79_83del: NM_198681.3 (p.Pro27Ter)]. Segregation analysis by Sanger sequencing showed her healthy parents are heterozygous carriers.
**Family 3: Case 3**
The third patient is a 44-year-old Turkish woman born of consanguineous parents. She is the youngest of seven siblings, the others asymptomatic. Her 77-year-old father is well. Her mother died aged 68 years of stroke. She presented aged 14 years with proximal lower limb weakness which gradually progressed to involve the distal arms and legs. Examination revealed a picture of pure lower motor neuron involvement in the
proximal and distal lower and distal upper limbs (Supplementary Video 1). There was no upper limb wasting (Figure 2J, K). Bedside cognitive examination was normal. NCS showed slow MNCVs and reduced CMAPs with no sensory involvement (Supplementary Table 1). EMG showed chronic neurogenic changes and muscle biopsy confirmed neurogenic changes. Analysis of SMN1 revealed no abnormalities. Exome sequencing revealed the same homozygous variant in PLEKHG5 [c.79_83del: NM_198681.3 (p.Pro27Ter)] as Cases 1 and 2. Her father, one brother and one sister were found to be carriers by Sanger sequencing (Figure 1D).
**Family 4: Cases 4, 5 and 6**
Presenting to the same unit in Turkey were three affected siblings born of consanguineous parents (first cousins) (Figure 1E). The proband (Case 4) was the eldest brother and presented aged 13 years with stiffness and difficulty climbing stairs. This progressed over several years until he could not walk unaided and had frequent falls. Both younger sisters (Cases 5 and 6) presented with proximal weakness aged 20 and 25 years respectively with a less severe phenotype than their brother. Their parents and two younger brothers and sister were asymptomatic. Case 5 suffered from hearing loss, cataracts, macular degeneration and reduced sensation in the distal limbs. EMG aged 47 showed prolonged, high amplitude polyphasic motor unit action potentials (MUAPs) in proximal muscle groups more than distally. She had slowed SNCV in the right median, ulnar and sural nerves with reduced SNAP in the right sural nerve. Right median and tibial MNCVs were slowed. Case 6 had initially presented with difficulty climbing stairs with cramps and frequent falls. She gradually developed proximal arm weakness. She complained of occasional burning in her hands and feet. Sensory examination revealed reduced light-touch sensation in the extremities. NCS aged 43 years showed moderate slowing in MNCV in the upper and lower limbs with low peroneal nerve CMAP. EMG revealed chronic, high amplitude, polyphasic MUAPs in bilateral upper and lower limbs.
Exome sequencing of Cases 4 and 6 from Family 4 revealed a homozygous *PLEKHG5* [c.1648C>T: NM_198681.3 (p.Gln550Ter)] variant in the Rho Guanine Exchange Factor (RhoGEF) domain. Further Sanger sequencing confirmed the same homozygous variant in Case 5. Their unaffected mother and sister were both carriers (Figure 1E). This variant is not reported in gnomAD, Iranome or the QS IoN database.
**Family 5: Cases 7 and 8**
Case 7 developed limb weakness aged 13 years. He noticed difficulty running initially and difficulty making a fist, especially in cold weather. He developed progressive difficulty walking. His asymptomatic parents were second cousins (Figure 1F). Examination showed proximal more than distal weakness in the arms and legs. There was no evidence of cranial nerve or sensory involvement. EMG revealed widespread chronic neurogenic process with normal NCS. A clinical diagnosis of SMA was suspected but *SMN1* analysis was normal. Since the publication of his case\(^8\), an older affected sister (Case 8) also developed proximal limb weakness of later onset. Her EMG showed chronic neurogenic process without evidence of active denervation and decreased CMAPs.
Exome sequencing of Case 7 from Family 5 (Figure 1F) showed a homozygous variant in *PLEKHG5* [c.2120C>A: NM_198681.3 (p.Pro707His)]. Subsequent Sanger sequencing confirmed that his affected sister (Case 8) was also homozygous for the same variant. The affected proline residue is highly conserved and resides in the functionally-important PH domain (Figure 1A). The variant is not seen in gnomAD, Iranome or QS IoN databases (Table 2).
**Family 6: Cases 9 and 10**
Two members of a non-consanguineous Turkish family (Figure 1G) developed proximal arm and leg weakness in adolescence. Since 13 years of age, Case 9 had difficulties getting up from crouching. She developed intermittent paraesthesia in both hands and shoulder girdles. Examination aged 30 years showed a positive left Trendelenburg’s sign and Gowers’ manoeuvre. She had lower limb-predominant proximal weakness, lumbar hyperlordosis and was areflexic. EMG showed neurogenic changes. Muscle biopsy confirmed primary neurogenic muscle atrophy. CK was mildly elevated at 241 U/L. Her brother (Case 10) had difficulty elevating his arms with walking difficulties at 15 years of age. He developed shoulder girdle and left sternocleidomastoid muscle wasting. Unlike his sister, lower limb examination showed no weakness. Deep tendon reflexes were diminished. Sensory examination was normal. The MNCVs of the peroneal and tibial nerves were 38 m/s and 40 m/s respectively, and the EMG showed a predominant neurogenic pattern. His CK was 384 U/L.
For Family 6, pathogenic variants in the neuropathy gene panel (Methods) were excluded. Exome sequencing of Case 10 revealed a truncating homozygous deletion in *PLEKHG5* [c.289delC: NM_198681.3 (p.Arg97GlyfsTer38)]. Sanger sequencing verified the variant in both affected siblings and both asymptomatic parents were heterozygous carriers (Figure 1G). However, the homozygous single nucleotide deletion leading to frameshift and a premature stop 38 codons downstream was also identified in
their 23-year-old brother (F.6.2.5). According to his family, he had shown no signs of a neuromuscular disorder to date. The variant was not found in gnomAD, Iranome, the QS IoN database, or the Varbank and RD-Connect GPAP databases.
**Family 7: Case 11**
A 23-year-old male (Case 11) was the seventh child of consanguineous Syrian parents (Figure 1H). At age 10 years, he complained of shoulder weakness. He was unable to lift his arms and had difficulty climbing stairs with mild dysphagia. At 16 years of age, he developed distal arm involvement. His six older brothers and parents are healthy. Mild pes valgus deformity was present (Figure 2L). Arm and shoulder muscles were atrophic in comparison to his legs (Figure 2M, N). There was symmetric weakness in shoulder abduction and weakness in his forearms, wrists and hands. Mild hip abduction, adduction and plantarflexion weakness were present. Trendelenburg’s sign was present. Deep tendon reflexes in the arms were absent and lower limb reflexes were diminished. Sensation was unaffected. CK was 1,100 U/L. MNCV and SNCV of right ulnar and median nerves were normal. EMG showed chronic neurogenic changes. Shoulder MRI showed muscle atrophy and fatty infiltration especially of the subscapularis (Figure 2O-Q). Thigh MRI showed symmetric gluteus maximus and tensor fasciae latae atrophy with prominent septal fatty tissue (Figure 2R-T).
We performed parent-child trio whole genome sequencing. Allele-sharing statistics confirmed parental consanguinity. This revealed a homozygous missense variant in *PLEKHG5* [c.1669A>C: NM_198681.3 (p.Met557Leu)]. Sanger sequencing confirmed the homozygous variant in the patient and heterozygous carrier status in the parents (Figure 1H). The variant was not found in gnomAD, Iranome or any of our inhouse databases as described (Table 2). The methionine at residue 557 is highly conserved and located in the functionally-important RhoGEF domain (Figure 1A).
**Family 8: Case 12**
Patient 12 was born of consanguineous Indian parents and had a normal developmental history. He had progressive lower limb proximal weakness and distal muscle cramps from 8 years of age. His paternal grandfather is reported to have dragged his feet, as did his paternal uncle (Figure 1I). Examination aged 17 years revealed hammer toes, pes cavus and ankle contractures. He had a normal cranial nerve examination, arm and thigh fasciculations, grade 4 weakness at the shoulder and hip girdles, mild distal weakness, atrophy of distal leg muscles and a high-stepping, waddling gait. NCS showed normal motor
and sensory findings. Aged 42 years, distal lower limb weakness had worsened to grade 3, with moderate lower thigh atrophy and severe atrophy of leg muscles, although he continued to be ambulant (Supplementary Video 2). NCS showed moderate sensorimotor neuropathy (Supplementary Table 1). Left quadriceps muscle biopsy was characteristic of neurogenic atrophy (Figure 2U-X). Exome sequencing showed a biallelic missense variant in *PLEKHG5* [c.2057C>T: NM_198681.3 (p.Thr686Met)] with a MAF of 1.42E-5 (4 heterozygous alleles out of 282,168) in gnomAD (v2.1.1) but absent in Iranome and QS IoN databases. The reference codon lies in the PH domain and is conserved across species.
**Family 9: Case 13**
Patient 13 was born to consanguineous Indian parents who were first cousins and had a normal birth and development. At the age of 16 years, he noticed difficulty running and experienced frequent falls. He developed progressive lower limb proximal weakness followed by distal weakness, then upper limb proximal weakness. His elder sister had similar complaints but with minimal distal leg weakness (Figure 1J). Examination showed right arm muscle atrophy, reduced pin-prick sensation over his hands and up to the distal one-third of his legs, impaired joint-position sense at the hallux and absent deep tendon reflexes. He had a high-stepping gait. Serum CK was 1143 U/L. NCS revealed uniformly decreased MNCV and SNCV suggestive of an intermediate sensorimotor neuropathy (Supplementary Table 1). Exome sequencing revealed a biallelic missense variant in *PLEKHG5* [c.1364T>G: NM_198681.3 (p.Val455Gly)]. The variant is not reported in gnomAD, Iranome or inhouse databases. The reference codon for the variant lies in the functional RhoGEF domain.
**Discussion**
Hereditary motor neuropathies (HMN) encompass a subgroup of inherited peripheral neuropathies characterised by a generally more distally-pronounced, length-dependent motor neuropathy without significant sensory involvement\(^{14}\). CMT (hereditary motor and sensory neuropathy) refers to a group of heterogeneous disorders characterised by chronic motor and sensory polyneuropathy\(^{15}\). SMA comprises a group of disorders presenting with muscle weakness and atrophy from progressive degeneration of the spinal motor neuron\(^{16}\). Although many genes are shared between distal HMN and CMT type 2\(^{15,17}\), *PLEKHG5* is the only known gene associated with both proximal SMA and CMT with intermediate MNCV\(^{15,18}\). The pathomechanism of such diseases within the motor unit is not fully understood and may be related to spinal motor neuron degeneration, a more widespread peripheral motor and sensory neuropathy, or both. Previous studies have shown aggregate formation in murine motor neurons
overexpressing mutant PLEKHG5 protein although the neurotoxic mechanism of these aggregates remains unknown\textsuperscript{1}. Sural nerve biopsies in three published cases range from normal\textsuperscript{2} in DSMA4 to severe loss of myelinated fibres in the case of intermediate CMT\textsuperscript{3}, reflecting the underlying pathogenesis.
We present a full summary of the thirteen cases in comparison with previously reported cases of *PLEKHG5*-associated neuropathy. The majority of our cases present with either non-5q SMA or with a proximal and distal motor neuropathy (usually starting proximally), with mild sensory involvement in three families (1, 4 and 6). Two families (8 and 9) had an intermediate CMT phenotype with reduced SNAPs and more significant clinical and neurophysiological sensory involvement. The described motor neuropathy with mild sensory changes has not been previously recognised in *PLEKHG5*-associated neuropathies and may constitute a continuum between SMA and intermediate CMT, thus broadening its phenotypic spectrum. This is exemplified in Case 1, where the proximal onset of weakness and cramps and the sparing of distal SNAPs with intermediate SNCV slowing are compatible with a proximal and distal motor neuropathy (previous cases of intermediate CMT had absent or reduced SNAPs). Likewise, Cases 5 and 6 present with mild sensory involvement and a predominant motor presentation, consistent with a similar phenotype of a proximal and distal neuropathy, unlike previously reported *PLEKHG5*-associated CMT where sensory changes are more profound clinically and neurophysiologically\textsuperscript{2,3}. Lastly, Case 9 complains of intermittent sensory symptoms only, with little neurophysiological support for sensory involvement. The phenotype of the probands in Families 2, 3, 5 and 7 overlaps with the more purely motor involvement described by Maystadt and colleagues\textsuperscript{1}. We also observe diffuse white matter changes on brain imaging in Case 1. Given the lack of cognitive impairment, it is unclear whether this represents an incidental finding. Of note, *PLEKHG5* is expressed in both the central and peripheral nervous systems\textsuperscript{19}.
We report seven novel *PLEKHG5* biallelic variants (not found in ClinVar\textsuperscript{20} or the Human Gene Mutation Database\textsuperscript{21}) identified in our 13 cases, all absent from, or extremely rare in, large population databases (\textbf{Table 2}). For the missense variants p.Pro707His and p.Met557Leu, SIFT\textsuperscript{9} and PolyPhen\textsuperscript{10} predict likely deleterious and probably damaging effects respectively, lending \textit{in silico} support to their expected pathogenicity. For the non-truncating variants, the CADD score\textsuperscript{11} was more than 20, placing the variants among the 1% most deleterious substitutions. Segregation analyses showed convincing evidence for variant segregation with disease, except for the asymptomatic brother (F.6.2.5) in Family 6. As this individual is only 23 years old, he may yet develop symptoms suggestive of disease.
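The CADD cut-off of 20 follows from the score's Phred-like scaling: a scaled score \(C\) corresponds to the top \(10^{-C/10}\) fraction of possible substitutions ranked by predicted deleteriousness, so \(C \ge 20\) marks the 1% most deleterious. A small sketch of this conversion (our own illustration, not part of the CADD software):

```python
def cadd_scaled_to_fraction(scaled_score: float) -> float:
    # Phred-like scaling: scaled = -10 * log10(rank fraction),
    # so the rank fraction is recovered as 10 ** (-scaled / 10).
    return 10 ** (-scaled_score / 10)

# A scaled CADD score of 20 corresponds to the top 1% of substitutions;
# a score of 30 would correspond to the top 0.1%.
print(cadd_scaled_to_fraction(20))  # 0.01
print(cadd_scaled_to_fraction(30))  # 0.001
```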
We describe the same novel homozygous nonsense variant in *PLEKHG5* (p.Pro27Ter) in two Persian patients (Cases 1-2) and a Turkish patient (Case 3) who present at different ages of onset with motor-predominant neuropathies. Although this variant occurs in an alternatively spliced exon, isoforms comprising the variant-containing exon exhibit transcript-specific expression in the tibial nerve (Genotype-Tissue Expression Project v.8)\(^{19}\). The variants described in Families 4, 5, 7, 8 and 9 affect functionally-important domains. In Families 4 (p.Gln550Ter), 7 (p.Met557Leu) and 9 (p.Val455Gly), highly conserved residues are affected in the RhoGEF domain, important for the PLEKHG5-activated, RhoGEF-mediated NF-κB signalling pathway. This domain activates GTPases involved in signalling pathways regulating actin cytoskeleton dynamics, synapse formation and neuronal survival\(^{1,22}\). Pathogenic variants in the PH domain, which independently regulates the RhoGEF domain, have previously been shown to cause lower motor neuropathy\(^1\) and CMT\(^3\). The variants described in Families 5 (p.Pro707His) and 8 (p.Thr686Met) reside in the PH domain, which is also important in NF-κB signalling.
Within our cohort, we did not discern a clear genotype-phenotype correlation, which may reflect the limited power for detection within such a rare genetic disorder. Among carriers of the same nonsense variant (p.Pro27Ter), two patients were wheelchair-bound (Cases 2 and 3) while one remained ambulatory 15 years after disease onset (Case 1). Likewise, in Family 4 (p.Gln550Ter), both sisters had a milder and later-onset phenotype than their brother. Patients with missense variants do not appear to exhibit milder disease. The two cases of intermediate CMT within our cohort were secondary to missense variants, although nonsense variants have also been associated with previously reported cases\(^2\).
*PLEKHG5*-associated neuropathies present with predominant motor symptoms, usually starting proximally, and can be associated with intermediately slowed conduction velocities and minor sensory involvement. We report seven novel biallelic variants in *PLEKHG5* in thirteen clinically well-characterised individuals from nine unrelated families. Individual cases were classified as non-5q SMA, intermediate CMT or a proximal and distal neuropathy by the treating neurologist. *PLEKHG5* should therefore be considered as a candidate causative gene for a wide range of hereditary neuropathies, especially if motor-predominant, proximal in onset and associated with intermediate conduction velocities. Through the presentation of our cases, we extend the molecular and phenotypic spectrum to further the understanding of *PLEKHG5*-associated diseases.
Acknowledgements
ZC is funded by the Leonard Wolfson Clinical Research Fellowship in Neurodegenerative Disease. This work has been funded by the Deutsche Forschungsgemeinschaft (Wi 945/19-1; RTG1960) and CMMC (C18) to BW, a CMMC clinical scientist award to MK and a Köln-Fortune doctoral fellowship to LM. ANB is grateful to Suna and İnan Kiraç Foundation for its invaluable support and both the Foundation and Koç University-KUTTAM for the stimulating research infrastructure and environment supplied.
Consent
Written informed consent has been provided by all patients and study subjects.
Conflicts of interest
The authors declare no conflicts of interest and individual disclosure forms of all authors are submitted with the manuscript.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
1. Maystadt I, Rezsohazy R, Barkats M, et al. The nuclear factor kappaB-activator gene PLEKHG5 is mutated in a form of autosomal recessive lower motor neuron disease with childhood onset. *American journal of human genetics*. 2007;81(1):67-76.
2. Azzedine H, Zavadakova P, Plante-Bordeneuve V, et al. PLEKHG5 deficiency leads to an intermediate form of autosomal-recessive Charcot-Marie-Tooth disease. *Human molecular genetics*. 2013;22(20):4224-4232.
3. Kim HJ, Hong YB, Park JM, et al. Mutations in the PLEKHG5 gene is relevant with autosomal recessive intermediate Charcot-Marie-Tooth disease. *Orphanet journal of rare diseases*. 2013;8:104.
4. Luningschror P, Binotti B, Dombert B, et al. Plekhg5-regulated autophagy of synaptic vesicles reveals a pathogenic mechanism in motoneuron disease. *Nature communications*. 2017;8(1):678.
5. Mercuri E, Finkel RS, Muntoni F, et al. Diagnosis and management of spinal muscular atrophy: Part 1: Recommendations for diagnosis, rehabilitation, orthopedic and nutritional care. *Neuromuscul Disord*. 2018;28(2):103-115.
6. Dias CM, Punetha J, Zheng C, et al. Homozygous Missense Variants in NTNG2, Encoding a Presynaptic Netrin-G2 Adhesion Protein, Lead to a Distinct Neurodevelopmental Disorder. *American journal of human genetics*. 2019;105(5):1048-1056.
7. Karakaya M, Storbeck M, Strathmann EA, et al. Targeted sequencing with expanded gene profile enables high diagnostic yield in non-5q-spinal muscular atrophies. *Human mutation*. 2018;39(9):1284-1298.
8. Özoğuz A, Uyan Ö, Birdal G, et al. The distinct genetic pattern of ALS in Turkey and novel mutations. *Neurobiology of aging*. 2015;36(4):1764.e1769-1764.e1718.
9. Kumar P, Henikoff S, Ng PC. Predicting the effects of coding non-synonymous variants on protein function using the SIFT algorithm. *Nature Protocols*. 2009;4(7):1073-1081.
10. Adzhubei IA, Schmidt S, Peshkin L, et al. A method and server for predicting damaging missense mutations. *Nature methods*. 2010;7(4):248-249.
11. Rentzsch P, Witten D, Cooper GM, Shendure J, Kircher M. CADD: predicting the deleteriousness of variants throughout the human genome. *Nucleic Acids Research*. 2018;47(D1):D886-D894.
12. Karczewski KJ, Francioli LC, Tiao G, et al. The mutational constraint spectrum quantified from variation in 141,456 humans. *Nature*. 2020;581(7809):434-443.
13. Fattahi Z, Beheshtian M, Mohseni M, et al. Iranome: A catalog of genomic variations in the Iranian population. *Human mutation*. 2019;40(11):1968-1984.
14. Rossor AM, Kalmar B, Greensmith L, Reilly MM. The distal hereditary motor neuropathies. *J Neurol Neurosurg Psychiatry*. 2012;83(1):6-14.
15. Pipis M, Rossor AM, Laura M, Reilly MM. Next-generation sequencing in Charcot–Marie–Tooth disease: opportunities and challenges. *Nature Reviews Neurology*. 2019.
16. Prior TW, Leach ME, Finanger E. Spinal Muscular Atrophy. In: Adam MP, Ardinger HH, Pagon RA, et al., eds. *GeneReviews®*. Seattle (WA): University of Washington, Seattle; 1993.
17. Bansagi B, Griffin H, Whittaker RG, et al. Genetic heterogeneity of motor neuropathies. *Neurology*. 2017;88(13):1226-1234.
18. Farrar MA, Kiernan MC. The Genetics of Spinal Muscular Atrophy: Progress and Challenges. *Neurotherapeutics: the journal of the American Society for Experimental NeuroTherapeutics*. 2015;12(2):290-302.
19. GTEx Consortium. The Genotype-Tissue Expression (GTEx) project. *Nature genetics*. 2013;45(6):580-585.
20. Landrum MJ, Lee JM, Benson M, et al. ClinVar: improving access to variant interpretations and supporting evidence. *Nucleic Acids Res*. 2018;46(D1):D1062-d1067.
21. Stenson PD, Mort M, Ball EV, Shaw K, Phillips A, Cooper DN. The Human Gene Mutation Database: building a comprehensive mutation repository for clinical and molecular genetics, diagnostic testing and personalized genomic medicine. *Human genetics*. 2014;133(1):1-9.
22. Estrach S, Schmidt S, Diriong S, et al. The Human Rho-GEF trio and its target GTPase RhoG are involved in the NGF pathway, leading to neurite outgrowth. *Current biology: CB*. 2002;12(4):307-312.
Figure 1. Positions of variants and family pedigrees of cases. A. Schematic diagram showing PLEKHG5 based on NCBI reference sequence NM_198681.3 with numbered exons in the top panel. Note that variants from previously reported cases have also been converted with reference to NM_198681.3 for consistency. The variants reported in our study are shown in red and previously reported variants are shown in black. The middle panel represents the locations of the variants across the PLEKHG5 protein domains. The bottom panel shows protein ortholog alignments of missense variants reported in our study and those previously reported. The asterisk indicates the position of the amino acid change, with darker shades indicating a more conserved sequence. Previously reported variants with the original transcripts are reported in Table 1: Azzedine et al.: PLEKHG5 [c.1940T>C: NM_020631.6 (p.Phe647Ser)]; and Kim et al.: PLEKHG5 [c.1988C>T: NM_020631.5 (p.Thr663Met)] and PLEKHG5 [c.2458G>C: NM_020631.5 (p.Gly820Arg)]. Panel B: pedigree of Family 1; Panel C: pedigree of Family 2; Panel D: pedigree of Family 3; Panel E: pedigree of Family 4; Panel F: pedigree of Family 5; Panel G: pedigree of Family 6; Panel H: pedigree of Family 7; Panel I: pedigree of Family 8, where ? indicates possible symptoms; Panel J: pedigree of Family 9. -/- indicates a homozygous and +/- a heterozygous state of the pathogenic variant.
Figure 2. Clinical and phenotypic information. Panels A, B, C and D relate to Case 1. Panels E, F, G, H and I relate to Case 2. Panels J and K relate to Case 3, Panels L to T relate to Case 11 and Panels U to X relate to Case 12. Panel A shows evidence of arm and forearm wasting on the left and Panel B shows evidence of scapular winging (Case 1). Panel C displays MRI T1 images that show evidence of moderate fatty infiltration in the thighs and calves (Case 1). Panel D shows evidence of diffuse deep white matter increased signal on FLAIR T2 and T2 axial MRI brain sections. Panel E (Case 2) shows that the small muscles of the hand were more severely affected on the left than on the right, with bilateral arm wasting. There is evidence of severe lumbar hyperlordosis and bilateral thigh atrophy (Panels F and G). T1-weighted MRI images show evidence of fatty infiltration in the thighs (Panel H) more than the legs (Panel I). There was wasting in the distal upper limb and hand muscles (Panels J and K, Case 3); a video of the examination of this patient is shown in Supplementary Video 1. Panel L shows evidence of relatively preserved lower limb muscle bulk and mild pes valgus deformity (Case 11) and Panels M and N show evidence of proximal upper limb and shoulder girdle weakness as well as shoulder girdle atrophy. Axial T2-weighted MRI of the left shoulder and upper arm shows reduced muscle bulk, partial fatty replacement of the subscapularis (red arrows, Panel O) and to a lesser degree of the triceps muscle (yellow arrows, Panel P). Coronal T1-weighted MRI of the left shoulder and upper arm shows reduced muscle bulk and partial fatty replacement
of the subscapularis muscle (orange arrows, Panel Q). Axial T2-weighted MRI of the thighs shows symmetric atrophy of the gluteus maximus and tensor fasciae latae muscles with prominent septal fatty tissue in both muscles (Panels R and S). T1-weighted coronal MRI of the thighs shows increased septal fat in the gluteus medius and gluteus maximus muscles (Panel T). Panels U-X: Muscle biopsy findings of Case 12. Panels U (x100), V (x200): Microphotographs of haematoxylin and eosin-stained transverse sections of the muscle fibres showing variation in fibre size with many angulated atrophic fibres (asterisk). Scattered hypertrophic fibres are seen. Panels W, X (both x200): Microphotographs of Masson’s trichrome-stained transverse sections of the muscle fibres showing variation in fibre size with many angulated atrophic fibres (asterisk) and clumped nuclei (arrows). Scattered hypertrophic fibres are seen. A video of the examination of this patient is shown in Supplementary Video 2.
**Table 1. Comparison of PLEKHG5-associated cases.** Cases reported in this article for Families 1 – 9 are reported with reference to NM_198681.3. Previously reported cases are shown as published. Hmz = homozygous, CHZ = compound heterozygous, AR = autosomal recessive, N/A = non-applicable, where the investigation was not carried out. UL = upper limb, LL = lower limb. CMAP is compound muscle action potential. MNCV is motor nerve conduction velocity. SNAP is sensory nerve action potential. CVs is conduction velocities. DMLs is distal motor latencies. SMA is spinal muscular atrophy and CMT is Charcot-Marie-Tooth disease.
**Table 2. Variants in PLEKHG5 (all related to NM_198681.3) and associated in silico predictions and allele frequencies.** As described within the Methods: Sorting Intolerant from Tolerant (SIFT), Prediction of Functional Effect of Human nsSNPs (PolyPhen) and Combined Annotation Dependent Depletion (CADD) scores were used. Allele frequencies were interrogated through the Genome Aggregation Database (gnomAD) v2.1; Iranome: a catalogue of variants collated from whole exome sequencing of 800 Iranian individuals; and the Queen Square Institute of Neurology (QS IoN) in-house database comprising exome sequencing data of 15,000 individuals. N/A indicates “non-applicable”.
**Supplementary Data**
**Supplementary Table 1. Nerve conduction study results.** CMAP is compound muscle action potential. MNCV is motor nerve conduction velocity. SNAP is sensory nerve action potential. SNCV is sensory nerve conduction velocity. EDB is extensor digitorum brevis. TA is tibialis anterior. Note, the reference range is provided only as a guide to aid the reader's interpretation of results and not for diagnostic purposes.
**Supplementary Video 1.** Examination of Case 3 aged 44 years shows evidence of proximal and distal lower limb weakness. There is evidence of wasting in the distal upper limb and hand muscles. The patient is unable to walk unaided and has a high-stepping and waddling gait.
**Supplementary Video 2.** Examination of Case 12 aged 42 years showing normal cranial nerve examination, proximal and distal weakness of the lower limbs, high stepping gait and wasting of the distal lower limb muscles.
| | Family 1 | Family 2 | Family 3 | Family 4 | Family 5 | Family 6 | Family 7 | Family 8 | Family 9 | SMA | CMT Case I | CMT Case II | CMT Case III |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reference | Case 1 | Case 2 | Case 3 | Case 4, 5, 6 | Case 7, 8 | Case 9, 10 | Case 11 | Case 12 | Case 13 | Maystadt et al. | Azzedine et al. | Azzedine et al. | Kim et al. |
| Mutation | Hmz nonsense | Hmz nonsense | Hmz nonsense | Hmz nonsense | Hmz missense | Hmz nonsense | Hmz missense | Hmz missense | Hmz missense | Hmz missense | Hmz nonsense | Hmz nonsense | CHZ missense |
| Nucleotide change | c.79_83del | c.79_83del | c.79_83del | c.1648C>T | c.2120C>A | c.289delC | c.1669A>C | c.2057C>T | c.1364T>G | c.1940T>C (NM_020631.6) | c.269delC (NM_198681.3) | c.1143_1149dup (NM_198681.3) | c.1988C>T, c.2458G>C (NM_020631.5) |
| Amino acid change | p.Pro27Ter | p.Pro27Ter | p.Pro27Ter | p.Gln550Ter | p.Pro707His | p.Arg97GlyfsTer38 | p.Met557Leu | p.Thr686Met | p.Val455Gly | p.Phe647Ser | p.Pro90HisfsTer45 | p.Glu384Ter | p.Thr663Met, p.Gly820Arg |
| Sex | Male | Female | Female | 1 male, 2 female | 1 male, 1 female | 1 female, 1 male | Male | Male | Male | 3 male, 2 female | 2 male, 2 female | 1 male, 1 female | Female |
| Inheritance | AR | AR | AR | AR | AR | AR | AR | AR | AR | AR | AR | AR | AR |
| Ethnic Origin | Iran | Iran | Turkey | Turkey | Turkey | Syria | India | India | Mali | Portugal | Morocco | Korea |
| Phenotype | Predominant motor neuropathy | SMA | Proximal and distal motor neuropathy | Predominant motor neuropathy | SMA | SMA | Motor & sensory neuropathy | DSMA4 | Motor & sensory neuropathy | Motor & sensory neuropathy | Motor & sensory neuropathy |
| Onset (years) | 19 | 8 | 14 | 13 – 25 | 13 | 13 and 15 | 10 | 8 | 16 | 2 – 11.5 | 28 – 44 | 7 and 20 | 8 |
| Symptom at onset | Proximal UL weakness | Proximal LL weakness | Proximal LL weakness | Proximal LL weakness | Proximal LL weakness | Proximal upper and LL weakness | Proximal UL and shoulder girdle weakness | Proximal LL weakness | Proximal LL weakness | Proximal LL weakness | Distal limb weakness | Distal limb weakness | Distal LL weakness |
| Sensory loss | Mild distal LL | No | No | Distal UL and LLs | No | No | No | No | Moderate distal UL & LLs | No | Yes (Distal > proximal) | Yes (Distal > proximal) | Yes |
| Areflexia | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Weakness | Proximal>distal | Proximal>distal | Generalised | Generalised | Proximal>distal | Proximal>distal | Proximal>distal | Distal>Proximal | Generalised | Generalised | Distal>proximal | Distal>proximal | Distal>proximal |
| Cranial nerve involvement | No | No | No | No | No | No | No | No | No | None known | None known | None known | No |
| Foot deformity | None | None | None | None | None | None | None | Pes cavus / Hammer toes/ equino-cavovarus | None | Yes | Yes | Yes | Yes |
| Spine deformity | None | Lumbar hyperlordosis | None | None | None | Lumbar hyperlordosis | None | None | None | Scoliosis | None | Scoliosis | No |
| Muscle atrophy | Left arm and forearm | Proximal > distal | Distal UL | Proximal > distal | Scapulohumeral and hip girdle | Upper proximal predominant | LL distal predominant | Right UL | Generalised | Distal upper and LLs | Distal upper and LLs | Proximal < distal |
| Respiratory dysfunction | None | None | None | None | None | None | None | None | Yes | None | None | None | None |
This article is protected by copyright. All rights reserved
| Cases | Cardiac involvement | Cortical involvement | NCS | EMG | Sural nerve biopsy | Quadriceps muscle biopsy | Brain MRI (age at MRI) | Lower limb muscle MRI |
|-------|---------------------|----------------------|-----|-----|-------------------|--------------------------|------------------------|-----------------------|
| 1 – 3 | None | None clinically | Moderately reduced MNCV, preserved SNAPs | Chronic neurogenic changes | N/A | Neurogenic muscle atrophy (Case 9) | White matter change (33) | Moderate fatty infiltration |
| 4 – 6 | None | None | Reduced CMAPs, preserved SNAPs | Chronic neurogenic changes | N/A | Neurogenic muscle atrophy (Figure 2U-X) | No abnormalities detected (19) | Severe atrophy, fatty infiltration |
| 7 – 8 | None | None | Prolonged DMLs and reduced CMAPs | Chronic neurogenic changes | N/A | Neurogenic muscle atrophy | N/A | N/A |
| 9 – 10| None | None | Moderately reduced MNCV, CMAP | Chronic neurogenic changes | N/A | Neurogenic muscle atrophy | N/A | N/A |
| Cases | Position (hg38) | dbSNP ID | cDNA change (NM_198681.3) | Amino acid change | SIFT | PolyPhen | CADD | gnomAD v2.1 | Iranome | QS IoN |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 – 3 | chr1:6496553 | rs1439392787 | c.79_83del | p.Pro27Ter | N/A | N/A | 28.1 | 1/227304 (4.4E-6) | absent | absent |
| 4 – 6 | chr1:6470860 | N/A | c.1648C>T | p.Gln550Ter | N/A | N/A | 41 | absent | absent | absent |
| 7 – 8 | chr1:6469588 | N/A | c.2120C>A | p.Pro707His | Deleterious | Probably damaging | 28.3 | absent | absent | absent |
| 9 – 10| chr1:6476022 | N/A | c.289delC | p.Arg97GlyfsTer38 | N/A | N/A | 23.7 | absent | absent | absent |
| 11| chr1:6470839 | N/A | c.1669A>C | p.Met557Leu | Deleterious | Probably damaging | 25.8 | absent | absent | absent |
| 12| chr1:6469651 | rs553151077 | c.2057C>T | p.Thr686Met | Damaging | Probably damaging | 29.8 | 4/282168 (1.42E-5) | absent | absent |
| 13 | chr1:6471636 | N/A | c.1364T>G | p.Val455Gly | Damaging | Possibly damaging | 33 | absent | absent | absent |
[Figure 1 image. Panel A: schematic of the PLEKHG5 protein with its RhoGEF and PH domains (position markers at residues 1, 396, 583, 641 and 742) and protein ortholog alignments (human, chimpanzee, mouse, chicken, Xenopus) surrounding the reported missense variants. Panels B to J: pedigrees of Families 1 to 9 with their variants: Families 1, 2 and 3 (Cases 1, 2 and 3), c.79_83del, p.Pro27Ter; Family 4 (Cases 4 to 6), c.1648C>T, p.Gln550Ter; Family 5 (Cases 7 and 8), c.2120C>A, p.Pro707His; Family 6, c.289delC, p.Arg97GlyfsTer38; Family 7 (Case 11), c.1669A>C, p.Met557Leu; Family 8 (Case 12), c.2057C>T, p.Thr686Met; Family 9 (Case 13), c.1364T>G, p.Val455Gly.]
Dynamic Traffic Grooming: The Changing Role of Traffic Grooming
CSC Technical Report TR-2006-15
Shu Huang and Rudra Dutta
Computer Science Department, North Carolina State University
Raleigh, NC
{firstname.lastname@example.org, email@example.com}
Abstract
Traffic grooming refers to the techniques used to aggregate subwavelength traffic onto high speed lightpaths, while at the same time minimizing some measure of network cost, usually optoelectronic equipment cost. In the last few years, traffic grooming has come to be recognized as an important research area, and has produced extensive literature. Recently, the dynamic traffic grooming problem, where the traffic carried in the network varies with time, has gained in interest. This is because of the growing applicability of QoS concerns and associated network design methodologies in networks closer to the individual users than backbone networks, where the traffic cannot be well modeled as essentially static. A number of studies in this area have recently appeared in the literature, but there is as yet no good resource that introduces a reader to the problem in all its forms and provides a review of the literature. In this paper, we fill this void by presenting a comprehensive survey of the literature in this emerging topic, and indicating some essential further directions of research in dynamic traffic grooming.
1 Introduction
Computer and communication networking have been maturing over the past several decades, and have moved beyond the age of survival to the age of sophistication. The expectations of the end user from the network have also changed, and the concepts of Quality of Service (QoS) and Service Level Agreements (SLAs) have become pervasive. Until recently, it was assumed that such concerns were operative primarily in transport networks, that is, at the highest level of aggregation of traffic in the planetary network hierarchy. At lower levels of aggregation, the network was seen to be composed of traffic networks, where QoS was neither feasible nor desired.
In this context, *traffic grooming* became an active area of research starting in the late 1990s. The new generation of optical networks utilizing Wavelength Division Multiplexing (WDM) is currently being deployed to form the backbone networks of tomorrow. In WDM, multiple wavelength channels can be used over the same physical link of optical fiber using frequency multiplexing. Each wavelength channel can carry 10 Gbps with current technology, and higher rates are foreseen for the near future. Further, *wavelength routing* technology makes it possible to forward an optical signal at an intermediate node entirely in the optical plane, forming clear end-to-end optical channels called *lightpaths*. WDM networks utilizing wavelength routing
\*This work was supported in part by NSF grant #ANI-0322107.
can be modeled as multi-layer networks that consist of a virtual layer formed by such lightpaths implemented over a physical topology of optical fiber, with customer traffic routed at a second level, over the lightpaths of the virtual topology. The customer traffic demands are expected to be generally of much smaller bandwidth than the capacity of a single wavelength channel. Moreover, the traffic demands will be of various rates. For example, in generalized MPLS (GMPLS) [1, 2] networks, the traffic carried by this virtual layer consists of label switched paths, which can have arbitrary bandwidth requirements. Because of the significant disparity between the typical bandwidth request of a traffic component and the much higher capacity of a wavelength, it is well recognized that, to reduce the network cost, low speed traffic (referred to as subwavelength traffic) must be multiplexed (using Time Division Multiplexing) into lightpaths.
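The multiplexing step described above can be illustrated with a toy packing heuristic; the demand rates, the 10 Gbps channel capacity, and the first-fit rule below are illustrative assumptions, not a grooming algorithm from the literature:

```python
# First-fit packing of subwavelength demands (in Gbps) into
# wavelength channels of fixed capacity. Illustrative sketch only.

WAVELENGTH_CAPACITY = 10.0  # Gbps per wavelength channel

def first_fit_groom(demands, capacity=WAVELENGTH_CAPACITY):
    """Assign each demand to the first wavelength with spare capacity.
    Returns a list of lists: the demands packed onto each wavelength."""
    wavelengths = []  # each entry is the list of demands on that channel
    for d in demands:
        for w in wavelengths:
            if sum(w) + d <= capacity:
                w.append(d)
                break
        else:
            wavelengths.append([d])  # open a new wavelength channel
    return wavelengths

# Example: GMPLS-style demands of arbitrary subwavelength rates.
demands = [2.5, 0.622, 1.0, 2.5, 9.9, 0.155, 2.5, 1.0]
packed = first_fit_groom(demands)
print(len(packed))  # number of lightpaths (wavelengths) needed
```

Even this crude heuristic shows the point of grooming: eight demands are carried on far fewer wavelengths than one per demand, each channel staying within its capacity.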
However, wavelength routing only allows the entire wavelength channel to be switched at the optical plane. If differentiated routing and forwarding of subwavelength traffic components contained in a wavelength channel is required, the optical signal must be terminated using Line Terminating Equipment (LTE), converted into digital electronic signals, and input to an electronic logic device such as a traditional electronic router. At the end of the electronic routing operation, the packets must again be converted to optical signal and injected into outgoing lightpaths. This operation is called Opto-Electro-Optic (OEO) conversion, and is generally not desirable because it offsets the high speed and reliability of optical transport, and the OEO device is significantly more expensive than the optical switching equipment. Thus the subwavelength traffic must be packed into full wavelengths such that the cost of such OEO conversion may be optimized globally. This is the problem usually referred to as traffic grooming. The reader is referred to [3] for a survey.
In this literature, researchers have generally assumed that the magnitudes of traffic demands (given as a single traffic matrix) do not change with time. This assumption is reasonable for the following two reasons. First, in many core networks, low speed traffic requests are aggregated over several hierarchical levels of networks, and at many levels the bandwidth of the higher level network is sufficient to carry the aggregated flows from the tributary networks in terms of the average rates, but not the peak rates. Thus there is periodic buffer buildup and drainout, leading to some smoothing of traffic burstiness in such networks. Second, because of the importance of high speed traffic demands (in terms of the revenue the carrier will obtain), the network is designed such that the peak rates of traffic demands, which do not change drastically, are satisfied. Both reasons make the problem amenable to static analysis.
However, recently the usefulness of the static approach has been seen as having clear limitations. As WDM optical networks are being deployed not only in Wide Area Networks (WAN) but also in Metropolitan Area Networks (MAN) and Local Area Networks (LAN), traffic demands have shown different dynamics. At the same time, the emergence of end-to-end QoS concerns has made it desirable to apply network design and resource provisioning techniques that were considered more suited to backbone networks to these lower level networks. In such networks, the magnitudes of traffic demands are more appropriately modeled as some functions of time. The traffic grooming problem has been generalized into this arena, giving rise to dynamic traffic grooming.
The static traffic grooming problem can be conceptually decomposed into three sub-problems: (i) the virtual topology design subproblem, (ii) the routing and wavelength assignment (RWA) subproblem, and (iii) the routing of traffic demands on the lightpaths, or grooming, subproblem. Fig. 1 shows the layered nature of these subproblems. Briefly, the network physical topology of optical fibers is an input to the problem, as is the set of traffic demands to be satisfied. The network designer must decide what set of lightpaths to implement in the network; this is called the virtual topology subproblem. Having decided the virtual topology, the designer must specify a physical route for a lightpath from each source to each destination and assign to each lightpath a wavelength out of a given set, such that no more than one lightpath of a given wavelength traverses
each link, and the wavelength assigned to each lightpath is the same on all physical links. This is the Routing and Wavelength Assignment (RWA) problem, which has been extensively discussed and studied in the optical networking literature; see [3] and references therein for a detailed discussion. Finally, the subwavelength traffic demands must be routed over the lightpaths formed, so that each traffic demand is carried by a sequence of lightpaths that form a path in the virtual topology which carries the traffic from its source to its destination. Traffic is transferred from one lightpath to the next in the sequence by OEO routing. Global minimization of OEO routing, or of the OEO equipment required at network nodes, is often the goal of static traffic grooming, as mentioned above.
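As a concrete illustration of the wavelength assignment part of RWA, the following sketch applies a first-fit heuristic along a precomputed route under the wavelength-continuity constraint; the data structures and function name are invented for illustration, not taken from any particular RWA algorithm in the literature:

```python
def assign_wavelength(route, link_usage, num_wavelengths):
    """First-fit wavelength assignment under the wavelength-continuity
    constraint: choose the lowest-indexed wavelength that is free on
    every physical link along the (precomputed) route.
    route: node sequence; link_usage: {(u, v): set of used wavelengths}."""
    links = list(zip(route, route[1:]))  # consecutive node pairs
    for w in range(num_wavelengths):
        if all(w not in link_usage.get(link, set()) for link in links):
            for link in links:  # reserve wavelength w on every link
                link_usage.setdefault(link, set()).add(w)
            return w
    return None  # blocked: no single wavelength is free end to end

usage = {}
print(assign_wavelength(["a", "b", "c"], usage, 2))  # first lightpath gets wavelength 0
print(assign_wavelength(["b", "c"], usage, 2))       # link (b, c) busy on 0, so wavelength 1
print(assign_wavelength(["a", "b", "c"], usage, 2))  # no common free wavelength: blocked
```

The third call fails even though capacity remains on link (a, b); this is exactly the inefficiency the wavelength-continuity constraint introduces, and what wavelength converters would remove.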
The dynamic traffic grooming problem can be understood in terms of exactly the same subproblems. However, the objective of grooming must be seen in a new light. Unlike the static problem, in the dynamic traffic grooming problem the solutions to these subproblems need to satisfy time-varying traffic. Thus the solution itself must vary with time. At the least, the mapping of traffic demands to the virtual topology must change. Also, network designers can take advantage of reconfigurable optical switches to dynamically adjust the virtual topology in response to traffic demand changes; in that case, the RWA must also be readjusted to map the changed virtual topology onto the unchanging physical topology.
It is important to note that the focus of grooming traffic shifts as a consequence of the above. Reduction of OEO costs may continue to be an objective of traffic grooming. But the primary objective may now well be a minimization of the blocking behavior of the network; this is not particularly relevant in static traffic grooming because with good planning the entire traffic matrix is expected to be carried by the network, but making a similar 100% guarantee under statistically described dynamic traffic may be prohibitive in cost and not desirable. Similarly, the consideration of fairness is not relevant for the static problem, but may become an important one for the dynamic case.
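For intuition about blocking as an objective, the classical Erlang-B formula gives the blocking probability on a single link when each wavelength is treated as one circuit and arrivals are Poisson; this ignores grooming and multi-hop effects, so it is only a first approximation:

```python
def erlang_b(offered_load, channels):
    """Erlang-B blocking probability for a link with the given number
    of circuits (here: wavelengths) and offered load in Erlangs,
    computed with the standard numerically stable recursion
    B(0) = 1,  B(c) = load * B(c-1) / (c + load * B(c-1))."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# 5 Erlangs offered to a link with 8 wavelengths: roughly 7% of
# connection requests are blocked under this single-link model.
print(erlang_b(5.0, 8))
```

The recursion also makes the design trade-off visible: adding wavelengths drives blocking down sharply, which is why a 100% carriage guarantee under statistically described traffic tends to be prohibitive in cost.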
Another change of focus relates to the complexity of the grooming solution. In static grooming, solution approaches of significant computational complexity may be practical, since such solutions are expected to be computed off-line, with a given estimate of traffic that is expected to be valid for a reasonably long time. For the dynamic case, the solution will be computed on-line, and recomputed over normal network time scales. Thus it is essential that the algorithms to compute new solutions be of low computational complexity. Similarly, an algorithm that can be computed
in a distributed manner is likely to be of far more practical use in the dynamic context than one that requires a centralized approach; this distinction is less significant in the static case. Thus in various ways, the goals and priorities of grooming change in the dynamic traffic context, and this is what we refer to as the changing role of traffic grooming. Finally, as the field evolves, it is likely to come to be perceived as a general class of network design problems where the cost component is largely concentrated in specialized network node equipment that will enter the mainstream in the future, such as optical drop-and-continue, wavelength converters, or OTDM switches.
The connection with the work of the Internet Engineering Task Force (IETF) in the GMPLS context is worth remarking upon. The original definition of Multi-Protocol Label Switching (MPLS) in the Networking Working Group of the IETF, building on earlier paradigms of tag switching and cut-through switching, was motivated by the need to reduce the forwarding burden on core routers. In label switching, an additional header carrying information about the flow to which each packet belongs is attached to Internet packets. Once a flow, called a Label Switched Path (LSP) in MPLS, is set up, a Label Switching Router (LSR) in the path can forward packets bearing the label corresponding to the flow with much less processing than for a normal packet. In Generalized MPLS (GMPLS) [1, 2], time slot positions for TDM transport and wavelength channels for optical transport can also act as labels. It was soon realized by the networking community that label switch routing could also serve as an enabling mechanism for traffic engineering (TE) and flow-level QoS, because it allowed the identification of flows to routers. There has been significant recent work in defining extensions and signaling for the interaction of GMPLS and underlying networking layers, including SONET and other optical transports, and the communication of traffic engineering information between underlying networks and GMPLS [4, 5, 6].
However, these developments have focused (as appropriate for the role of the IETF) on enabling technology rather than design strategies. In keeping with the original guiding principles of the Internet, the network administrator is provided mechanisms to set up TE or QoS actions; but what actions are to be taken is left up to the administrator, who must look elsewhere for algorithms that provide policy or strategy decisions. To put it simply, all the mechanisms to set up LSPs are provided, but which LSPs to set up must be decided by the network administrator or operator. It is in this sense that research work such as traffic grooming provides a necessary complement to the development of enabling technology.
Because of the wide deployment of WDM networks, efficient operation under dynamic traffic is an area of practical interest to service providers. Efforts at different layers have already started in the arena of enabling technology to make the network friendly to dynamic traffic. At the lower layer, in the legacy Synchronous Optical Network (SONET) networks, the hierarchical rates defined for multiplexing/demultiplexing make it inefficient to carry dynamic traffic requests. To overcome this intrinsic inefficiency, two mechanisms, Virtual Concatenation (VCAT) (as defined by the International Telecommunication Union in its recommendation [ITU-T G.707]) and the Link Capacity Adjustment Scheme (LCAS) (as defined in [ITU-T G.7042]) have been developed for Next Generation SONET. At the higher layer, part of the motivation to generalize MPLS to GMPLS has been to provide a uniform control plane to LSRs that operate at IP/MPLS level as well as network equipment that operate at fiber, wavelength and circuit level. Dynamic traffic grooming is thus a timely and emerging research area. Our focus in this survey is this research area, which is expected to provide algorithms that supply designs or policies for network operation.
While a significant number of studies have appeared recently on dynamic traffic grooming, there is as yet no single resource that provides a comprehensive introduction to the problem as well as to the literature. In this paper, we hope to fill this void by providing an insight into the factors that must be considered in formulating a dynamic traffic grooming problem, and presenting a survey of the literature.
The rest of the paper is organized as follows. In Section 2, we briefly discuss network node architectures for traffic grooming networks, because it is an important factor in dictating the goals of the network design problem. We provide discussion regarding the formulation of the dynamic traffic grooming problem either as a resource allocation problem or a policy design problem in Section 3. This also allows us to present a classification of the literature. Section 4 presents a detailed literature survey according to our classification. We conclude with a few remarks on future directions in Section 5.
2 Node Architectures
The extent to which subwavelength traffic components may be manipulated (and thus what grooming actions may be performed) is determined by the network equipment available at the nodes. Accordingly, in this section, we provide a brief overview of nodal capabilities. A more detailed discussion, including some discussion of future switch capabilities, may be found in [7].
Generally speaking, the traffic entering or leaving a node can be described by a tuple (optical fiber, wavelength, time slot). Thus, a “perfect” switching node would perform a complete permutation, i.e., traffic from any fiber, any wavelength, and any time slot could be switched to any other fiber, wavelength, and time slot. However, due to considerations of cost and scalability, different node architectures with less than perfect switching capability are deployed in practice. These impose different constraints on the grooming problem. We will show in Section 3.3.1 how a mathematical formulation of the dynamic traffic grooming problem requires careful examination of the node architectures. A generic modeling of the constraints that applies to different architectures is also an interesting problem.
The basic conceptual building blocks of such switches can be broadly divided into optical components, which manipulate optical signals and thus operate at the level of entire wavelength channels, and electronic or digital components, which are capable of manipulating individual bytes and packets as electronic signals, as in traditional routers and electronic computers. Optical networking switches will in general have some of each type of component, and can be characterized by the capabilities of each. When a number of signals are multiplexed into a carrier, multiplexers (MUX) and de-multiplexers (DEMUX) are required at the sender and receiver respectively. If a piece of equipment has the capability to de-multiplex signals, then selectively switch some of them to another switching equipment at the same node while passing others through to a multiplexer for outgoing signals, it is called an Add-Drop-Multiplexer (ADM). Such equipment makes only one decision for each de-multiplexed flow (whether to drop it or to pass it through). If, in addition, the equipment has the capability to choose which of several outgoing ports a signal is passed through to, it is called a Cross-connect (XC).
SONET ring networks were one of the first optical networking architectures to be used in practice, and continue to be important today. In SONET rings, only one optical channel on each fiber is used. Fibers are usually interconnected by SONET Add-Drop-Multiplexers (SADMs), which are digital equipment that have the capability to switch traffic at time-slot level. Thus the MUX/DEMUX refers to individual traffic streams time-division multiplexed in the optical signal. At a ring node, there is only one other node from which an incoming link exists, and only one other node to which an outgoing link exists. Thus Add-Drop functionality is all that is required. In SONET mesh networks, fibers are interconnected by Digital Cross-Connects (DXCs or DCSs), which, unlike ADMs, handle multiple input and output fiber ports. DXCs, which perform switching at time-slot level, can be characterized by $p/q$, where $p$ represents the port bit rate and $q$ represents the bit rate that is switched as an entity. For a comprehensive description of SONET, see [8].
For WDM networks, multiple wavelength channels are frequency multiplexed in each fiber link, and lower rate traffic streams are time division multiplexed in each wavelength channel. The digital equipment at the node can perform switching actions on the lower rate traffic streams by utilizing the Synchronous Transport Signal (STS) structure in the optical signal. In WDM ring networks, an Add-Drop method as above can be used, but now *Optical ADMs* (OADMs) are used to selectively by-pass some wavelengths along the ring, while others are dropped into digital equipment, which may be SADMs. This forms the simplest node structure that can be used in optical grooming networks, and is shown in Fig. 2. The various wavelength channels frequency multiplexed in the fiber are represented by $\lambda_1 \ldots \lambda_n$. The by-passing of wavelength channels creates *lightpaths*, channels that are optically continuous over multiple physical fiber links. In Fig. 2, the first four wavelengths are by-passed in this fashion, whereas the last two are dropped (and added, at the output). It is possible to re-generate a lightpath signal on a different wavelength entirely by optical hardware (without converting the signal into the digital electronic plane); this is called wavelength conversion. However, such equipment is quite costly, and in many cases practical node architectures may not include such converters. Without wavelength conversion capability, lightpaths must obey the *wavelength-continuity constraint*, i.e. a lightpath must be assigned the same wavelength on all the fiber links it traverses. For each added/dropped wavelength, an SADM is dedicated to electronically process the traffic that wavelength carries. The number of SADMs at a node determines the number of wavelength channels whose traffic can be switched at the timeslot level; this number thus characterizes in part the switching power of the node.
It is well recognized that the cost of transceivers is the main contributor to the network cost; therefore the number of SADMs available at an OADM is usually either the objective to minimize, or a constraint to which the optimization problem is subject. This problem is referred to as ADM constrained grooming in [9]. Furthermore, if the SADMs on the different wavelengths are isolated (as shown in Fig. 2), not only lightpaths but also individual traffic components need to obey the wavelength-continuity constraint, because traffic dropped at a wavelength has to be sent back onto the same wavelength in order to be forwarded to its destination, as in [9]. This constraint can be relaxed if a digital switching fabric is available such that the traffic added/dropped by the SADMs can be reshuffled and re-injected into other SADMs, resulting in a more powerful switching node. Fig. 3 shows an example of such a node, with optical MUX/DEMUX and OADM, and SADMs on each dropped wavelength connected by a DXC.
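The effect of isolated versus interconnected SADMs on traffic forwarding can be captured in a one-line predicate; this is a deliberately simplified model, and the function and its arguments are invented for illustration:

```python
def can_forward(has_dxc, in_wavelength, out_wavelength):
    """With isolated SADMs, a traffic component dropped from a
    wavelength must be re-injected on that same wavelength; a DXC
    interconnecting the SADMs removes this restriction."""
    return has_dxc or in_wavelength == out_wavelength

print(can_forward(False, 1, 2))  # False: isolated SADMs block the switch
print(can_forward(True, 1, 2))   # True: the DXC reshuffles across wavelengths
```

A grooming formulation for a given node architecture would include a constraint of this form for every traffic component transiting the node, which is why the node architecture must be examined before the problem is formulated.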
In contrast to OADMs, which usually have predetermined add/drop wavelengths, *Reconfigurable OADMs* (ROADMs) allow a network administrator or operator to dynamically select which wavelengths to drop or by-pass. The reconfigurability does not represent an increase in the power of the switch in terms of how much traffic can be switched, but introduces more flexibility. The maximum number of wavelengths that can be dropped characterizes the power of the switch, as does the digital switching capability (as before). For a comparison of different ROADM architectures, refer to [10]. An example is shown in Fig. 4.
In all the above, the optical part of the switch is only an ADM, and the electronic part is an ADM or an XC. These can all be viewed as special cases of *Optical Cross-Connects* (OXCs), the most general class of grooming switches, which are widely expected to be deployed in realistic mesh topologies. In such switches, the optical ADM is replaced by an optical XC. Thus wavelength channels can not only be by-passed to form lightpaths, but these lightpaths can also be switched to specific output ports. An OXC is similar to an ROADM, but can accommodate incoming fibers from multiple nodes and, similarly, outgoing fibers to multiple nodes. Three broad classes of OXCs have been defined (refer to Telcordia's Optical Cross-connect Generic Requirements, GR-3009-CORE):
- **Fiber switch cross-connect**: the entire signal carried by an incoming fiber is switched to an outgoing fiber; it cannot perform different actions for different wavelength channels or timeslots.
- **Wavelength Selective Cross-connect**: can switch a subset of the wavelengths from an input fiber to an output fiber, obeying the wavelength-continuity constraint.
- **Wavelength Interchanging Cross-connect**: WSXC with wavelength conversion capability.
In addition, time-slot multiplexing/demultiplexing and grooming can be performed by a DXC if one is incorporated in the node. Fig. 5 shows an example of an OXC that has grooming capability,
with $m$ input and $m$ output fiber ports. (Usually, the number of input fiber ports is equal to the number of output fiber ports; however, in [11], the design of strictly non-blocking OXCs with different numbers of input and output fibers has been studied.) An OXC usually has two separate switching fabrics, the wavelength switching fabric that switches traffic at the wavelength level, and the grooming fabric that switches traffic at the time-slot level [12]. Since the grooming fabric can be viewed as a DXC, to avoid confusion, the cost is usually modeled in terms of the number of transceivers, instead of SADMs as in SONET ring networks. Note that both the transceiver and the SADM can be seen as terminating a lightpath into digital equipment; thus this cost measure can be generalized as the number of LTEs required.
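The role of the LTE count as a cost measure can be made concrete with a small sketch. Assuming, for illustration only, one transceiver per lightpath endpoint, the per-node LTE requirement of a candidate set of lightpaths is just a tally:

```python
# Sketch: per-node LTE (transceiver) requirement of a set of lightpaths,
# assuming one transceiver at each lightpath endpoint. Illustrative only.
from collections import Counter

def lte_count(lightpaths):
    """lightpaths: list of (src_node, dst_node) pairs; returns LTEs per node."""
    needed = Counter()
    for src, dst in lightpaths:
        needed[src] += 1  # transmitter terminating the lightpath at its source
        needed[dst] += 1  # receiver terminating it at its destination
    return needed

demo = lte_count([("A", "C"), ("A", "B"), ("B", "C")])
print(dict(demo))
```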
Other node capabilities related to the ones described above are possible. A node intermediate in power between an ROADM and an OXC, called the *Optical Add-Drop Switch* (OADX), has also been defined and is commercially available; however, we do not discuss it here because, from the grooming point of view, such a node is equivalent either to an ROADM or to an OXC. In [13], a node is modeled as trunk-switched and a generalized framework for analyzing *Trunk-Switched Networks* is presented. The authors introduce the concepts of trunks and channels, whose definitions are node-architecture dependent. Trunks can be viewed as forming a virtual layer, and an input channel can be switched to any output channel at a full-permutation node, as long as both channels are within the same trunk. For instance, without wavelength converters, a wavelength can be viewed as a trunk, and if time-slot switching is permitted, a time-slot can be viewed as a channel. In [14], by the same authors, a network with heterogeneous node architectures is studied. However, this framework does not address the case of a node that combines different node architectures. For example, while the added/dropped wavelengths of an OADM can interchange time slots through the switching fabric, the OADM also has some bypassing wavelengths (trunks) in which time slots cannot be switched.
As the above discussion shows, depending on the node architecture, a node can operate at the fiber, wavelength, or time-slot level, and at each level it may have full or limited functionality. In addition, some variants are worth mentioning. For instance, to avoid the cost of full-grooming DXCs, the grooming functionality can be separated into two levels, where the higher level is a coarse groomer that deals with high-speed traffic streams and the lower level is a finer groomer that deals with low-speed traffic streams. The authors of [15] consider such a situation, and remark that the proposed mixed-groomer node architecture is beneficial in terms of reducing both the switching cost and the number of wavelengths required. In the node architecture introduced in [16], an extra waveband layer is inserted between the wavelength and the fiber layers. In [17], the authors describe a Multicast-Capable Grooming Optical Cross-connect node architecture; its embedded strictly non-blocking splitter-and-delivery (SaD) switches support the proposed dynamic tree grooming algorithm. In [18], another Multicast-Capable Optical-Grooming
Switch architecture is introduced. Instead of using SaD switches, it has two stages of optical switching. Multicast traffic leaving the first stage optical switch is sent to a splitter bank and then switched by the second stage optical switch.
While different node designs afford different flexibility in designing grooming solutions, some general conclusions regarding cost can be made. Clearly the distinction between OADM and OXC is dictated by considerations of the physical topology: an OADM is useful only in a ring. Other than that, the optical part of the switch is characterized by the number of wavelengths, which in turn is determined by the transmission system being adopted. In the electronic part of the switch, however, there is room for more fine-grained design decisions. The DXC typically has less capacity than the combined capacity of every lightpath on every fiber port of the OXC. Thus the number of LTEs that form the ports of the DXC is often a good measure of the grooming capability, and also of the cost, of the switch. For a ring node, the SADMs embody the LTEs, whereas for general topologies, the number of optical transceivers or transponders is the equivalent quantity.
## 3 The Dynamic Traffic Grooming Problem
In this section, we make some general observations to indicate the scope of problems dealt with in the literature that we consider as coming under the umbrella of dynamic traffic grooming. Broadly, we include both problems that take an essentially dynamic approach to changing traffic and problems that convert this changing nature into a static design problem; in either case, the underlying problem should be motivated by the changing nature of traffic. Also, we consider a study to come under grooming only if the multiplexing of subwavelength traffic contributes to the cost model or constraints in some manner. We exclude literature from our scope if the only consequence of subwavelength traffic is the required multiplexing, because such studies are more appropriately considered to fall under the more established research areas of routing design and resource allocation with multiplexing. These considerations prompt us to consider out of scope studies such as [19], which is in effect a static grooming study, or [20], which is more appropriately considered a restoration strategy design at the lightpath level. Finally, we use the concepts developed in this section to present a categorization of the literature on this topic, which we go on to survey in detail in Section 4.
### 3.1 Design and Analysis Problems
In [21], we classified the dynamic traffic grooming problem into two broad categories: the *design problem* and the *analysis problem*. The distinction, while not absolute, is practically useful for understanding approaches to the problem and for categorizing them.
- The network *design* problem focuses on the state space; a time-varying one for the dynamic problem. Given a model of behavior of the network and some quantities of interest to optimize, the design problem attempts to find optimal settings of controllable parameters.
- The network *analysis* problem focuses on modeling the behavior. Given an *a priori* policy of network control under dynamic traffic events, such as arrival, departure, increment, decrement; the analysis problem attempts to develop a predictive model of some quantities of interest, under changing values of input parameters, such as arrival rates.
The two problems are complementary, because the design problem presupposes a model that allows computation of the goal under specific resource allocation and policy, and the analysis
problem presupposes an existing policy and resource allocation under given traffic conditions. In the area of dynamic traffic grooming, analysis problems considered in the literature generally address the blocking performance of the network under some given grooming policy, as experienced by arriving subwavelength traffic components. The design problems considered in the literature show a larger variety, both in the problems formulated and in the approaches taken, and we discuss more of them in the rest of this section. At the end of this section, in Table 1, we use the distinction between design and analysis problems as our first categorization of literature on the dynamic traffic grooming problem. In Section 4, we include surveys of both categories of literature.
### 3.2 Quantities of Interest in Design
We briefly list the basic quantities in terms of which the design problem is defined, with accompanying notation.
- Let $N$ be the set of nodes and $A$ be the set of directed fiber links in the physical topology graph. We assume that the physical topology does not change with time.
- Let $S$ be the set of traffic demands, denoted by the source-to-destination node pairs in the network; $S$ may consist of all distinct ordered pairs of nodes, but may also be a subset of them, because some node pairs may have no traffic between them.
- Let $\Lambda_{|N|\times|S|} = [\lambda_n^{(s)}(t)]$ be the traffic matrix, where $\lambda_n^{(s)}(t)$ is the time-varying traffic flow for the node-demand pair $(n, s)$. Specifically, $\lambda_n^{(s)}(t) = \lambda_s$ if at time $t$ the traffic demand $s$ is sourced from node $n$ and has magnitude $\lambda_s$; $\lambda_n^{(s)}(t) = -\lambda_s$ if the demand $s$ is destined to node $n$; and $\lambda_n^{(s)}(t) = 0$ if $n$ is neither the source nor the destination of $s$. We assume that every $\lambda_s$ is in units of a basic rate, and that the capacity of a wavelength is $C$, in the same units.
- Let the number of wavelength channels available on each physical fiber link be $W$; wavelengths are numbered from 1 to $W$ on each fiber.
- Let the matrix of the physical topology be $P_{|N|\times|A|} = [p_n^{(a)}]$, where $p_n^{(a)}$ is 1 if the fiber $a$ is sourced from node $n$, $-1$ if it is destined to $n$, 0 otherwise.
- Let $L$ be the set of lightpaths, and let $V_{|N|\times|L|\times W} = [v_{n,w}^{(l)}(t)]$ be the matrix of wavelength layered virtual topology, where $v_{n,w}^{(l)}(t)$ is 1 if at time $t$, lightpath $l$ is sourced from node $n$ and uses wavelength $w$, $-1$ if it is destined to $n$ and uses wavelength $w$, 0 otherwise.
- Let $R_{|A|\times|L|\times W} = [r_{a,w}^{(l)}(t)]$ represent how the virtual topology is routed on the physical topology and assigned wavelengths, where $r_{a,w}^{(l)}(t)$ is 1 if lightpath $l$ uses wavelength $w$ on fiber link $a$ at time $t$, 0 otherwise.
- Let $G_{|L|\times|S|} = [g_l^{(s)}(t)]$ represent how the traffic demands are routed on the virtual topology, where $g_l^{(s)}(t)$ is $\lambda_s$ if the traffic demand $s$ traverses lightpath $l$ at time $t$, 0 otherwise. This represents the case that traffic bifurcation is not allowed; additional variables can be introduced to represent bifurcated or diverse routing of traffic demands.
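To ground the notation, a minimal sketch (using NumPy, on an assumed 3-node path topology with a single demand) builds the physical topology matrix $P$ and one column of $\Lambda$; the data is illustrative only.

```python
import numpy as np

# Toy instance of the notation on a 3-node path A -> B -> C with one demand.
nodes = ["A", "B", "C"]           # N
arcs = [("A", "B"), ("B", "C")]   # A (directed fiber links)

# Physical topology matrix P: +1 at an arc's source node, -1 at its destination.
P = np.zeros((len(nodes), len(arcs)), dtype=int)
for j, (u, v) in enumerate(arcs):
    P[nodes.index(u), j] = 1
    P[nodes.index(v), j] = -1

# One demand s0 = (A, C) of magnitude 2 basic-rate units: the corresponding
# column of Lambda is +2 at the source node, -2 at the destination, 0 elsewhere.
lam = np.array([[2], [0], [-2]])

print(P)
print(lam.ravel())
```

Every column of $P$ (and of $\Lambda$) sums to zero, reflecting that each arc (and each demand) has exactly one source and one sink.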
In general terms, the *inputs* to the dynamic traffic grooming problem are:
(i) the traffic demand matrix $\Lambda$, a function of time,
(ii) the resource availability (includes physical topology $P$, number of wavelength channels $W$, etc.), generally not varying with time, and
(iii) the node architecture (limits to grooming capability, etc.), also generally not varying with time.
The *outputs* of the dynamic traffic grooming problem are:
(i) the virtual topology $V$,
(ii) the routing and wavelength assignment $R$ of the virtual topology on the physical topology $P$, and
(iii) the routing $G$ of the traffic demands on the lightpaths of the virtual topology.
In general, all of the outputs are functions of time.
### 3.3 Basic Constraints
#### 3.3.1 Constraints on the node architecture
- As we observed in Section 2, the total OEO processing capability of a node is directly constrained by the finite number of LTEs at the node. This is expressed as:
\[
\max \left( \sum_{l: v^{(l)}_{n,w} > 0} w v^{(l)}_{n,w}(t), \sum_{l: v^{(l)}_{n,w} < 0} w -v^{(l)}_{n,w}(t) \right) \leq \text{LTE}_n \quad \forall n
\]
where $\text{LTE}_n$ is the number of LTEs available at node $n$.
- In Section 2, we have shown that different node architectures may also result in different constraints on the feasible grooming solutions. For instance, the unavailability of wavelength converters imposes the wavelength-continuity constraint on the RWA problem. Because wavelength converters are expensive, most researchers assume that they are absent from the network. Consequently, lightpaths must obey the wavelength-continuity constraint: each lightpath $l$ occupies a single wavelength end to end, so $v^{(l)}_{n,w}(t)$ can be nonzero for only one wavelength index $w$, being 1 if at time $t$ lightpath $l$ is sourced from node $n$ on wavelength $w$, $-1$ if it is destined to node $n$ on that wavelength, and 0 otherwise.
Depending on the node architecture, there may be further constraints on the set of wavelengths a local transmitter can be tuned to. For example, practically, transmitters may be equipped with lasers with limited tunability (e.g., a recent OADM card provided by a major vendor can only be tuned to a band that has two predetermined wavelengths). However, if the wavelengths that are dropped/added are reconfigurable and completely selective, such a constraint is not required.
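The LTE constraint at the start of this subsection can be checked directly on the three-index array $V$. The sketch below (NumPy, illustrative data) counts lightpaths originating and terminating at each node and compares against a per-node LTE budget.

```python
import numpy as np

def lte_constraint_ok(V, lte):
    """V: |N| x |L| x W array of v_{n,w}^{(l)}(t) (+1 at source, -1 at dest).
    lte: per-node LTE budget. Checks max(#originating, #terminating) <= LTE_n."""
    originating = np.maximum(V, 0).sum(axis=(1, 2))   # lightpaths sourced at n
    terminating = np.maximum(-V, 0).sum(axis=(1, 2))  # lightpaths ending at n
    return np.all(np.maximum(originating, terminating) <= lte)

# 3 nodes, 2 lightpaths, 2 wavelengths:
# lightpath 0 runs A -> C on wavelength 0; lightpath 1 runs B -> C on wavelength 1.
V = np.zeros((3, 2, 2), dtype=int)
V[0, 0, 0], V[2, 0, 0] = 1, -1
V[1, 1, 1], V[2, 1, 1] = 1, -1

ok_budget = lte_constraint_ok(V, np.array([1, 1, 2]))
too_tight = lte_constraint_ok(V, np.array([1, 1, 1]))  # node C terminates 2
print(ok_budget, too_tight)  # True False
```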
#### 3.3.2 Constraints on the RWA problem
To ensure a correct RWA (flow conservation of each lightpath over the physical topology, taken wavelength by wavelength), we can use the following constraint or similar:
\[
P_{|N| \times |A|} \, R_{|A| \times |L| \times W} = V_{|N| \times |L| \times W} \tag{2}
\]
To ensure one wavelength on a fiber is assigned to at most one lightpath, we can use:
\[
\sum_l r^{(l)}_{a,w}(t) \leq 1 \quad \forall a, w \tag{3}
\]
#### 3.3.3 Constraints on the traffic routing
We use $V_{|N| \times |L|} = [v_n^{(l)}(t)]$ to denote the virtual topology at time $t$: $v_n^{(l)}(t)$ is 1 if lightpath $l$ is sourced from node $n$ at time $t$, $-1$ if lightpath $l$ is destined to node $n$, 0 otherwise. Note that the virtual topology is the sum of the wavelength-layered virtual topology over the wavelengths, that is:
$$V_{|N| \times |L|} = \sum_w V_{|N| \times |L| \times W} \tag{4}$$
The following constraint ensures the traffic demands are properly routed on the virtual topology.
$$V_{|N| \times |L|} \, G_{|L| \times |S|} = \Lambda_{|N| \times |S|} \tag{5}$$
To ensure the capacity of a lightpath is obeyed, we have:
$$\sum_s g_l^{(s)}(t) \leq C \quad \forall l \tag{6}$$
### 3.4 Static and Dynamic Formulations of Design
#### 3.4.1 Static Formulation: Resource Allocation
While traffic demands change with time, the change may be partly or wholly predictable. As an extreme case, the nature of variation of traffic with time may be completely deterministic. If the value of the traffic demands at all times (over a period of interest) is known with certainty beforehand, the problem can be seen as some variation of a general resource allocation problem, and a static formulation of the problem is most appropriate.
In this model, the traffic is deterministically given over some period of interest, possibly as a sequence of traffic matrices, $\Lambda(t_0), \ldots, \Lambda(t_n)$. The period may be infinite, by specifying that the pattern of traffic matrices repeats; this is essentially a scheduling problem. This model is amenable to an ILP formulation [56]. One obvious approach to such a problem is to eliminate the effects of time-variation altogether by simply designing for the peak values each traffic component assumes in the entire set of matrices. However, as shown in [23, 24], using the traffic matrix formed by the peak rates may result in requiring an unnecessarily large amount of resources. The reason is the space-time nature of the dynamic traffic grooming problem, which is left out of consideration in this approach. The traffic matrix of peak rates is an overestimation of the traffic demands, because the dynamic nature of traffic spreads peak rates out along the time dimension. Thus this problem, while a static problem, is distinct from the static grooming problem.
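A two-demand toy example illustrates the overestimation: if two demands that share a link each peak at 2 units, but at different time epochs, the peak-rate matrix asks for twice the capacity actually needed at any instant. The numbers below are illustrative.

```python
import numpy as np

# Two demands sharing one link, observed at two time epochs t0 and t1.
t0 = np.array([2, 0])   # rates of (s1, s2) at epoch t0
t1 = np.array([0, 2])   # rates of (s1, s2) at epoch t1

peak_design = np.maximum(t0, t1).sum()   # design for per-demand peaks: 2 + 2
true_need = max(t0.sum(), t1.sum())      # worst concurrent load on the link
print(peak_design, true_need)  # 4 2
```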
#### 3.4.2 Dynamic Formulation: Policy Design
On the other hand, unpredictability or uncertainty may be seen as an essential characteristic of the traffic model. In such cases, the dynamic nature of the problem needs to be explicit in the problem formulation. The problem must be seen as one of supplying a policy design for the network, that is, an algorithm that the network control plane can employ to make decisions in response to traffic change events, such as arrivals, departures, increments, and decrements, with state and action space defined as follows:
**State space:** Since traffic events can occur and network actions can be taken only at discrete points in time, we represent $\Lambda(t)$ as a discrete-time temporal process: $\Lambda_i$ is the traffic matrix at time epoch $t_i$ (a time epoch is defined as an instant at which a dynamic traffic event occurs). Each $\Lambda_i$ is then associated with a virtual topology $V_i$, a routing and wavelength assignment $R_i$, and a
traffic routing $G_i$. The tuple $\{V_i, R_i, G_i\}$ is referred to as the grooming solution at time $t_i$. Then, the network state at time $t_i$ can be described by the tuple $\{\Lambda_i, V_i, R_i, G_i\}$.
**Action space:** According to the layer it will affect, the actions taken by the network control algorithm can be classified as follows:
- Call Admission Control (CAC) actions, where two possible actions are **reject** and **accept**. If a traffic change is accepted, actions on other layers may follow. Note that while we use the term “call”, the events may be more general ones than arrivals of entire subwavelength traffic demands; for example it may be an increment or decrement to the magnitude of a traffic connection already established. However, the network action must still start with a decision regarding whether to accept or reject the increment.
- Network layer routing actions. Once a change is accepted, the changed traffic will be either routed on the existing virtual topology, or it will trigger virtual layer actions. The actual route of the subwavelength call on the virtual topology must also be determined according to some policy. When the change is in the nature of a traffic decrease, network layer action may also be triggered to rearrange the routing of remaining traffic, see below.
- Virtual layer setup, teardown, or routing actions. To route the changed traffic component, new lightpaths may be set up: either a direct lightpath, or a combination of new lightpaths, which may be further utilized in conjunction with existing lightpaths to route the changed traffic component. For new lightpaths, routing and wavelength assignment is performed. Similarly, when traffic decreases, lightpaths may also be torn down in response.
- Re-routing Actions. Furthermore, if disruption of existing traffic is allowed, the actions may include rerouting (or even terminating) some existing traffic. Existing subwavelength traffic may be rerouted on the virtual topology, or existing lightpaths may be rerouted on the physical topology.
For each action, $\{V_i, R_i, G_i\}$ will change to $\{V_j, R_j, G_j\}$. The goal of the policy will be always to maximize some reward function, akin to the objective function for a static formulation; we discuss some possible goals later in this section.
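A minimal, purely hypothetical policy combining the CAC, routing, and virtual-layer actions above can be sketched as follows; it considers only single-link demands and direct lightpaths, and every name and threshold is an illustrative assumption, not any policy from the surveyed literature.

```python
# Hypothetical toy policy: groom onto an existing direct lightpath if it has
# spare capacity; otherwise set up a new direct lightpath if both endpoints
# still have a free transceiver (LTE); otherwise reject (CAC action).

def handle_arrival(state, src, dst, size, C=4, lte_budget=2):
    """state: dict mapping (src, dst) -> used capacity on a direct lightpath."""
    if (src, dst) in state:                       # network-layer routing action
        if state[(src, dst)] + size <= C:
            state[(src, dst)] += size
            return "groomed"
        return "rejected"                         # CAC: reject (no rerouting tried)
    # Virtual-layer setup action, gated by the per-node transceiver budget.
    def used(n):
        return sum(1 for (a, b) in state if n in (a, b))
    if used(src) < lte_budget and used(dst) < lte_budget:
        state[(src, dst)] = size
        return "new lightpath"
    return "rejected"

net = {}
r1 = handle_arrival(net, "A", "B", 3)
r2 = handle_arrival(net, "A", "B", 1)
r3 = handle_arrival(net, "A", "B", 1)
print(r1, r2, r3)  # new lightpath groomed rejected
```

A real policy would also consider multi-hop grooming and re-routing actions; this sketch only shows how a state/action formulation turns into executable control logic.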
Referring back to our discussion regarding Fig. 1, we see that the physical topology at the lowest layer does not change with time, whereas the traffic demands to be carried, at the highest layer, do change with time. Thus dynamic traffic grooming strategies can be seen as the algorithms executed by the network to perform a time-varying mapping of the traffic onto the network resources, using routing, wavelength assignment, and grooming, so as to satisfy the demands while meeting some goal of network operation such as operating cost minimization or maximization of utilization.
### 3.5 Models of Non-deterministic Traffic Variation
For the dynamic formulation, traffic variations are not wholly predictable, but the time-variation of traffic may nevertheless be modeled or characterized to some extent. Different models can be designed to reflect realistic network conditions; we list a few below.
- $\Lambda(t)$ is a Poisson process, and the model is simply one of subwavelength traffic component arrival/departure. In the general context of dynamic traffic grooming, it is reasonable to assume that $|\Lambda(t) - \Lambda(t + \Delta t)|$ is small for a short time period $\Delta t$, which motivates this model.
- Traffic demands prefer to be serviced within time windows [25]. This is a generalization of the simple arrival-departure model: instead of each traffic component requiring service at the instant it arrives (or as soon after as possible), every traffic component specifies a window of time within which it must be carried. The arrival process may again be Poisson, or some other process.
- Traffic demands are restricted by specified bounds. Such bounds may be provided by the traffic components themselves, or they may be imposed by the available resources, for example by the number of SADMs available at each node (referred to as $i$-allowable traffic in [23]). Let $\text{SADM}_n$ be the number of SADMs at node $n$; then the traffic matrices must satisfy:
$$\max \left( \sum_{s:\, \lambda_n^{(s)}(t) > 0} \lambda_n^{(s)}(t), \; \sum_{s:\, \lambda_n^{(s)}(t) < 0} -\lambda_n^{(s)}(t) \right) \leq \text{SADM}_n \cdot C \quad \forall n$$
- Traffic components change in magnitude over time, in increments and decrements. The process by which increments and decrements occur may be Poisson or some other process.
- Entire traffic matrices are specified as in the deterministic model, but the time epochs $t_i$ are not deterministic, and vary according to some random process.
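The SADM bound above can be checked per traffic matrix. A sketch with illustrative numbers (NumPy assumed):

```python
import numpy as np

def admissible(Lam, sadm, C):
    """Check the SADM bound: at every node, both the total sourced and the
    total terminated traffic must fit within SADM_n * C (t held fixed)."""
    sourced = np.maximum(Lam, 0).sum(axis=1)
    sunk = np.maximum(-Lam, 0).sum(axis=1)
    return np.all(np.maximum(sourced, sunk) <= sadm * C)

# 3 nodes, 2 demands: (A -> C, 3 units) and (B -> C, 2 units); C = 4.
Lam = np.array([[3, 0], [0, 2], [-3, -2]])

ok = admissible(Lam, sadm=np.array([1, 1, 2]), C=4)
not_ok = admissible(Lam, sadm=np.array([1, 1, 1]), C=4)  # node C sinks 5 > 4
print(ok, not_ok)  # True False
```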
### 3.6 Design Goals
The goal of either resource allocation or policy design is to minimize some measure of the cost of provisioning and operating the network, and/or to maximize the benefit derived from the network. This can be embedded as cost function(s) in a static formulation, and as reward function(s) in a decision formulation. In the literature, different goals have been articulated; some representative ones include:
- Minimize the network cost; these are more suitable for the static, resource-allocation view:
  - Number of ports at network nodes (converters, LTEs, wavelengths).
  - Amount of OEO processing.
- Maximize the revenue by providing better service or better utilization of the network resources; more appropriate for the dynamic, policy-design view:
  - Minimize the blocking probability.
  - Minimize the provisioning time (time to set up a connection for an arrival, traffic delay, etc.).
  - Minimize the disruption to traffic already being carried.
  - Minimize the unfairness (e.g., traffic demands with different bandwidth requests should experience approximately the same blocking probability).
These goals usually conflict with one another, making it impossible to optimize them all simultaneously. Therefore, some kind of trade-off or preference must be considered. For example, in [26], network architectures for WDM SONET rings that have the minimal SADM cost are studied, subject to a limited number of wavelengths. In [56], an MILP for the dynamic traffic grooming problem with the objective of minimizing the SADM cost is solved in two phases, where in the first phase the number of wavelengths is minimized. In [27], the authors propose a
connection admission control mechanism that provides good fairness without over-penalizing the overall blocking probability. In [23, 28], the objective is to design networks with the minimal SADM costs while keeping the existing traffic undisrupted (non-blocking in the strict sense).
### 3.7 Literature Classification
Based on the observations we have made in this section, we present an organized view of the literature on the dynamic traffic grooming problem in Table 1. Because the categories are not all orthogonal, several papers appear in multiple places in this table; the table should therefore be read as an organization rather than a strict categorization.
Moreover, some studies address more than one category of problem. For example, consider the variants of blocking probability that are considered in the literature. The blocking characteristic of a network can be classified as strict-sense non-blocking, wide-sense non-blocking, or rearrangeably non-blocking (*e.g.*, in [23]). If the network resources can guarantee strict-sense non-blocking operation, then all new arrivals will be satisfied, and the policy design problem need not be addressed since it is trivial. However, if network cost considerations dictate accepting lesser blocking performance, i.e., designing a wide-sense non-blocking or rearrangeably non-blocking network, both the problems of resource design and policy design (to route new arrivals) are likely to be addressed.
## 4 Literature Organization
In this section, we present detailed surveys of the literature. Table 2 provides a quick summary of most of the papers making up the dynamic grooming literature we survey.
### 4.1 Analysis
As we have discussed in the previous sections, the resource and policy design problems are in essence optimization problems. In order to evaluate the performance (usually, the blocking probability) of a design, practitioners often resort to massive simulations. As simulation results are generally specific to the input (arrival and departure rates, etc.) and time consuming to obtain, analytical models are not only interesting in their own right but also practically meaningful. In the literature, the metric of greatest interest is the blocking probability, i.e., the ratio of the number of blocked (rejected) arrivals to the total number of arrivals. In order to accept an arrival, the subproblems described in Fig. 1 must be solved. We distinguish two cases, the single-hop case and the multi-hop case (referred to as dedicated-wavelength TDM and shared-wavelength TDM in [13]). In the former case, a new arrival is accepted if it can be routed on a single lightpath (either an existing one or a new one to be established) from source to destination. In the latter case, the arrival is allowed to traverse multiple lightpaths, which may be a combination of existing and newly established lightpaths. In addition, some routing and wavelength assignment algorithm must be assumed, e.g., the shortest-path routing and random wavelength assignment algorithms considered in [13, 14]. As in queuing networks, we also distinguish single-rate and multi-rate requests. In the single-rate model, all traffic demands have the same magnitude, which simplifies the analysis significantly. In grooming networks, however, the multi-rate model may be more realistic, because traffic demands are usually subwavelength and thus in units of some basic rate (say, OC-3).
Another difficulty comes from the traffic model. It is well known that the Poisson model fails to capture the self-similarity of traffic patterns in real networks. In addition, in grooming networks, traffic demands usually traverse multiple physical/logical hops, so link-load correlation becomes an important issue.
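Since analytical blocking models are usually validated against simulation, a crude single-lightpath Monte Carlo conveys the flavor of such experiments. All parameters below are arbitrary illustrations, not any cited model.

```python
import heapq
import random

def blocking_probability(C, sizes, lam=5.0, mu=1.0, n_arrivals=20000, seed=1):
    """Monte Carlo estimate of blocking on one lightpath of capacity C under
    multi-rate Poisson arrivals (sizes drawn uniformly from `sizes`) with
    exponential holding times. Returns blocked / total arrivals."""
    rng = random.Random(seed)
    t, used, blocked = 0.0, 0, 0
    departures = []  # min-heap of (departure_time, size)
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)
        while departures and departures[0][0] <= t:  # release finished calls
            _, s = heapq.heappop(departures)
            used -= s
        size = rng.choice(sizes)
        if used + size > C:
            blocked += 1                             # arrival is blocked
        else:
            used += size
            heapq.heappush(departures, (t + rng.expovariate(mu), size))
    return blocked / n_arrivals

p_small = blocking_probability(C=4, sizes=(1, 3))
p_big = blocking_probability(C=16, sizes=(1, 3))
print(p_small > p_big)  # more capacity, less blocking
```

Note how the larger request size (3) is blocked more often than the smaller one (1) when capacity is scarce, which is exactly the fairness concern mentioned in Section 3.6.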
- **Analysis (of blocking probability)**
  - Virtual topology is assumed to be ...
    - static, given: opaque [29, 30]
    - dynamic, strategy given: single-hop [31]; multi-hop [32, 29, 27, 14, 30, 33, 34, 36]
  - Specific modeling technique ...
    - link load correlation: correlated [14, 30, 33, 27, 29, 35]; uncorrelated [31, 32, 34, 36]
    - traffic rate model: multi-rate Poisson [27, 31, 32, 33, 30, 34]; single-rate Poisson [14, 29]
- **Design (performance optimization)**
  - Traffic variation modeled as ...
    - arrival-departure model: Poisson model [29, 49]; incremental [37]; elastic [61]
    - traffic matrix constraints: peak constraint [23, 38]
  - Objective of design is ...
    - blocking probability: strict sense [23, 28, 39, 40, 41, 42, 43, 44, 57, 58, 59, 60, 62, 48, 63, 64, 65]; wide sense [26, 45]; rearrangeable [38, 46, 26, 45, 47, 41]
    - fairness [48, 27, 9, 29, 49]
    - OEO costs: number of LTEs [37, 23, 26, 45, 39, 50, 51, 52, 54, 56, 64, 24]; number of wavelengths [26, 45, 56, 24, 34]; amount of OEO processing [55]
  - Virtual topology in solution is allowed to be ...
    - static [53]
    - one per traffic pattern
    - a sequence, or schedule, of virtual topologies [38]

Table 1: Variants of the Dynamic Grooming Problem
All these challenges and difficulties make exact queuing analysis intractable. Accordingly, researchers have made different assumptions and simplifications. In the following subsections, we survey related work in this field.
#### 4.1.1 Multihop Model with Correlation
As previously mentioned in Section 2, in [13], Srinivasan et al. presented a framework for analyzing the performance of Time-Space Switched optical networks. In [14], this framework is applied to networks with heterogeneous node architectures. Assuming a single-rate model, the blocking probability for a path with $z$ links is computed recursively from a two-hop path model. The authors also assume *Markovian correlation*, i.e., the traffic on a link depends only on its previous link. In the homogeneous case, the trunk distribution is computed from the channel distribution on a two-link path, which can be characterized as a three-dimensional Markov chain. In the heterogeneous case, different nodes may have different views of the channel/trunk distribution. Specifically, the trunk distribution as viewed by the second node, given the trunk and channel distribution viewed by the first node, depends on how the channels are distributed across the trunks
at the two nodes. Two mappings, namely architecture-independent mapping and architecture-dependent mapping, are proposed to find the conditional probability. In [33], the authors extend the work to the multi-rate case.
Washington et al. study the blocking probability of tandem networks, i.e., a unidirectional path virtual topology [30]. The authors consider the multi-rate arrival model on existing lightpaths. A path network is first decomposed into subsystems consisting of two adjacent nodes and analyzed exactly by a modification of Courtois' method. The first step of Courtois' method, which requires solving a system of equations, is replaced by solving a multi-rate model for the exact conditional steady-state probabilities. After that, the link-load correlation is taken into account by an iterative method.
The authors of [35] study the performance of traffic grooming networks. Two types of grooming networks are distinguished: constrained grooming networks, where each node of the network is a wavelength-selective crossconnect (WSXC), and sparse grooming networks, where some nodes are wavelength-grooming crossconnects (WGXCs). WSXCs are equipped with both OXCs, which perform switching at the wavelength level, and OADMs, which groom traffic streams onto the added/dropped wavelengths. The authors start with a simple two-hop single-wavelength system. Arrivals are multi-rate traffic requests. The network state is then described by \((n_1, \ldots, n_g, m_1, \ldots, m_g, l_1, \ldots, l_g)\), where \(n_j\) is the number of traffic demands that traverse the first link only and request \(j\) units of capacity, \(m_j\) is the number that traverse the second link only and request \(j\) units of capacity, and \(l_j\) is the number that traverse both links and request \(j\) units of capacity. The steady-state distribution can then be obtained. Using the two-hop single-wavelength capacity-correlated model, a more complex and realistic multi-hop single-wavelength model is solved. The application of this model to performance analysis in general networks is also demonstrated. The main novelty of the paper is taking multi-rate requests and capacity correlation into account. However, routing and wavelength assignment is not addressed, because the capacity correlation model is specified for single-wavelength systems.
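The size of such state spaces can be explored by brute force. The sketch below enumerates the feasible $(n_j, m_j, l_j)$ states of a two-hop, single-wavelength system for a small capacity and rate set; the parameters are illustrative only, and real analyses work with the Markov chain over these states rather than mere enumeration.

```python
from itertools import product

def feasible_states(C, rates):
    """Enumerate states (n_j, m_j, l_j) of a two-hop, single-wavelength system:
    n_j calls of rate j on link 1 only, m_j on link 2 only, l_j on both links.
    A state is feasible if neither link's carried capacity exceeds C."""
    g = len(rates)
    max_calls = C // min(rates) + 1
    states = []
    for counts in product(range(max_calls), repeat=3 * g):
        n, m, l = counts[:g], counts[g:2 * g], counts[2 * g:]
        link1 = sum(j * (nj + lj) for j, nj, lj in zip(rates, n, l))
        link2 = sum(j * (mj + lj) for j, mj, lj in zip(rates, m, l))
        if link1 <= C and link2 <= C:
            states.append((n, m, l))
    return states

S = feasible_states(C=2, rates=(1, 2))
print(len(S))  # 22 feasible states even for this tiny system
```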
### 4.1.2 Uncorrelated models
In [31], Xin et al. study the blocking performance analysis problem on traffic grooming in single hop mesh networks. A closed-form formula is derived by some simplifications. For example, a single-wavelength link (SWL) blocking model is introduced and the multi-rate arrivals are converted into bulk arrivals and approximated departures. The authors also assume that overflow traffic is Poisson. Then a reduced load model is used to compute the end-to-end blocking probability.
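As a rough illustration of the reduced load idea (not the exact SWL model of [31]), the sketch below estimates end-to-end blocking with the Erlang B formula per link and the standard link-independence fixed point. The function names and toy inputs are ours, and the Poisson-overflow simplification of the paper is not modeled.

```python
from math import prod

def erlang_b(offered_load, capacity):
    """Erlang B blocking probability, computed iteratively for numeric stability."""
    b = 1.0
    for k in range(1, capacity + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def reduced_load_blocking(routes, capacities, demands, iters=50):
    """Fixed-point reduced-load approximation.

    routes:     dict route_id -> list of link ids
    capacities: dict link_id -> number of channels
    demands:    dict route_id -> offered load (Erlangs)
    Returns per-link and per-route blocking estimates.
    """
    link_block = {l: 0.0 for l in capacities}
    for _ in range(iters):
        # Offered load on each link, thinned by the blocking on the other
        # links of every route crossing it (link-independence assumption).
        link_load = {l: 0.0 for l in capacities}
        for r, links in routes.items():
            for l in links:
                thin = prod(1.0 - link_block[m] for m in links if m != l)
                link_load[l] += demands[r] * thin
        link_block = {l: erlang_b(link_load[l], capacities[l])
                      for l in capacities}
    route_block = {r: 1.0 - prod(1.0 - link_block[l] for l in routes[r])
                   for r in routes}
    return link_block, route_block
```

On a single two-channel link offered one Erlang, the fixed point reduces to plain Erlang B, which is a useful sanity check.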
By the same authors, the work in [32] is an extension of [31] that takes multi-hop routing into consideration. The authors propose a simple admission algorithm at a source node for each incoming traffic demand. A routing strategy is given such that the SWL model introduced in [31] can be extended to include multi-hop traffic arrivals. Instead of the sequential overflow model, a random selection of two-hop paths for the overflow multi-hop traffic demand is performed.
The blocking performance of multi-hop traffic grooming networks is also studied by Yao et al. [36]. The authors simplify the problem by decomposing it into different levels, namely the alternate path, connection route, lightpath and link levels. The proposed model works as follows. For a given source-destination pair, some link-disjoint alternate paths are pre-determined and the s-d pair is blocked if all alternate paths are unable to carry it. On an alternate path, traffic can be electronically processed at some grooming nodes. The grooming node selection defines the route of the traffic (i.e., the set of lightpaths the route consists of). To select grooming nodes, the authors introduce the load sharing policy, which tries the direct route (without intermediate grooming nodes) first and randomly selects a candidate route if the direct one fails. Accordingly, a path is blocked if all candidate routes are unable to satisfy it, and a route is blocked if any of the lightpaths it consists of is unable to satisfy it. Assuming that wavelength conversion capability is absent, a lightpath can be carried if there is an available single wavelength path (i.e., an available wavelength on all the links along the lightpath). The availability of a single wavelength path is in turn decided by the availability of the set of single wavelength links it consists of, i.e., the existence of a set of common channels (time slots) that can satisfy the amount of capacity the s-d pair requires. It should be noted that this model makes some important assumptions. First, the single wavelength links that constitute a single wavelength path are assumed to be independent. Similarly, the lightpaths that constitute a route are assumed to be independent. In addition, the overflow traffic is assumed to be Poisson.
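The hierarchical decomposition above composes probabilities level by level under the stated independence assumptions. The following sketch makes that composition explicit; the data layout (links as ids, routes as lists of lightpaths, alternate paths as lists of candidate routes) and the per-link free-wavelength probabilities are our own illustrative inputs, not values from [36].

```python
def lightpath_available(links, p_wl_free, num_wavelengths):
    """P(at least one wavelength is free on every link of the lightpath),
    assuming link independence and no wavelength conversion."""
    # P(a given wavelength is free on all links of the lightpath)
    p_path_free = 1.0
    for l in links:
        p_path_free *= p_wl_free[l]
    # P(some wavelength works) = 1 - P(all W wavelengths fail)
    return 1.0 - (1.0 - p_path_free) ** num_wavelengths

def route_blocked(route_lightpaths, p_wl_free, W):
    """A route is blocked if any of its lightpaths cannot be carried
    (lightpaths assumed independent)."""
    p_ok = 1.0
    for lp in route_lightpaths:
        p_ok *= lightpath_available(lp, p_wl_free, W)
    return 1.0 - p_ok

def pair_blocked(alternate_paths, p_wl_free, W):
    """An s-d pair is blocked if every alternate path (each a list of
    candidate routes) fails; candidate routes assumed independent."""
    p_block = 1.0
    for candidate_routes in alternate_paths:
        p_path_block = 1.0
        for route in candidate_routes:
            p_path_block *= route_blocked(route, p_wl_free, W)
        p_block *= p_path_block
    return p_block
```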
### 4.1.3 Other models
The study in [61] deals with traffic models in traffic grooming networks. The aim of the paper is to investigate how traffic elasticity, the reactivity of traffic to a changing environment (e.g., load), impacts grooming. The authors argue that even in core networks, traffic is elastic in nature, so it is inappropriate to model it as traditional circuit-switched traffic. Specifically, two traffic grooming policies, virtual-topology first (\textit{VirtFirst}), which prefers using existing lightpaths, and optical-level first (\textit{OptFirst}), which prefers setting up new lightpaths, are studied under two traffic models that capture features of elastic traffic. The first model, referred to as time-based (TB) and the less complex of the two, captures the decrease of traffic throughput under congestion. The more complex model, referred to as data-based (DB), captures the fact that the more congested the network, the longer flows remain in it. Different combinations of the traffic models and the grooming policies are simulated using the simulator named GANCLES, and the average throughput per flow, the starvation probability, the ratio between the opening rate of optical paths and the arrival rate of flows at the IP level, and the average number of links per optical path are compared. The simulation results show that the interaction between the IP and optical layers gives rise to some complex behaviors, which suggests that neither \textit{OptFirst} nor \textit{VirtFirst} is suited for the management of an IP over WDM grooming network, because neither takes that interaction into consideration.
As we have mentioned in section 3.5, [25] studies the ‘sliding scheduled traffic model’. Specifically, a traffic demand is given by a tuple \(\{s, t, n, l, r, \tau, p\}\), where \(s\) and \(t\) are the source and destination respectively, \(n\) is the bandwidth requirement, \(l\) and \(r\) are the starting and ending times of the window respectively, \(\tau\) is the duration of the request, and \(p\) is a binary value representing the priority of the demand. The traffic demand is required to be scheduled within the time window \(l\) to \(r\) (i.e., it must start within the interval \(l\) to \(r - \tau\)); otherwise, it needs to be rearranged. The traffic grooming problem then conceptually consists of two parts, the scheduling part and the grooming part. The scheduling part decides the starting time of each traffic demand so that the number of demand pairs overlapping in time is minimized. The grooming part then performs a time-window-based grooming algorithm. It first chops time into non-overlapping windows, each consisting of at least two overlapping traffic demands, by an adaptation of the maximum independent set algorithm over an interval graph. The traffic demands are classified into subsets according to their priorities and whether they straddle time windows. For each subset, a modified shortest path routing algorithm is used to groom the traffic demands. Finally, traffic demands that cannot be satisfied due to insufficient resources are rearranged to another time so that they can be accommodated and finished as early as possible.
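To make the scheduling part concrete, the sketch below picks a start time for each sliding demand inside its window \([l, r - \tau]\). Note that [25] uses a maximum-independent-set-based scheduler; the greedy overlap-minimizing rule here is a simplified stand-in of our own, using integer time steps for brevity.

```python
def overlaps(s1, e1, s2, e2):
    """Do the scheduled intervals [s1, e1) and [s2, e2) overlap in time?"""
    return s1 < e2 and s2 < e1

def schedule_sliding_demands(demands, step=1):
    """Greedy start-time selection for sliding scheduled demands.

    demands: list of (l, r, tau) windows; each demand must start in
    [l, r - tau]. Returns the chosen start times, greedily minimizing
    the number of overlapping pairs as each demand is placed.
    """
    chosen = []   # list of (start, end) already placed
    starts = []
    for (l, r, tau) in demands:
        best_s, best_cost = None, None
        s = l
        while s <= r - tau:
            cost = sum(overlaps(s, s + tau, cs, ce) for cs, ce in chosen)
            if best_cost is None or cost < best_cost:
                best_s, best_cost = s, cost
            s += step
        chosen.append((best_s, best_s + tau))
        starts.append(best_s)
    return starts
```

Two identical demands with window \([0, 10]\) and duration 5 end up disjoint, at starts 0 and 5.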
The space-time traffic grooming algorithm is compared with a tabu search algorithm that uses fixed alternate routing and the authors claim that the former algorithm outperforms the latter one in terms of the number of lightpaths.
### 4.2 Design
A design problem usually consists of two phases: building a model and solving it. The two major concerns are therefore how accurate the model is and how efficiently it can be solved. In dynamic traffic grooming networks, one important challenge that impacts the accuracy of the model is how to model traffic variations. As we have seen in Table 1, different traffic models have been proposed. As described above, the problem can be formulated as an ILP when the traffic model is deterministic, or the traffic can be treated as changing entirely to a new traffic matrix while the network is running [46].
As another concern, solving the model is also challenging. Since the general static traffic grooming problem is NP-Complete [54, 55], the general dynamic traffic grooming problem with the static formulation, which has significantly more time-dependent variables, is obviously also NP-Hard. In [55], we show that the static problem may even be inapproximable. Because of this, most research focuses on heuristic approaches.
### 4.2.1 Objective is blocking performance
As we have discussed in section 4.1, it is generally very hard (if not impossible) to find a closed-form solution to the blocking probability in grooming networks. Therefore, many researchers propose heuristic traffic grooming algorithms and compare their performance in terms of blocking probability.
### 4.2.1.1 Strict sense
Because of the heuristic nature of grooming algorithms, some simple policies (or rules of thumb) may provide insight into the whole problem. When there is a new arrival, two simple and straightforward policies are (i) setting up a new lightpath or (ii) using the existing virtual topology. In [61], these two policies are studied under two traffic models that capture features of elastic traffic; we discuss [61] in detail in Section 4.1.3. The above grooming policies are classified as operation oriented policies in [62]; such policies specify the operations that will be performed to accommodate an arrival. The authors also introduce the IP Layer First (ILF), Optical Layer First (OLF) and One Hop First (OHF) policies, which fall into the same class. Another class is the objective oriented policies, which address explicit optimization goals to be achieved by combinations of basic operations. For example, the MinTHV, MinTHP, MinLP and MinWL policies of [57] fall into the objective oriented class. In [62], the authors propose a path inflation control (PIC) strategy that combines different operations by taking the instantaneous link state into consideration. The network in the paper has two layers, the IP/MPLS layer and the optical layer. The Path Inflation Index (PII) is used to monitor the congestion of the network. Based on the PII, the algorithm chooses between establishing a new lightpath and routing on the existing IP topology for an LSP request. Specifically, routing on the existing IP/MPLS layer is preferred if the route is not too much longer than the shortest path. The main rationale for this strategy is that the ILF policy may result in a path much longer than the shortest path, thus significantly increasing the congestion of the network, while the OLF and OHF policies may exhaust the network resources (transceivers and wavelengths) very quickly.
In [63], the same authors extend the idea of PIC to provide differentiated services based on priority: high priority LSP requests should have lower blocking probability than low priority requests. Again, the PII is calculated for each LSP request. As in the previous paper, a request will be routed on a new lightpath if the route on the IP/MPLS layer is too long. Low priority requests for which no lightpath can be set up, due to the wavelength or transceiver limit, are blocked; high priority requests in the same situation are instead routed on the IP/MPLS layer. The proposed algorithm (referred to as algorithm A in the paper) is further modified by introducing the Average Path Inflation Index (APII), which takes the holding time of LSP requests into account. This algorithm (referred to as algorithm B in the paper) gives those LSP requests that are blocked by algorithm A a chance to be routed if the APII is not too large. The authors claim that algorithm B can be extended to handle more than two priority classes; however, numerical results are provided only for two classes.
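A toy version of the PIC-style decision logic can be sketched as follows. The threshold value, the function names, and the exact fallback rules are our assumptions for illustration; [62] and [63] define the PII and APII over measured link state rather than simple hop counts.

```python
def path_inflation_index(ip_route_len, shortest_len):
    """PII: how much longer the IP/MPLS-layer route is than the
    physical shortest path (1.0 means no inflation)."""
    return ip_route_len / shortest_len

def admit_lsp(ip_route_len, shortest_len, can_open_lightpath,
              high_priority, pii_threshold=1.5):
    """Toy PIC-style admission decision (threshold is illustrative):
    prefer the existing IP/MPLS route unless it is too inflated; fall
    back to a new lightpath; a high-priority request may still use the
    inflated IP route rather than be blocked."""
    pii = path_inflation_index(ip_route_len, shortest_len)
    if pii <= pii_threshold:
        return "route_on_ip_layer"
    if can_open_lightpath:
        return "open_new_lightpath"
    return "route_on_ip_layer" if high_priority else "block"
```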
Sabella et al. propose a strategy for dynamic routing in GMPLS networks [53]. A GMPLS network is modeled as a multi-layer network consisting of an IP/MPLS layer and a logical layer. Assuming that the logical layer is given, the authors study the problem of how to route a new LSP request. The proposed strategy has two phases. First, an IP/MPLS topology is considered, where there is an MPLS link between two nodes if and only if there is at least one lightpath interconnecting them. Based on this topology, a proposed routing algorithm extends the least resistance routing weight method [44] to the multi-layer GMPLS paradigm, where subwavelength LSPs are routed. The second phase is the grooming phase, where the LSP is groomed into lightpaths. Two policies, the packing policy that prefers the most loaded lightpath and the spreading policy that prefers the least loaded lightpath, are addressed. Extensive simulation results show that the strategy named Multi Layer Least Resistance Packing (ML-LRP) outperforms the other variants.
In [28], the authors use a genetic algorithm to find a grooming solution in a strictly non-blocking manner for all-to-all traffic demands. New traffic demands are satisfied without re-routing and reconfiguration. To realize the strictly non-blocking property, the chromosome is decoded by a first fit approach incorporated with a local greedy improvement algorithm.
To enable a traffic grooming network, some kind of grooming algorithm must be implemented. In practice, we expect on-line algorithms with short processing time and small memory usage. Accordingly, some authors propose auxiliary graph based approaches, which can be adapted to satisfy various objectives. Based on the auxiliary graph, different grooming algorithms are proposed. This approach takes advantage of the flexibility of an auxiliary graph and of routing algorithms, so that simple algorithms can be constructed that take cross-layer information and heterogeneous node architectures into consideration. Different studies propose different auxiliary graph constructions.
Zhu et al. study the dynamic traffic grooming problem in mesh networks using a novel graph model [57]. This model creates an auxiliary graph that has an access layer, a lightpath layer and $W$ wavelength layers, where $W$ is the number of wavelengths on a fiber. Each layer has an input port and an output port. Different edges representing different node capabilities are inserted between ports. An edge has a property tuple that states its capacity and weight, which reflects the cost of each network element (transceiver, wavelength-link, wavelength converter, etc.) and/or a certain grooming policy. Instead of solving the subproblems of the traffic grooming problem independently, an auxiliary graph based integrated algorithm is proposed. Different grooming policies, namely Minimize the Number of Traffic Hops on the Virtual Topology (MinTHV), Minimize the Number of Traffic Hops on the Physical Topology (MinTHP), Minimize the Number of Lightpaths (MinLP) and Minimize the Number of Wavelength-Links (MinWL), are achieved by applying different weight-assignment functions to the auxiliary graph, and various objectives are evaluated under these policies. In [12], Zhu et al. study a more specific resource provisioning problem where network nodes have different grooming architectures. Without wavelength converters, the graph model proposed in [57] is simplified to consist of four layers: the access layer, the mux layer, the grooming layer and the wavelength layer. Note that by splitting the lightpath layer of [57] into the mux and grooming layers, the model is able to support different types of lightpaths, distinguished by the grooming capabilities of the source and/or destination nodes. Using this model, the authors illustrate how different traffic engineering optimization goals can be achieved through different grooming policies.
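The core mechanism, assigning policy-dependent weights to auxiliary edges and then running a shortest path algorithm, can be sketched in a few lines. The graph below is a drastic simplification of the layered model of [57] (no ports or layers, just lightpath edges versus wavelength-link edges), and the weight values are illustrative, not those used in the paper.

```python
import heapq

def dijkstra(edges, src, dst):
    """Shortest path in a weighted digraph; edges: node -> [(nbr, w)]."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def build_aux_edges(lightpaths, wl_links, policy):
    """Flattened auxiliary graph: lightpaths / wl_links are lists of
    (u, v, physical_hops). Weights encode the grooming policy:
    MinTHP charges physical hops; MinLP penalizes opening lightpaths."""
    edges = {}
    for u, v, hops in lightpaths:          # existing lightpaths
        w = hops if policy == "MinTHP" else 1.0
        edges.setdefault(u, []).append((v, w))
    for u, v, hops in wl_links:            # would-be new lightpaths
        w = hops if policy == "MinTHP" else 10.0
        edges.setdefault(u, []).append((v, w))
    return edges
```

With an existing 3-hop lightpath a→c and free wavelength links a→b→c, MinLP grooms onto the lightpath while MinTHP prefers the physically shorter two-hop route.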
As in [57], auxiliary graphs are constructed in [40] to solve the dynamic traffic grooming problem. The graph has two layers, the virtual topology layer and the physical topology layer. An improvement over the previous work is the introduction of link bundling (or, more accurately, wavelength bundling). In particular, the following constraints are taken into consideration: the transceiver constraint and the generalized wavelength continuity constraint, which allows nodes equipped with different kinds of conversion capability. The link bundled auxiliary graph (LBAG) simplifies the previous auxiliary graph representation by aggregating up to $W$ wavelengths (the number available on a link) into one arc of the LBAG. Based on the graph, an algorithm (SAG-LB) is proposed to find a feasible path and a feasible wavelength assignment. As multiple feasible paths may exist, grooming policies, namely least resource path first (LR) and self-adaptive least resource path first (SALR), are introduced to select the preferred one. To prefer paths that consume less scarce resources, the LR and SALR policies explicitly take the wavelength and transceiver resources into consideration. The simulation results show that in some cases LR and SALR outperform the least physical hop path first and least virtual hop path first policies in terms of blocking probability (note, however, that no wavelength conversion is assumed).
In [58], Farahmand et al. propose the Drop-and-Continue node architecture, which, in addition to setting up new lightpaths and/or utilizing existing lightpaths, allows two other operations, namely drop-and-continue and lightpath extension. These two operations can reduce the network cost. If a lightpath is terminated at an intermediate node and part of the traffic goes to the local port before traversing another lightpath that will reach the destination node, two pairs of transceivers are necessary. However, if a splitter is used, one transmitter can be saved because the intermediate node can drop the local traffic without disturbing the optically bypassed signal, except for the power loss. This is a common scenario in multicast networks where there is one source node and many receiving nodes. Considering this node architecture and these operations, the auxiliary graph method is used to solve the dynamic traffic grooming problem. The graph has a dedicated layer for each wavelength and different edges describing existing lightpaths, potential lightpaths, potential extended lightpaths and sub-lightpaths. By assigning them different weights using different grooming policies, which are essentially the same as those in [57], a shortest path algorithm is used to find the best solution. Note that without the intermediate dropping and extension capability, this algorithm becomes identical to that of [57].
In [17], the same authors propose an auxiliary graph based tree grooming algorithm dealing with dynamic unicast traffic on mesh networks. Based on a node architecture that supports light-trees, auxiliary graphs are constructed. A difference between the constructions of this paper and [58] is the introduction of the grooming layer. Four kinds of edges are distinguished, namely the AddEdge, the DropEdge, the pass-through edge (PTEdge) and the wavelength link edge (WLKEdge). Based on the auxiliary graph, a dynamic tree grooming algorithm (DTGA) is proposed. The DTGA has the weight assignment strategy (referred to as routing policies) as a sub-routine. As in [58], the edges are assigned weights by different policies, and the shortest path algorithm is used by the DTGA to set up a connection for an arrival. Specifically, the connection is set up either by establishing a new light-tree along the vertices on the optical hop or by extending an existing light-tree to cover the remaining vertices.
The study in [42] is again an auxiliary-graph based approach. Two graphs, namely the virtual graph and the layered graph, are introduced. In a virtual graph, the edges are so-called partially available (PAL) edges, which represent existing lightpaths that have spare capacity. The edges
in a layered graph, which consists of wavelength planes, are fully available (FAL) edges. The significance of these two types of edges is as follows. If a traffic demand is routed on the virtual graph, i.e., routed over PAL edges, no new lightpaths will be set up, hence it will not use additional transceivers. On the other hand, if a traffic demand is routed on the layered graph, it may obtain a route with fewer hops at the cost of additional transceivers. On top of these two graphs, a two-layered routing algorithm (TLRA) is proposed, which tries to route a traffic demand on the virtual graph first, then tries the layered graph if the first step fails. Obviously, the TLRA may fail to route a traffic demand that requires a route consisting of both existing lightpaths and new lightpaths. Therefore, a single layered routing algorithm (SLRA) based on an integrated graph is proposed. The shortcoming of SLRA, as the authors note, is that using the shortest path algorithm on an integrated graph may result in a route that uses more new transceivers, because the PAL edges and FAL edges are not distinguished in the integrated graph. In view of the pros and cons of TLRA and SLRA, the authors propose a third algorithm, the joint routing algorithm (JRA), that combines the two. Finally, the algorithms are compared in terms of blocking probability, and the results show that JRA, at a slight increase in complexity over TLRA, outperforms the others regardless of whether the number of transceivers is small or large.
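The TLRA control flow ("virtual graph first, layered graph second") can be sketched directly. The adjacency-dict representation and unweighted BFS below are our simplifications; [42] routes over wavelength planes with capacity checks that this sketch omits.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest hop path in an unweighted digraph, or None if unreachable."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def two_layered_routing(virtual_adj, layered_adj, src, dst):
    """TLRA-style sketch: route on the virtual graph (PAL edges, i.e.
    existing lightpaths with spare capacity, no new transceivers) first;
    fall back to the layered graph (FAL edges, new lightpaths)."""
    path = bfs_path(virtual_adj, src, dst)
    if path is not None:
        return ("existing_lightpaths", path)
    path = bfs_path(layered_adj, src, dst)
    if path is not None:
        return ("new_lightpaths", path)
    return ("blocked", None)
```

Note how the sketch exhibits the weakness the authors point out: a demand reachable only via a mix of PAL and FAL edges is blocked, which is what JRA repairs.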
Although the auxiliary graph approach benefits from its simplicity, Ho and Lee argue in [59] that, because of its heuristic nature, the algorithms proposed in [57] can be time-consuming in large scale mesh networks. A remedy is proposed by considering only part of the whole network when auxiliary graphs are constructed. Specifically, when a traffic demand arrives, instead of constructing an auxiliary graph with $n$ nodes, where $n$ is the number of nodes in the network, only $m$ candidate nodes that lie on the physical shortest path of the traffic demand are evaluated. If no lightpath can be found, neighbor nodes of the candidates are also included. This procedure can be repeated until a lightpath is found or resources are exhausted. In [60], based on the same idea, the authors propose a dynamic traffic grooming algorithm. In the first phase, to reduce the complexity of constructing an auxiliary graph of the entire network, a reachability graph that includes all the possible logical paths between the source and the destination is constructed. Based on this graph, the second phase finds the optimal route by a cost-constraint algorithm, where the cost of interest is the sum of the cost of grooming fabrics and the penalty paid for wasted wavelength bandwidth.
### 4.2.1.2 Rearrangeable
Traffic grooming algorithms are also studied in the context of reconfiguration, where one main concern is when and how the network should be reconfigured.
Kandula and Sasaki study the dynamic traffic grooming problem with rearrangement on ring networks [38]. The authors provide a reconfiguration algorithm, called bridge-and-roll (BR), such that the number of LTEs is reduced while keeping the network as bandwidth efficient as a fully opaque network. Putting different constraints on the resources, some interesting traffic models are introduced to illustrate the algorithm. In addition, to reduce the cost of traffic disruption, bounds are provided in terms of the number of BRs.
In [47], Gencata and Mukherjee study the reconfiguration problem under real dynamic traffic. The traffic is assumed to fluctuate slowly compared to the observation period. In each observation period, the network load is monitored and compared with two watermarks ($W_L$ and $W_H$). The possible actions are: set up a new lightpath, tear down a lightpath, or make no change during the observation period. Note that exactly one action can be taken during an observation period. The duration of the observation period is adjustable to make a trade-off between efficiency and traffic disruption. If some links are congested (i.e., the load on the link is greater than $W_H$), one new lightpath is set up
in the observation period. If some links are underutilized (i.e., the load on the link is lower than $W_L$), one lightpath is torn down. The authors first formulate the problem as an MILP with the goal of minimizing the maximum load, with constraints that ensure the correct action is triggered and the virtual topology changes accordingly. Then, the authors propose a heuristic adaptation algorithm. If some links are congested, the algorithm simply picks the link that has the maximum load and the maximum traffic component that traverses the link, then sets up a new lightpath for the selected traffic component.
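One observation period of the watermark heuristic can be sketched as follows. The watermark values and data layout are placeholders of our own; [47] additionally defines which lightpath to tear down, which this sketch reduces to the least loaded link.

```python
def observe_and_adapt(link_loads, traffic, w_low=0.2, w_high=0.8):
    """One observation period of a watermark-style adaptation heuristic.

    link_loads: dict link -> normalized load in [0, 1]
    traffic:    dict link -> list of (component_id, rate)
    Returns exactly one action per period, as in [47].
    """
    congested = {l: x for l, x in link_loads.items() if x > w_high}
    if congested:
        # Pick the most loaded link and its largest traffic component,
        # then request a new lightpath for that component.
        link = max(congested, key=congested.get)
        comp, _ = max(traffic[link], key=lambda cr: cr[1])
        return ("setup_lightpath_for", comp)
    underused = {l: x for l, x in link_loads.items() if x < w_low}
    if underused:
        link = min(underused, key=underused.get)
        return ("teardown_lightpath_on", link)
    return ("no_change", None)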
As we have mentioned in section 3.3.1, different node architectures may raise different problems. In addition to the Drop-and-Continue node architecture in [58] and the splitter-and-delivery switch architecture in [17], the authors of [15] study a two-layer groomer architecture. Based on this architecture, a dynamic traffic grooming algorithm is proposed that combines rerouting and segmented backup, employing backup-backup multiplexing. Traffic requests are multi-rate requests, and may or may not require protection. Therefore, to satisfy a new arrival with a protection requirement, both the primary and backup routes need to be set up. In case rerouting existing traffic is necessary to accommodate a new arrival, end-to-end backup routes of existing traffic are considered first, in order to avoid disrupting the existing traffic. If no route that is link-disjoint with the current primary and backup routes is found, all backup routes (end-to-end or segmented) are considered. The “best” route for a backup is one on which rerouting the backup can satisfy the new arrival. Finally, existing traffic without protection requirements or with end-to-end backups is considered: either traffic without a backup is rerouted on a link-disjoint new route, or an end-to-end backup route is rerouted on a route that is link-disjoint with the backup (not necessarily the primary).
The study in [41] examines rerouting algorithms and operations for dynamic traffic requests. When a traffic request arrives, rerouting is performed only if the existing routing fails to accommodate the request. Two approaches are proposed for the rerouting, namely rerouting at the lightpath level (RRAL) and rerouting at the connection level (RRAC). RRAL can be viewed as a special case of reconfiguration because the rerouting of lightpaths changes the virtual topology. RRAC, on the contrary, keeps the virtual topology unchanged while changing the traffic routing on the virtual topology. Both approaches have pros and cons. RRAL may be simpler in terms of time complexity because its input is the set of lightpaths, which are much fewer in number than the traffic requests. However, it is subject to a longer disruption because of the laser re-tuning time involved. RRAC, although more complicated, provides a finer granularity of adjustment. In practice, a combination of both approaches may be more appropriate. Based on these two approaches at different layers, two algorithms, called critical-wavelength-avoiding one-lightpath-limited and critical-lightpath-avoiding one-connection-limited, are proposed. The first initially finds the set of critical wavelengths of a path (the wavelengths that are used on only one link along the path), then reroutes the lightpath using a critical wavelength so that the traffic request can traverse the path. Similarly, the latter finds the set of critical connections for a path and a connection request, and reroutes the critical connection so that the new request can be satisfied.
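Finding the critical wavelengths of a path, the key subroutine of the first algorithm, amounts to counting, for each wavelength, on how many links of the path it is occupied. A minimal sketch, with the link/wavelength-set representation being our own:

```python
def critical_wavelengths(path_links, used):
    """Wavelengths occupied on exactly one link of the path.

    path_links: list of link ids on the path
    used:       dict link -> set of wavelengths in use on that link
    Rerouting the single lightpath holding such a wavelength frees a
    continuous wavelength along the whole path.
    """
    crit = set()
    all_wls = set().union(*used.values()) if used else set()
    for w in all_wls:
        if sum(1 for l in path_links if w in used[l]) == 1:
            crit.add(w)
    return crit
```

For a two-link path where wavelength 1 is busy on one link and wavelength 2 on both, only wavelength 1 is critical: rerouting one lightpath clears it end to end.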
### 4.2.2 Objective is fairness
Another objective of interest in traffic grooming networks is fairness, as mentioned in section 3.6. The main concern is that traffic with lower bandwidth requirements should not starve traffic with higher bandwidth requirements, i.e., traffic with different bandwidth requirements should experience similar blocking performance. Otherwise, a user sending a large file would be forced to request low bandwidth and accept a longer transfer time. Indeed, fairness is one of the important metrics
of QoS, which is generally implemented by Call Admission Control (CAC). While CAC comes under the general area of grooming policy design, it is a distinct area that has received significant attention and is worth mentioning separately. As one of the major functionalities that the control plane needs to implement, CAC has been extensively studied in signaling-based networks (e.g., ATM), where a call is accepted or rejected with respect to a pre-established agreement between the user and the service provider, or to the resource availability. In the context of optical grooming networks, we expect that some “old” concepts (e.g., QoS) will be re-examined by taking the virtual layer into consideration. As we have mentioned, when a new call arrives, the basic actions to take are *accept* and *reject*. Without CAC, a call will be rejected only if the available resources are unable to accommodate it. However, in a network with service differentiation, this simple strategy may not lead to an optimal overall utilization or revenue.
In [27], a CAC algorithm is proposed to deal with capacity fairness, which is achieved when the blocking probability of $m$ calls of line-speed $n$ equals the blocking probability of $n$ calls of line-speed $m$, for every pair $m, n$ of line-speeds. The overall blocking probability is defined as the blocking probability per unit line-speed of the call requests. The fairness ratio $F_r$ is defined as the ratio of the estimated blocking probabilities of calls of the lowest and highest line-speeds. The goal of the CAC algorithm is therefore to make $F_r$ as close to 1 as possible while keeping the overall blocking probability acceptable.
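The fairness ratio and a toy admission rule built on it can be sketched as follows; the rejection rule and margin are our illustration, not the algorithm of [27].

```python
def fairness_ratio(block_prob):
    """F_r: blocking of the lowest line-speed class over that of the
    highest; 1.0 means perfect capacity fairness.
    block_prob: dict line_speed -> estimated blocking probability."""
    lo = block_prob[min(block_prob)]
    hi = block_prob[max(block_prob)]
    return lo / hi if hi > 0 else float("inf")

def cac_accept(call_speed, free_capacity, block_prob, margin=0.1):
    """Toy CAC rule: pre-emptively reject lowest-speed calls when F_r
    is already far below 1, reserving capacity for high-speed classes."""
    if call_speed > free_capacity:
        return False
    if call_speed == min(block_prob) and fairness_ratio(block_prob) < 1 - margin:
        return False
    return True
```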
Mosharaf et al. address the wavelength provisioning problem in [29]. A simple 2-hop tandem network with three classes of traffic (traffic traversing the first hop only, traffic traversing the second hop only, and traffic traversing both hops) is studied. The authors propose a dynamic partitioning approach: the number of wavelengths allocated to each class of traffic is a function of the current state. This problem is formulated as a Markov Decision Process (MDP). When a wavelength request terminates, the network decides for which class this wavelength is reserved. The best policy (the set of best actions for each possible state) is obtained by the Policy Iteration algorithm, which maximizes the overall weighted utilization using the discounted cost model with infinite horizon. In [49], the same authors extend the work of [29] to grooming networks, where traffic demands are usually subwavelength, with the goal of minimizing unfairness. Considering a single-hop single wavelength network, traffic is classified according to the bandwidth it requires; the network state is then described by the number of existing calls of each class. Using this simple model, the optimal policy is examined. The authors also propose a heuristic to decompose tandem and ring networks by pre-allocating wavelengths for traffic with different $o-d$ pairs such that overlapping $o-d$ pairs do not share wavelengths (note that this is possible because the routing for all $o-d$ pairs is predetermined in the ring and tandem topologies). The numerical results show that a substantial improvement in terms of fairness and utilization can be achieved compared to the complete sharing and complete partitioning policies.
As an auxiliary-graph-based approach, the authors of [48] study the fairness problem using an auxiliary graph model (AGM), which consists of wavelength planes and several kinds of edges: wavelength link edges (WLEs) represent the availability of wavelengths, groomable link edges (GLEs) the availability of grooming capability, virtual link edges (VLEs) the availability of transceivers, and directed link edges (DLEs) the source and destination of the traffic demand. In addition to grooming policies, two fairness policies are proposed, with fairness evaluated in terms of the blocking probabilities of traffic demands with heterogeneous requests. The first is the wavelength quota policy (WQP), which sets a wavelength quota for each connection class (rate); since traffic demands requesting higher speed are more likely to be blocked, they receive a larger quota. Based on the quota, a dynamic grooming algorithm called the wavelength quota method (WQM) is proposed. The second is the transceiver quota policy (TQP), which counts transceivers instead of wavelengths and uses the transceiver quota to groom heterogeneous traffic demands as fairly as possible.
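The quota idea behind WQP can be sketched as a simple admission check; the class names and quota values below are illustrative, and the actual WQM algorithm in [48] additionally grooms demands over the auxiliary graph:

```python
# Sketch of quota-based admission in the spirit of WQP: each connection class
# holds at most its quota of wavelengths; a demand of a class is admitted only
# while that class is below quota. Classes and quotas here are illustrative.

class QuotaAdmission:
    def __init__(self, quota):
        self.quota = dict(quota)          # class -> max wavelengths
        self.in_use = {c: 0 for c in quota}

    def admit(self, cls):
        """Admit a demand of class `cls` if its quota is not exhausted."""
        if self.in_use[cls] < self.quota[cls]:
            self.in_use[cls] += 1
            return True
        return False                      # blocked: quota exhausted

    def release(self, cls):
        self.in_use[cls] -= 1

# Higher-rate classes, being more likely to block, receive larger quotas.
ctrl = QuotaAdmission({'OC-3': 2, 'OC-48': 4})
```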
4.2.3 Objective is OEO
Since all-optical networks remain impractical, optical signals still need to be processed electronically. A specific objective to optimize, therefore, is the OEO cost, which may comprise different metrics of interest, such as the number of LTEs (or SADMs, electronic ports, etc.), the number of wavelengths, and the amount of OEO processing.
In [26], Sasaki and Gerstel study the dynamic traffic grooming problem for several typical WDM SONET ring architectures that guarantee no blocking. The primary network cost is the number of SADMs, while the secondary concern is the number of wavelengths. For WDM unidirectional path switched ring (UPSR) and two-fiber bidirectional line switched ring (BLSR/2) networks, both the cases of a limited and an unlimited number of wavelengths are studied. For UPSR networks with an unlimited number of wavelengths, a lower bound is derived by assuming that traffic is allowed to be cross-connected at every node. Against this lower bound, a single-hub architecture that guarantees wide-sense non-blocking operation, as well as a node-grouped architecture designed for static traffic, are compared. For the wavelength-limited UPSR case, the single-hub architecture and an incremental architecture are compared. The incremental architecture is a simplified version of the incremental network described in [45], where, around the ring, nodes alternate between having the maximum and the minimum number of ADMs (i.e., the trivial upper and lower bounds on the number of ADMs at a node). It is shown that the incremental architecture is rearrangeably non-blocking, and also wide-sense non-blocking for incremental traffic [45]. For BLSR/2 networks with an unlimited number of wavelengths, the single-hub architecture is wide-sense non-blocking and lowers the bandwidth requirements, because traffic may be routed in either direction around the ring. For the wavelength-limited BLSR/2 case, the double-hub network is rearrangeably non-blocking [45], and its SADM cost is close to that of the single-hub network.
In [23], Berry and Modiano also address dynamic traffic in SONET ring networks. The problem is to minimize the number of ADMs while being able to satisfy a set of allowable traffic requests. The authors first derive a lower bound on the number of ADMs, corresponding to the no-grooming solution. A bipartite matching approach is then proposed to combine two solutions so that either set of traffic requests can be satisfied while keeping the number of ADMs minimized. To study a specific and realistic dynamic traffic model, the $t$-allowable traffic model is introduced (see Section 3.5). The authors lower bound the number of ADMs for this model and again formulate it as a bipartite matching problem. Using Hall’s theorem, a necessary and sufficient condition to support $t$-allowable traffic is developed, and an algorithm to remove unnecessary ADMs is proposed. The authors extend the work to support dynamic traffic in a strictly non-blocking manner and show how hub nodes and tunability can further reduce the number of ADMs.
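The matching primitive underlying this kind of argument can be sketched with a standard augmenting-path algorithm; the bipartite graph below is illustrative, and this is the generic matching routine rather than the ADM-minimization procedure of [23] itself:

```python
# Augmenting-path maximum bipartite matching. Hall's condition for the left
# vertex set holds exactly when every left vertex ends up matched.

def max_bipartite_matching(adj):
    """adj: left vertex -> list of right vertices. Returns (size, right->left)."""
    match = {}  # right vertex -> left vertex currently matched to it

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matched = sum(augment(u, set()) for u in adj)
    return matched, match

# Perfect matching of the left side exists, so Hall's condition is satisfied.
size, _ = max_bipartite_matching({'a': [1, 2], 'b': [1], 'c': [2, 3]})
```

Here `size == 3`, so every left vertex is matched; shrinking the neighborhoods (e.g., `{'a': [1], 'b': [1]}`) violates Hall's condition and leaves a vertex unmatched.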
In [22], Zhao and Hu study the deterministic traffic model and present an ILP formulation with the goal of minimizing the number of ADMs. The authors study both unidirectional and bidirectional rings and their corresponding ILP formulations. A nice observation for unidirectional rings, proved in [56], is that the integer constraint on the variable $x_{i,j,l}^r$, the number of traffic circuits from node $i$ to $j$ in the $r$th traffic requirement that are multiplexed onto wavelength $l$, can be relaxed, turning the ILP into an MILP formulation that is easier to solve. Unfortunately, this is not true for the bidirectional case: because of the routing decision involved (clockwise or counter-clockwise), the dynamic traffic grooming problem in bidirectional rings is much harder to solve, and some heuristic methods are proposed. Keeping the same set of constraints, the cost function is slightly modified to integrate the cost of wavelengths and the cost of ADMs. Under this formulation, the original problem of minimizing the number of ADMs and the problem of minimizing the number of wavelengths become two special cases of one general formulation. Through experiments, the authors find that minimizing the number of wavelengths requires much less computational effort. It follows that by giving the cost of wavelengths a much larger weight and using a grooming factor of 1, the modified ILP formulation can provide an initial solution relatively easily. A heuristic method is then used to aggregate sub-wavelength circles into wavelengths, and the authors also show how to improve the solution using simulated annealing.
To solve the same design problem, i.e., ring networks with deterministic traffic, two traffic splitting methods, namely traffic-cutting and traffic-dividing, are proposed in [39] to manipulate the traffic matrices. Starting from the all-optical (one-hop) topology, the traffic-cutting method cuts a lightpath from source to destination at an intermediate node without adding additional ADMs. The benefit is that the traffic component can change its wavelength at the dropping node, which turns out to be more efficient in terms of the number of ADMs and wavelengths required. The traffic-dividing method allows traffic bifurcation, that is, different parts of a traffic component can be routed on different lightpaths. The authors then propose a synthesized-splitting method that combines the traffic-cutting and traffic-dividing methods, and develop a genetic algorithm so that a given set of traffic matrices is satisfied in a strictly non-blocking manner.
Mesh networks with deterministic traffic are studied in [24]. The authors first present an ILP formulation that explicitly rules out cycles in lightpath routing. The objective of the ILP is to minimize the number of transceivers. To solve the problem, a simple heuristic utilizing the time-varying state information is proposed. The sum of a traffic component’s demands across all traffic matrices is used as a metric; based on this metric, a traffic component is selected and either routed on the existing network or on a newly established lightpath (the choice is controlled by a predetermined parameter). For the heuristic, the goal is to minimize both the number of transceivers and the number of wavelengths.
The study in [52] addresses the problem of deciding, based on the network state, when traffic grooming should be performed. The network topology has two layers: the optical layer, where optical express links (essentially lightpaths) are connected by OXCs, and the physical layer, where fiber links are connected by DXCs. Note that the OXCs and DXCs are physically decoupled; this differs from other studies, where a grooming node is equipped with both an OXC and DXCs. Conceptually, this topology is formed by detaching the OXC and DXC of a grooming node and adding extra transponders. The authors consider the cost of a connection as a function of the DXC ports and OXC ports. A traffic request can be routed either on the physical topology (i.e., through DXCs) or on the logical topology (i.e., through OXCs). To decide whether traffic requests should be routed on the physical topology or groomed onto optical express links and routed on the logical topology, a threshold parameter $\theta$ is defined and used as follows: if the amount of traffic traveling from a DXC $s$ to another DXC $y$ exceeds $\theta$, these traffic demands are groomed onto optical express links connecting DXC $s$ and DXC $y$. Given the cost function, $\theta$ should be tuned so that the cost is minimized. Both centralized and decentralized algorithms are proposed, and to find the optimal $\theta$, both ring and mesh networks with different traffic intensities are simulated.
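The threshold rule can be sketched as follows; the DXC names, demand units, and value of $\theta$ are illustrative:

```python
# Minimal sketch of the threshold rule of [52]: aggregate per-pair demand and,
# when the total between a DXC pair exceeds theta, promote that pair onto an
# optical express link routed through the OXC layer.

def pairs_to_groom(demands, theta):
    """demands: list of (src_dxc, dst_dxc, amount). Returns the pairs whose
    aggregate demand exceeds theta and should be groomed onto express links."""
    total = {}
    for s, d, amount in demands:
        total[(s, d)] = total.get((s, d), 0) + amount
    return {pair for pair, t in total.items() if t > theta}

demands = [('A', 'B', 3), ('A', 'B', 4), ('A', 'C', 2), ('B', 'C', 6)]
groomed = pairs_to_groom(demands, theta=5)   # {('A', 'B'), ('B', 'C')}
```

Tuning theta then trades DXC port cost against OXC port cost, which is the optimization examined in [52].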
Kuri et al. study a mathematical model for *Scheduled Lightpath Demands* (SLDs) [16], which are expressed in units of whole lightpaths. By introducing *Multi-Granularity Switching Optical Cross-Connects* (MG-OXCs), a waveband layer is inserted between the physical layer and the traffic demands. In the multi-granularity switching network, the traffic demands are mapped into wavebands that are routed and switched by MG-OXCs. Hence, in this context, grooming refers to aggregating (disaggregating) lightpaths into (from) waveband-switching connections of the virtual topology. Similar to the wavelength assignment problem in wavelength routed networks, SLDs
are assigned routed scheduled band groups (RSBGs); however, there is no counterpart of the wavelength continuity constraint. The SLD Routing (SR) problem and the SLD Routing and Grooming (SRG) problem are then formulated as combinatorial optimization problems with the objective of minimizing cost (given as a function of the number of ports). In [50], the authors extend this work by taking subwavelength traffic demands into consideration: a traffic demand can be decomposed into SLDs, which request a number of lightpaths, and a Scheduled Electrical Demand (SED), which requests part of a lightpath. The work is based on WDM networks with hybrid node architectures, i.e., each node consists of both an OXC and an EXC (the same as a DXC). The problem then aims at finding the sizes of the OXCs and EXCs that allow a network of a given topology to serve a given set of Scheduled Demands (SDs) at the lowest cost, evaluated as a function of the number of OXC optical ports and EXC electrical ports. To solve this problem for SEDs, the authors propose a simulated-annealing-based routing and grooming strategy. When two SEDs are groomed, they are replaced by an aggregated demand plus a set of additional demands arising from their differences in traffic and route; multiple SEDs are thus groomed pairwise, iteratively.
The study in [51] extends a previous paper by the same authors, where dominating-set algorithms are proposed for the placement of wavelength converters. In this paper, the main concern is the placement of grooming nodes (G-nodes). The traffic model studied is non-uniform: nodes are randomly assigned different weights, and nodes with higher weights generate more traffic. The problem is thus modeled as the sparse grooming problem and formulated as the $k$-weighted minimum dominating set of the graph, which deals with finding the smallest set $D$ of vertices of a graph $G(V,E)$ such that every vertex $v$ not in $D$ is at distance $k$ or less from at least one node in $D$; this problem is NP-complete. A distributed voting algorithm is proposed, along with the messages exchanged among nodes; using these messages, a Master is selected and serves as the G-node. The simulation results show that, by appropriately selecting the G-nodes, the benefits of full grooming can be achieved with comparatively few nodes equipped with such capability.
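A centralized greedy stand-in for this selection can illustrate the dominating-set idea; the graph and value of $k$ below are illustrative, node weights are omitted, and the actual algorithm in [51] is distributed:

```python
# Greedy sketch of the k-distance dominating-set heuristic: repeatedly pick
# the node whose distance-k ball covers the most still-uncovered nodes.

from collections import deque

def ball(adj, src, k):
    """All nodes within distance k of src (plain BFS)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def greedy_dominating_set(adj, k):
    uncovered, dom = set(adj), set()
    while uncovered:
        # ties broken by node id so the result is deterministic
        best = max(sorted(adj), key=lambda u: len(ball(adj, u, k) & uncovered))
        dom.add(best)
        uncovered -= ball(adj, best, k)
    return dom

# Path graph 0-1-2-3-4 with k = 1: two G-nodes suffice.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
gnodes = greedy_dominating_set(path, k=1)
```

On this path graph the greedy pass selects nodes 1 and 3, and every node is within distance 1 of a selected G-node.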
4.2.4 Other approaches
In [34], the problem studied is a virtual topology design problem in mesh networks. The authors propose a formulation of the multi-hop dynamic traffic grooming problem that aims at minimizing network resources. The main difference from formulations in other works is that the blocking probability is included as a constraint. The blocking model proposed is based on the concept of grooming links (g-links), where a g-link between two nodes is the set of possible lightpaths between them; this model is used to impose constraints on the number of lightpaths needed on g-links. The authors then present an ILP formulation that also imposes constraints on the maximum amount of by-pass traffic, the number of ports at each node, and the conversion capabilities.
In [64], the authors compare the performance and cost of three network architectures: point-to-point, single-hop, and multi-hop. To take the network cost into account, the metric compared is the blocking probability versus the total arrival rate per dollar. The total network cost consists of three parts: the line cost, the transmitter cost, and the receiver cost. The line cost and the node cost (transmitter and receiver costs) are correlated by a variable that can be adjusted to reflect the impact of the line-to-node cost ratio. The cost of each architecture is determined in two steps. First, an off-line network design step determines the hardware cost (the number of wavelengths, transmitters, and receivers) for each architecture. Second, an on-line connection provisioning step determines how the resources are used to accommodate dynamic traffic requests; in this step, a simple auxiliary-graph-based algorithm is used for each architecture. Simulation results show that the multi-hop architecture is generally the best under a variety of cost scenarios. An interesting observation is that while the point-to-point architecture obviously has the lowest blocking probability, it is not the best choice once the cost of the architecture (modeled as in the paper) is taken into account.
In [43], Elsayed addresses not only the dynamic routing and wavelength assignment problem but also the fiber selection problem. The network studied has multiple fibers between each node pair. The original physical graph is expanded into $W$ copies, where $W$ is the number of wavelengths available on each fiber link; since the nodes are wavelength-continuity constrained, these copies are isolated from each other. Based on this layered graph, a modified Dijkstra’s algorithm with reduced complexity is proposed. Although the node architecture imposes the wavelength-continuity constraint, routing over the virtual topology and wavelength assignment are solved implicitly by the layered graph. The author proposes two methods to update the link cost: the available shortest paths (AVSP) method, which tends to fill the lowest-numbered wavelengths, and the least utilized path (LUP) method, which tends to balance the traffic across the available wavelengths. Once a path from source to destination is found, a fiber selection algorithm is called; two selection methods, least-loaded fiber selection (LLF) and best-fitting fiber selection (BFF), are discussed. Finally, the algorithms are compared in terms of blocking probability, average bandwidth of accepted connections, average path length of accepted connections, and wavelength fairness under both uniform and non-uniform traffic patterns.
Srinivasan and Somani propose an extended Dijkstra shortest-path algorithm for WDM grooming networks [9]. Specifically, every node is assumed to be wavelength-continuity constrained. A path vector is defined by the available capacity and the hop count. Two path vectors at a wavelength-continuity constrained node are combined by taking the minimum capacity, which differs from the traditional Dijkstra algorithm, where costs are linear (i.e., summable). The authors then propose different policies to select paths based on the path vectors, namely Widest-Shortest Path Routing (WSPR), Shortest-Widest Path Routing (SWPR), and Available Shortest Path Routing. Finally, the algorithm is examined in terms of the request blocking probability, network utilization, average path length of an established connection, average shortest-path length of an accepted request, and average capacity of an accepted request.
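A WSPR-style search can be sketched as a Dijkstra variant on (hop-count, capacity) labels, where hop count is additive and capacity combines by minimum, so the lexicographic key (hops, -capacity) applies; the topology and capacities below are illustrative:

```python
# Widest-shortest path: among minimum-hop paths, prefer the one with the
# largest bottleneck capacity. Labels combine as hops -> sum, capacity -> min.

import heapq

def widest_shortest_path(links, src, dst):
    """links: dict (u, v) -> residual capacity; links are undirected."""
    adj = {}
    for (u, v), cap in links.items():
        adj.setdefault(u, []).append((v, cap))
        adj.setdefault(v, []).append((u, cap))
    best = {src: (0, float('inf'))}          # node -> (hops, bottleneck cap)
    heap = [(0, -float('inf'), src, [src])]  # (hops, -capacity, node, path)
    while heap:
        hops, neg_cap, u, path = heapq.heappop(heap)
        cap = -neg_cap
        if (hops, cap) != best.get(u):
            continue                          # stale label, skip
        if u == dst:
            return path, cap, hops
        for v, link_cap in adj[u]:
            label = (hops + 1, min(cap, link_cap))
            if v not in best or (label[0], -label[1]) < (best[v][0], -best[v][1]):
                best[v] = label
                heapq.heappush(heap, (label[0], -label[1], v, path + [v]))
    return None

links = {('A', 'B'): 5, ('B', 'D'): 5, ('A', 'C'): 10, ('C', 'D'): 3}
route = widest_shortest_path(links, 'A', 'D')
```

Both candidate routes take 2 hops, so WSPR picks A-B-D with bottleneck capacity 5 over A-C-D with bottleneck 3. SWPR would instead sort the key as (-capacity, hops).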
The authors of [65] present a comprehensive study of the comparative performance of different dynamic routing algorithms under different node architectures: constrained grooming (CG), wavelength-level grooming (WG), and full grooming (FG). The metrics in WDM grooming networks are classified as concave (e.g., the capacity of a path is the minimum capacity among the corresponding links), additive (e.g., the length of a path is the sum of the lengths of the corresponding links), and multiplicative (e.g., the reliability of a path is the product of the link reliabilities). Accordingly, depending on the node architecture, the link-state vectors are combined using different operations to form the path vectors. After the data collection and construction stage, different source routing algorithms are implemented, namely shortest-widest path routing (SWPR), widest-shortest path routing (WSPR), and available shortest path routing (ASP); SWPR and WSPR are destination-specific, while ASP is request-specific. In addition, assuming traffic bifurcation is allowed, a dispersity routing algorithm is also evaluated. These algorithms are compared on the NSF network, assuming every node is a WG node, in terms of blocking probability, average path length of an accepted connection, average shortest-path length of an accepted request, and network utilization. The trade-off between dispersity routing and varying grooming capability is also studied. A counter-intuitive result is that increasing the grooming capability in the network can degrade the performance of the WSPR algorithm.
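The three combination rules can be sketched as a single fold over link vectors; the link values below are illustrative:

```python
# Path metrics combine differently along a path: capacity is concave (min),
# length is additive (sum), reliability is multiplicative (product).

from functools import reduce

def combine(link_vectors):
    """Fold (capacity, length, reliability) link vectors into a path vector."""
    return reduce(
        lambda p, l: (min(p[0], l[0]), p[1] + l[1], p[2] * l[2]),
        link_vectors,
    )

# Three links along a candidate path: (capacity, length, reliability).
path_vector = combine([(16, 100, 0.99), (4, 250, 0.999), (8, 50, 0.98)])
```

The resulting path vector has bottleneck capacity 4, total length 400, and reliability equal to the product of the link reliabilities; which components matter depends on the node architecture, as discussed in [65].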
The study in [18] addresses the algorithm design problem for multicast traffic in WDM grooming networks. The authors first introduce a node architecture that supports multicast traffic. To model the light-tree, a hypergraph logical topology is proposed, in which a light-tree is represented as an arc (referred to as a hyperarc) and the destination nodes of a multicast session are represented by a supernode. Based on this hypergraph logical topology, two traffic grooming approaches are proposed: single-hop grooming and multi-hop grooming. In single-hop grooming, the hypergraph is searched for an available hyperarc for the new multicast request. In multi-hop grooming, a hyperarc with the same supernode as the request is found, together with a single-hop lightpath from the source node of the request to the source node of the hyperarc; the multicast session is then established on the combination of the single-hop lightpath and the light-tree. Using these two grooming approaches, several heuristics are proposed.
5 Conclusion
The dynamic traffic grooming problem is an important area for the research community as well as for service providers. In today’s WDM networks, the increasing number of wavelengths available on an optical fiber, together with optical and electronic equipment of varying functionality, enables networks that are not only increasingly complex but also more and more agile. Accordingly, such networks provide more opportunity to trade off complexity (usually translated into cost) against agility. In this sense, dynamic traffic grooming is envisioned to be an essential research area in the future.
In this paper, we have presented a literature survey of the dynamic traffic grooming area. We started from the physical layer by discussing different optical equipment and node architectures. We then classified dynamic traffic grooming into design and analysis problems, and discussed the formulation of the design problem as optimization or decision problems. Following this classification, we surveyed the literature thoroughly.
Although the dynamic traffic grooming problem has already been extensively studied, many practically important problems remain open. In the analysis class, models of limited complexity are needed: highly complex models are theoretically useful but may not see extensive practical application. Similarly, models need to take mesh topologies, multi-rate traffic, and link-load correlation into consideration. Current approaches often make restrictive assumptions, such as very simple topologies or link independence, that make them less useful to the network designer, even though they may provide good insight into the nature of the problem. In addition, since networks are generally upgraded rather than built from scratch, we expect networks to very often have heterogeneous architectures. Because of the distinctions between traffic grooming networks and traditional data/circuit networks, this problem is of particular interest.
In the class of design problems, we believe that, under the umbrella of dynamic traffic grooming, many more interesting and practical problems remain to be discovered and solved. For example, some particular traffic models may be of practical interest. As mentioned above, the Scheduled Lightpath Demand (SLD) traffic model has been generalized in several directions, including subwavelength traffic; however, some interesting and practically important generalizations (such as sliding-window scheduled demands) remain unaddressed in the subwavelength context. Another interesting problem is that of translating QoS requirements between different levels of the network. It is envisioned that GMPLS will be widely deployed as a management layer in next-generation networks; therefore, approaches that dynamically groom subwavelength LSPs onto lightpaths while taking QoS requirements (e.g., delay) into consideration need to be studied.
As the field evolves, traffic grooming may be seen as a general problem of network design where the cost component is largely concentrated into specialized network node equipment (as opposed
to bandwidth, in yesteryear’s networks). In the near future, minimizing OEO may well cease to be a worthwhile goal, if device technology makes appropriate advances. However, the presence of a large amount of dark fiber in the ground makes it likely that some other nodal equipment, such as optical drop-and-continue, wavelength converters, OTDM switches, or some other emerging technology will dominate network costs.
Another interesting development is likely to come from waveband grooming. Wavebands, or coarse wavelengths, are optical channels created by less selective optical filters and transponder equipment, so that a number of ordinary lightpaths can be optically forwarded through a single waveband port. Wavebands thus introduce a third layer of topology into the design problem, and waveband grooming has already drawn the attention of researchers in the static context; literature on dynamic waveband grooming is likely to appear soon.
Lastly, the lessons learned from traffic grooming may be applied to other areas of research in the future. The emergence of wireless networks as viable metro-area networks makes such networks, and heterogeneous networks formed of optical and wireless domains, an interesting area of research. Such an environment is typically more dynamic than wireline networks, and the desire to provide SLAs to wireless LAN customers introduces the theme of QoS for sub-circuit flows, which is a distinguishing characteristic of traffic grooming. In short, we expect many interesting and far-reaching research results to develop out of the comparatively new research area of dynamic traffic grooming. We hope that our survey will, in a modest way, help researchers newly entering this field.
References
[1] L. Berger (Ed.), “Generalized multi-protocol label switching (GMPLS) signaling functional description.” Internet Engineering Task Force RFC, January 2003, no. 3471.
[2] E. Mannie (Ed.), “Generalized multi-protocol label switching (GMPLS) architecture.” Internet Engineering Task Force RFC, October 2004, no. 3945.
[3] R. Dutta and G. Rouskas, “Traffic grooming in WDM networks: Past and future,” IEEE Network, pp. 46–56, 2002.
[4] E. Mannie and D. Papadimitriou, “Generalized multi-protocol label switching (GMPLS) extensions for synchronous optical network (SONET) and synchronous digital hierarchy (SDH) control.” Internet Engineering Task Force RFC, October 2004, no. 3946.
[5] D. Papadimitriou, J. Drake, J. Ash, A. Farrel, and L. Ong, “Requirements for generalized mpls (GMPLS) signaling usage and extensions for automatically switched optical network (ASON).” Internet Engineering Task Force RFC, July 2005, no. 4139.
[6] K. Kompella and Y. Rekhter (Eds.), “Routing extensions in support of generalized multi-protocol label switching (GMPLS).” Internet Engineering Task Force RFC, October 2005, no. 4202.
[7] K. Zhu, H. Zang, and B. Mukherjee, “A comprehensive study on next-generation optical grooming switches,” IEEE Journal on Selected Areas in Communications, vol. 21, no. 7, pp. 1173 – 86, 2003.
[8] W. Goralski, SONET. McGraw-Hill, 2000.
[9] R. Srinivasan and A. Somani, “Request-specific routing in WDM grooming networks,” *2002 IEEE International Conference on Communications. Conference Proceedings. ICC 2002 (Cat. No.02CH37333)*, vol. 5, pp. 2876 – 80, 2002.
[10] B. Bacque and D. Oprea, “R-OADM architecture now you can control the light,” 2003, architectural white paper.
[11] A. Rasala and G. Wilfong, “Strictly non-blocking WDM cross-connects for heterogeneous networks,” *Proceedings of the Thirty Second Annual ACM Symposium on Theory of Computing*, pp. 514 – 23, 2000.
[12] K. Zhu, H. Zhu, and B. Mukherjee, “Traffic engineering in multigranularity heterogeneous optical WDM mesh networks through dynamic traffic grooming,” *IEEE Network*, vol. 17, no. 2, pp. 8 – 15, 2003.
[13] R. Srinivasan and A. Somani, “A generalized framework for analyzing time-space switched optical networks,” *Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)*, vol. 1, pp. 179 – 88, 2001.
[14] S. Ramasubramanian and A. Somani, “Analysis of optical networks with heterogeneous grooming architectures,” *IEEE/ACM Transactions on Networking*, vol. 12, no. 5, pp. 931 – 43, 2004.
[15] H. Madhyastha and C. Siva Ram Murthy, “Efficient dynamic traffic grooming in service-differentiated WDM mesh networks,” *Computer Networks*, vol. 45, no. 2, pp. 221 – 35, 2004.
[16] J. Kuri, N. Puech, and M. Gagnaire, “Routing and grooming of scheduled lightpath demands in a multi-granularity switching network: a mathematical model,” *Optical Network Design and Modeling*, 2005.
[17] X. Huang, F. Farahmand, and J. Jue, “An algorithm for traffic grooming in WDM mesh networks with dynamically changing light-trees,” *GLOBECOM ’04. IEEE Global Telecommunications Conference (IEEE Cat. No.04CH37615)*, vol. 3, pp. 1813 – 17, 2004.
[18] A. Khalil, C. Assi, A. Hadjiantonis, G. Ellinas, N. Abdellatif, and M. Ali, “Multicast traffic grooming in WDM networks,” *Canadian Conference on Electrical and Computer Engineering 2004 (IEEE Cat. No.04CH37513)*, vol. 2, pp. 785 – 8, 2004.
[19] L. Zhang and G.-S. Poo, “A dynamic traffic grooming algorithm in multigranularity heterogeneous optical WDM mesh networks,” *ICICS-PCM 2003. Proceedings of the 2003 Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing and Fourth Pacific-Rim Conference on Multimedia (IEEE Cat. No.03EX758)*, vol. 2, pp. 1286 – 9, 2003.
[20] H. Kim, S. Ahn, and J. Chung, “Dynamic traffic grooming and load balancing for GMPLS-centric all optical networks,” *Knowledge-Based Intelligent Information and Engineering Systems. 8th International Conference, KES 2004. Proceedings (Lecture Notes in Artificial Intelligence Vol.3215)*, vol. 3, pp. 38 – 44, 2004.
[21] S. Huang and R. Dutta, “Research problems in dynamic traffic grooming in optical networks,” in *Proceedings of First International Workshop on Traffic Grooming (BROADNETS’04)*, 2004.
[22] C. Zhao and J.Q. Hu, “Traffic grooming for WDM rings with dynamic traffic,” manuscript, 2003.
[23] R. Berry and E. Modiano, “Reducing electronic multiplexing costs in SONET/WDM rings with dynamically changing traffic,” *IEEE Journal on Selected Areas in Communications*, vol. 18, no. 10, pp. 1961 – 71, 2000.
[24] N. Srinivas and C. Siva Ram Murthy, “Design and dimensioning of a WDM mesh network to groom dynamically varying traffic,” *Photonic Network Communications*, vol. 7, no. 2, pp. 179 – 91, 2004.
[25] B. Wang, T. Li, X. Luo, and Y. Fan, “Traffic grooming under a sliding scheduled traffic model in WDM optical networks,” in *Opticomm*, Oct 2004.
[26] G. Sasaki and O. Gerstel, “Minimal cost WDM SONET rings that guarantee no blocking,” in *Optical Networks Magazine*, vol. 4, Oct 2000.
[27] S. Thiagarajan and A. Somani, “Capacity fairness of WDM networks with grooming capabilities,” *Optical Networks Magazine*, vol. 2, no. 3, pp. 24 – 32, 2001.
[28] Y. Xu, S.-C. Xu, and B.-X. Wu, “Strictly nonblocking grooming of dynamic traffic in unidirectional SONET/WDM rings using genetic algorithms,” *Computer Networks*, vol. 41, no. 2, pp. 227 – 45, 2003.
[29] K. Mosharaf, J. Talim, and I. Lambadaris, “A Markov decision process model for dynamic wavelength allocation in all-optical WDM networks,” in *GLOBECOM 2003*, vol. 5, Dec 2003.
[30] A. Washington and H. Perros, “Call blocking probabilities in a traffic groomed tandem optical network,” *Computer Networks* (special issue dedicated to the memory of Professor Olga Casals, Blondia and Stavrakakis, Eds.), vol. 45, 2004.
[31] C. Xin, C. Qiao, and S. Dixit, “Analysis of single-hop traffic grooming in mesh WDM optical networks,” in *Opticomm*, Oct 2003.
[32] C. Xin and C. Qiao, “Performance analysis of multi-hop traffic grooming in mesh WDM optical networks,” in *The 12th International Conference on Computer Communications and Networks*, 2003.
[33] R. Srinivasan and A. Somani, “Analysis of multi-rate traffic in WDM grooming networks,” *Proceedings Eleventh International Conference on Computer Communications and Networks (Cat. No.02EX594)*, pp. 296 – 301, 2002.
[34] C. Xin, B. Wang, X. Cao, and J. Li, “Formulation of multi-hop dynamic traffic grooming in WDM optical networks,” *Proceedings of the Second International IEEE/Create-Net Workshop on Traffic Grooming*, 2005.
[35] S. Thiagarajan and A. Somani, “A capacity correlation model for WDM networks with constrained grooming capabilities,” *ICC 2001*, vol. 5, pp. 1592–1596, 2001.
[36] W. Yao, G. Sahin, M. Li, and B. Rammamurthy, “Analysis of multi-hop traffic grooming in WDM mesh networks,” *Proceedings of BroadNets, the Second International IEEE/Create-Net Conference on Broadband Networks*, 2005.
[37] G. Sasaki and T. Lin, “A minimal cost WDM network for incremental traffic,” in *Information Theory and Communications Workshop, Proceedings of the 1999 IEEE*, June 1999.
[38] R. Kandula and G. Sasaki, “Grooming of dynamic tributary traffic in WDM rings with rearrangements,” Presented at the 39th Annual Allerton Conference on Communication, Control, and Computing, Monticello IL, Oct 2001.
[39] K.-H. Liu and Y. Xu, “A new approach to improving the grooming performance with dynamic traffic in SONET rings,” *Computer Networks*, vol. 46, no. 2, pp. 181 – 95, 2004.
[40] W. Yao and B. Ramamurthy, “Constrained dynamic traffic grooming in WDM mesh networks with link bundled auxiliary graph model,” *2004 Workshop on High Performance Switching and Routing (IEEE Cat. No.04TH8735)*, pp. 287 – 91, 2004.
[41] W. Yao and B. Ramamurthy, “Rerouting schemes for dynamic traffic grooming in optical WDM mesh networks,” *GLOBECOM ’04. IEEE Global Telecommunications Conference (IEEE Cat. No.04CH37615)*, vol. 3, pp. 1793 – 7, 2004.
[42] H. Wen, R. He, L. Li, and S. Wang, “Dynamic traffic-grooming algorithms in wavelength-division-multiplexing mesh networks,” *Journal of Optical Networking*, vol. 2, no. 4, 2003.
[43] K. Elsayed, “Dynamic routing, wavelength, and fibre selection algorithms for multifibre WDM grooming networks,” *IEE Proceedings-Communications*, vol. 152, no. 1, pp. 119 – 27, 2005.
[44] N. Bhide, K. Sivalingam, and T. Fabry-Aztalos, “Routing mechanisms employing adaptive weight functions for shortest path routing in optical WDM networks,” *Photonic Network Communications*, vol. 3, pp. 227–236, 2001.
[45] O. Gerstel, R. Ramaswami, and G. Sasaki, “Cost-effective traffic grooming in WDM rings,” *IEEE/ACM Transactions on Networking*, vol. 8, no. 5, pp. 618 – 30, 2000.
[46] S. Zhang and B. Ramamurthy, “Dynamic traffic grooming algorithms for reconfigurable SONET over WDM networks,” *IEEE Journal on Selected Areas in Communications*, vol. 21, no. 7, pp. 1165 – 72, 2003.
[47] A. Gencata and B. Mukherjee, “Virtual-topology adaptation for WDM mesh networks under dynamic traffic,” *IEEE/ACM Transactions on Networking*, 2003.
[48] R. He, H. Wen, and L. Li, “Fairness-based dynamic traffic grooming in WDM mesh networks,” *APCC/MDMC ’04. The 2004 Joint Conference of the 10th Asia-Pacific Conference on Communications and the 5th International Symposium on Multi-Dimensional Mobile Communications Proceeding*, vol. 2, pp. 602 – 6, 2004.
[49] K. Mosharaf, J. Talim, and I. Lambadaris, “A call admission control for service differentiation and fairness management in WDM grooming networks,” *Proceedings. First International Conference on Broadband Networks*, pp. 162 – 9, 2004.
[50] E. A. Doumith, M. Gagnaire, O. Audouin, and R. Douville, “Network nodes dimensioning assuming electrical traffic grooming in an hybrid OXC/EXC WDM network,” *Proceedings of the Second International IEEE/Create-Net Workshop on Traffic Grooming*, pp. 177–187, 2005.
[51] M. El Houmaid, M. Bassiouni, and G. Li, “Optimal traffic grooming in WDM mesh networks under dynamic traffic,” *Optical Fiber Communication Conference (OFC) (IEEE Cat. No.04CH37532)*, vol. 2, 2004.
[52] I. Widjaja, I. Saniee, L. Qian, A. Elwalid, J. Ellson, and L. Cheng, “A new approach for automatic grooming of SONET circuits to optical express links,” *2003 IEEE International Conference on Communications (Cat. No.03CH37441)*, vol. 2, pp. 1407 – 11, 2003.
[53] R. Sabella, P. Iovanna, G. Oriolo, and P. D’Aprile, “Strategy for dynamic routing and grooming of data flows into lightpaths in new generation network based on the GMPLS paradigm,” *Photonic Network Communications*, vol. 7, no. 2, pp. 131 – 44, 2004.
[54] A. Chiu and E. Modiano, “Traffic grooming algorithms for reducing electronic multiplexing costs in WDM ring networks,” *IEEE Journal on Lightwave Technology*, 2000.
[55] R. Dutta, S. Huang, and G. Rouskas, “Optimal traffic grooming in elemental network topologies,” *Opticomm*, pp. 46–56, 2003.
[56] J.-Q. Hu, “Traffic grooming in WDM ring networks: A linear programming solution,” *Journal of Optical Networks*, vol. 1, 2002.
[57] H. Zhu, H. Zang, K. Zhu, and B. Mukherjee, “Dynamic traffic grooming in WDM mesh networks using a novel graph model,” *Optical Networks Magazine*, vol. 4, no. 3, pp. 65 – 75, 2003.
[58] F. Farahmand, X. Huang, and J. Jue, “Efficient online traffic grooming algorithms in WDM mesh networks with drop-and-continue node architecture,” *Proceedings. First International Conference on Broadband Networks*, pp. 180 – 9, 2004.
[59] Q.-D. Ho and M.-S. Lee, “Practical dynamic traffic grooming in large WDM mesh networks,” *Proceedings of the Second International IEEE/Create-Net Workshop on Traffic Grooming*, 2005.
[60] T.-T. N. Thi, T. T. Minh, Q.-D. Ho, and M.-S. Lee, “A time and cost efficient dynamic traffic grooming algorithm for optical mesh networks,” *Proceedings of the Second International IEEE/Create-Net Workshop on Traffic Grooming*, 2005.
[61] R. Cigno, E. Salvadori, and Z. Zsoka, “Elastic traffic effects on WDM grooming algorithms,” in *Globecom*, 2004.
[62] B. Chen, W.-D. Zhong, and S. Bose, “A path inflation control strategy for dynamic traffic grooming in IP/MPLS over WDM network,” *IEEE Communications Letters*, vol. 8, no. 11, pp. 680 – 2, 2004.
[63] B. Chen, S. Bose, and W.-D. Zhong, “Priority enabled dynamic traffic grooming,” *IEEE Communications Letters*, vol. 9, no. 4, pp. 366 – 8, 2005.
[64] I. Cerutti, A. Fumagalli, and S. Sheth, “Performance versus cost analysis of WDM networks with dynamic traffic grooming capabilities,” *Proceedings. 13th International Conference on Computer Communications and Networks (IEEE Cat. No.04EX969)*, pp. 425 – 30, 2004.
[65] R. Srinivasan and A. Somani, “Dynamic routing in WDM grooming networks,” *Photonic Network Communications*, vol. 5, no. 2, pp. 123 – 35, 2003.
| Reference | Description |
|-----------|-------------|
| [61] | Traffic Modeling. Elasticity of IP traffic impacts grooming algorithms. |
| [57] | Using an auxiliary graph with variable edge weights and grooming policies to achieve multiple goals. |
| [27] | Call Admission Control algorithm dealing with capacity fairness. |
| [62] | Various grooming policies combined with path inflation control. |
| [63] | Extension of path inflation control to provide differentiated services. |
| [25] | A sliding window traffic model that introduces the scheduling problem to dynamic traffic grooming. |
| [18] | Traffic grooming for multicast traffic. |
| [59] | An auxiliary graph approach with low time complexity. |
| [60] | A two-phase dynamic grooming algorithm using simplified auxiliary graphs. |
| [65] | A comprehensive study of routing algorithms in traffic grooming networks. |
| [26] | Comparison of typical ring architectures that guarantee non-blocking. |
| [23] | Minimization of the number of ADMs for $t$-allowable traffic. |
| [39] | Heuristics to solve the design problem in rings with given traffic matrices. |
| [28] | A genetic algorithm for strictly non-blocking grooming in unidirectional rings. |
| [24] | Design in mesh network using traffic time-varying state information. |
| [34] | An ILP formulation for mesh networks taking the blocking probability into consideration. |
| [16] | A mathematical model for routing and grooming with scheduled lightpath demands. |
| [50] | Extension of [16] by taking subwavelength traffic (scheduled electrical demand) into consideration. |
| [51] | The placement of grooming nodes formulated as a dominating set problem. |
| [40] | An auxiliary graph approach with link-bundling. |
| [41] | Rerouting algorithms for varying traffic demands. |
| [58] | Traffic grooming with a drop-and-continue node architecture. |
| [17] | An auxiliary graph approach that supports dynamic unicast traffic. |
| [42] | An auxiliary graph approach that introduces the virtual graph and layered graph. |
| [43] | Routing, wavelength assignment and fiber selection algorithms in multifiber WDM networks. |
| [38] | Dynamic traffic grooming with reconfiguration algorithms. |
| [52] | Making grooming decision by monitoring the link performance. |
| [47] | Reconfiguration of virtual-topology by monitoring the link load. |
| [53] | A layered dynamic routing strategy in GMPLS networks. |
| [29] | A Markov Decision Process model for dynamic wavelength allocation in 2-hop tandem networks. |
| [49] | Call Admission Control for subwavelength traffic demands formulated as Markov Decision Process Problems. |
| [48] | Auxiliary graph with fairness grooming policies. |
| [31] | Blocking probability in single-hop traffic grooming mesh networks. |
| [32] | Blocking probability in multi-hop traffic grooming mesh networks. |
| [30] | Blocking probability in tandem networks, an exact solution with multi-rate arrivals. |
Table 2: Summary of the literature
Post-Eruption Deformation Processes Measured Using ALOS-1 and UAVSAR InSAR at Pacaya Volcano, Guatemala
Lauren N. Schaefer 1,*, Zhong Lu 2 and Thomas Oommen 1
Received: 13 October 2015; Accepted: 8 January 2016; Published: 19 January 2016
Academic Editors: Norman Kerle and Prasad S. Thenkabail
1 Department of Geological and Mining Engineering and Sciences, Michigan Technological University, Houghton, MI 49931, USA; email@example.com
2 Roy M. Huffington Department of Earth Sciences, Southern Methodist University, Dallas, TX 75205, USA; firstname.lastname@example.org
* Correspondence: email@example.com; Tel.: +1-847-902-0422
Abstract: Pacaya volcano is a persistently active basaltic cone complex located in the Central American Volcanic Arc in Guatemala. In May of 2010, violent Volcanic Explosivity Index-3 (VEI-3) eruptions caused significant topographic changes to the edifice, including a linear collapse feature 600 m long originating from the summit, the dispersion of ~20 cm of tephra and ash on the cone, the emplacement of a 5.4 km long lava flow, and ~3 m of co-eruptive movement of the southwest flank. For this study, Interferometric Synthetic Aperture Radar (InSAR) images (interferograms) processed from both spaceborne Advanced Land Observing Satellite-1 (ALOS-1) and aerial Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) data acquired between 31 May 2010 and 10 April 2014 were used to measure post-eruptive deformation events. Interferograms suggest three distinct deformation processes after the May 2010 eruptions, including: (1) subsidence of the area involved in the co-eruptive slope movement; (2) localized deformation near the summit; and (3) emplacement and subsequent subsidence of the ~5.4 km long lava flow. The detection of several different geophysical signals emphasizes the utility of measuring volcanic deformation using remote sensing techniques with broad spatial coverage. Additionally, the high spatial resolution of UAVSAR has proven to be an excellent complement to satellite data, particularly for constraining motion components. Measuring the rapid initiation and cessation of flank instability, followed by stabilization and subsequent influence on eruptive features, provides a rare glimpse into volcanic slope stability processes. Observing these and other deformation events contributes both to hazard assessment at Pacaya and to the study of the stability of stratovolcanoes.
Keywords: volcano deformation; interferometric synthetic aperture radar; ALOS-1; UAVSAR
1. Introduction
The analysis of ground deformation at volcanoes has long been considered a crucial monitoring technique. Although geodetic systems such as the Global Positioning System (GPS) can be used to determine ground motion, the logistical challenges of field monitoring, including high costs and the vulnerability of monitoring equipment near active volcanic vents, have made remote sensing a valuable addition for monitoring volcanic deformation. Interferometric Synthetic Aperture Radar (InSAR), in which the differences in phase between two or more temporally spaced synthetic aperture radar (SAR) images are used to determine surface deformation, has become a widely used technique for measuring centimeter-scale deformation over large areas. This has allowed scientists to measure volcanic deformation at otherwise unmonitored, or unexpectedly deforming, volcanoes (e.g., [1–5]).
Using InSAR, much work has been devoted to connecting volcanic eruptions with geodetic signatures for eruption forecasting [6]. This has led to several inferences about the characteristics of magmatic plumbing systems and reservoirs (e.g., [5,7–12]). Some volcanoes have been known to show no deformation prior to eruption, while others deform without erupting, or exhibit deformation unrelated to magmatic intrusions, suggesting complex relationships between eruption behavior and resulting deformation (e.g., [13,14]). Here, we use satellite and aerial radar images to measure surface displacements following explosive eruptions in May 2010 at Pacaya volcano in Guatemala. L-band (wavelength = 23.6 cm) Advanced Land Observing Satellite-1 (ALOS-1) satellite imagery, available from 2008 to 2011, is compared to and supplemented with L-band (wavelength = 23.8 cm) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) aerial data, available since 2011.
UAVSAR is a miniaturized polarimetric synthetic aperture radar (SAR) system designed for repeat-pass deformation measurements from aerial platforms. Unlike typical Uninhabited Aerial Vehicle (UAV) sensors, which fly on unmanned platforms, the UAVSAR radar pod is mounted beneath a Gulfstream-III plane and flown by pilots over a specific path requested by NASA-approved researchers. While the radar is collecting data, on-board operators use the Platform Precision Autopilot (PPA) to control the plane so that it can repeat the same flight path within a 10 m diameter tube for hundreds of kilometers at a time [15]. This capability is important to scientists who want to track changes on the surface of the Earth from one flight to the next, using repeat-pass InSAR to study features such as glaciers, earthquakes, volcanoes, and landslides. The L-band radar onboard the UAVSAR operates over a frequency range of 1217.5–1297.5 MHz, i.e., a bandwidth of 80 MHz centered at 1257.5 MHz. The nominal flight altitude for UAVSAR is 12,500 m, and the typical image swath width is >23 km. The spatial resolution of the resulting UAVSAR single-look-complex (SLC) imagery is ~1.7 m in range and ~0.8 m in azimuth. More information on flight planning and operations is available on the UAVSAR website (http://uavsar.jpl.nasa.gov/). We discuss the utility of this precise topography-change monitoring technique for volcanic activity and use it to augment standard satellite techniques.
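As a rough consistency check on these system parameters, the radar wavelength and the theoretical slant-range resolution follow directly from the quoted frequency band (a minimal sketch; the 1257.5 MHz center frequency is simply the midpoint of the band):

```python
# Sketch: derive UAVSAR wavelength and slant-range resolution
# from the quoted band (1217.5-1297.5 MHz, i.e., 80 MHz bandwidth).
C = 299_792_458.0  # speed of light, m/s

f_low, f_high = 1217.5e6, 1297.5e6
bandwidth = f_high - f_low           # 80 MHz
f_center = (f_low + f_high) / 2.0    # 1257.5 MHz

wavelength = C / f_center            # ~0.238 m, matching the quoted 23.8 cm
slant_range_res = C / (2 * bandwidth)  # ~1.87 m, the same order as the ~1.7 m range pixel

print(f"wavelength = {wavelength * 100:.1f} cm")
print(f"slant-range resolution = {slant_range_res:.2f} m")
```

The computed wavelength reproduces the 23.8 cm quoted for UAVSAR above; the ~1.7 m range pixel of the SLC product is of the same order as this theoretical bandwidth-limited resolution.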
2. Background
Pacaya volcano, located in the Central American Volcanic Arc (CAVA) (Figure 1a) is a several thousand year old basaltic complex that has experienced eruptive episodes typically lasting a few centuries, with repose intervals of similar length [16,17]. One notable event was a large (0.6–0.8 km$^3$) sector collapse that occurred between 600 and 1500 years ago [18,19], the collapse scarp of which is still visible today (Figure 1b). The current eruptive episode began in 1961, with interspersed Strombolian eruptions, ash plumes, and effusive lava flows depositing material primarily on the west and southwest flank within the collapse scarp [20]. The majority of lava flows and explosive activity originates from the summit of the active cone; however, many of the high-volume lava flows tend to erupt from vents lower on the volcano’s flanks [21].
Using ALOS-1 data from 2007 to early 2010, an InSAR survey of the CAVA found no deformation faster than 27 mm/year that could be attributed to magmatic processes at any of the 20 historically active volcanoes [14]. At Pacaya, this was attributed to a high proportion of basalt ascending directly from the base of the crust without a period of crustal storage [14]. An InSAR investigation into the explosive eruptions of 27 and 28 May 2010, rated 3 on the Volcanic Explosivity Index (VEI), found that the southwest flank of the edifice moved ~3 m to the southwest during this two-day eruption [22]. This flank displacement, combined with the historic collapse, confirms that slope instability is a serious threat at Pacaya.
Figure 1. Location and major topographical features of Pacaya Volcano. (a) Pacaya volcano (red triangle) is located in the Central American Volcanic Arc (CAVA), with other CAVA volcanoes marked with black triangles; (b) Map showing the location of topographic features, including the summit vent, ancestral collapse scarp, and the linear collapse and lava flow as a result of the May 2010 eruptions. The colored boxes show the extent of Figures 2–5.
In addition to the slope movement, eruptive activity during the May 2010 eruptions produced several topographic changes to the edifice. On 27 May, a ~600 m long linear collapse oriented NNW initiated at the summit vent (Figure 1b); its origin is still debated [23]. On 28 May, a large lava flow erupted from 12 clustered vents to the SE of the summit at around 1800 m above sea level (asl), the first historic lava flow to originate outside of the collapse scarp (Figure 1b; [20]). According to reports by the National Coordinator for Disaster Reduction (CONRED) [24], the flow initially moved at a rate of 100 m/h. The flow rate slowed in the following days, to 15 m/h by 6 June and 1 m/h by 8 June, until ceasing on 30 June. The flow, one of the largest erupted in the last 55 years, traveled south for ~3 km before curving west at around 1300 m asl, following the local gradient. Ultimately, the flow reached ~5.4 km in length, covering an area of $1.6 \times 10^6$ m$^2$. Thickness estimates of 1–4 m result in a volume range of $1.6–6.4 \times 10^6$ m$^3$ [20]. With continual eruptive activity and evidence of slope instability, it is imperative to monitor deformation events at Pacaya volcano.
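The volume range quoted above is simply the mapped flow area multiplied by the thickness bounds, and the reported dates imply a mean advance rate well below the initial one; a quick back-of-envelope sketch:

```python
# Back-of-envelope check on the 2010 lava flow figures quoted from [20] and [24].
area_m2 = 1.6e6                 # mapped flow area
thickness_m = (1.0, 4.0)        # thickness bounds
volume_range = tuple(area_m2 * t for t in thickness_m)  # (1.6e6, 6.4e6) m^3

# Mean advance rate over the whole emplacement (28 May - 30 June = 33 days),
# far below the initial 100 m/h, consistent with the reported rapid slowing.
length_m, days = 5400.0, 33
mean_rate_m_per_h = length_m / (days * 24)

print(volume_range)
print(f"mean advance rate = {mean_rate_m_per_h:.1f} m/h")
```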
3. Radar Intensity Images
The intensity of each pixel in a single radar image represents the proportion of microwave energy backscattered from that area on the ground, giving information on the type, shape, roughness, orientation, and moisture content of the target area. At Pacaya, radar intensity images can be used to discern features such as the summit crater area, the 2010 collapse, the ancestral scarp, and the 2010 lava flow (Figure 2). The 2010 lava flow appears brighter than its surroundings due to its rough surface and its emplacement on the vegetated slope outside of the ancestral collapse scarp, which has different backscattering properties. The southern part of the lava flow becomes more difficult to distinguish as it moves into the lava flow fields within the ancestral scarp, suggesting that past and current flows have similar roughness (Figure 2b). The different orientations of the sloping walls of the 2010 collapse and the ancestral collapse scarp make these features easily distinguishable. Additionally, the change in crater size can be seen after the eruption: the crater diameter increased nearly six-fold to ~600 m (Figure 2b). Single SAR images from the ALOS-1 satellite are very noisy due to speckle, a phenomenon in which the resolution of the SAR sensor is not sufficient to resolve individual scatterers. This speckle noise can often be reduced by averaging several SAR images. However, comparing the averaged ALOS-1 scene (Figure 2c) to the UAVSAR images (Figure 2a,b) makes clear that a finer-resolution sensor is more suitable for distinguishing geological features (pixel size of ~0.6 m in azimuth and ~2.2 m in ground range for the original UAVSAR imagery, vs. ~3.2 m in azimuth and ~6.5 m in ground range for ALOS-1 SLC imagery over Pacaya volcano).
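Speckle reduction by averaging can be illustrated with a toy simulation: single-look intensity over a uniform target is approximately exponentially distributed, and averaging N independent images reduces the coefficient of variation by roughly $1/\sqrt{N}$ (a minimal sketch with synthetic data, not the actual processing applied to the ALOS-1 scenes):

```python
import random
import statistics

random.seed(42)

def speckled_image(n_pixels, mean_backscatter=1.0):
    """Single-look intensity over a uniform target: exponentially distributed."""
    return [random.expovariate(1.0 / mean_backscatter) for _ in range(n_pixels)]

def cv(img):
    """Coefficient of variation: std / mean."""
    return statistics.pstdev(img) / statistics.fmean(img)

n_pixels, n_looks = 5000, 8  # eight scenes, as with the averaged ALOS-1 image

single = speckled_image(n_pixels)
stack = [speckled_image(n_pixels) for _ in range(n_looks)]
averaged = [sum(col) / n_looks for col in zip(*stack)]  # pixel-wise average

print(f"CV single image:   {cv(single):.2f}")    # ~1.0 for fully developed speckle
print(f"CV {n_looks}-image average: {cv(averaged):.2f}")  # ~1/sqrt(8), i.e. ~0.35
```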

**Figure 2.** UAVSAR amplitude images of Pacaya volcano (a) before and (b) after the May 2010 eruptions clearly show changes to the edifice as a result of the eruption, including the emplacement of the 5.4 km long lava flow, an increase in the summit crater size, and the linear collapse oriented NNW from the summit; Panel (c) is an averaged image produced from eight ALOS-1 acquisitions between 2010 and 2011.
4. InSAR Observations
Using the two-pass InSAR approach (e.g., [25,26]), six interferograms of reasonably good coherence (where 50% of the study area maintains coherence greater than 0.3 in the unfiltered interferograms) were produced from satellite-based ALOS-1 SAR scenes from May 2010 to April 2011 (Table 1). NASA's Shuttle Radar Topography Mission (SRTM) 1-arc-second global digital elevation model (DEM), interpolated and re-sampled to a spacing of ~6 m, was used to correct for topography. Additionally, four interferograms from 2010 to 2014 were analyzed from aerial UAVSAR data, processed by the NASA Jet Propulsion Laboratory (JPL) but cropped to Pacaya's extent for this study. Interferograms were filtered and unwrapped using minimum cost flow and triangulation [27]. However, all interferograms are shown as wrapped phase, in which each fringe or color cycle (e.g., magenta-magenta) represents 11.8 cm or 11.9 cm of displacement in ALOS-1 and UAVSAR interferograms, respectively. Deformation is assumed to be primarily vertical (see justification later) and is calculated using geometric relations between the incidence angle of the radar at the surface (reported in Table 1) and the line-of-sight (LOS) displacement, i.e., the displacement along the direction between the SAR antenna and the point on the ground surface. UAVSAR images have nearly the opposite LOS direction from ALOS-1 images (Table 1). Fringe patterns were compared to other interferograms with larger baselines to discard any topography-related errors. As the perpendicular baselines of the interferograms used in this study are less than 341 m (Table 1), the altitude of ambiguity [25] is larger than ~190 m; hence, topography-induced errors in these interferograms can be neglected. No advanced analysis of GPS data is currently available for comparison with the deformation measured here [22].
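The geometric relations referred to above can be sketched numerically: one fringe corresponds to half the radar wavelength of LOS change, a LOS displacement maps to a vertical displacement through the incidence angle, and the altitude of ambiguity follows from the perpendicular baseline. A minimal sketch (the ~870 km ALOS-1 slant range below is an assumed illustrative value, not taken from the paper):

```python
import math

WAVELENGTH_ALOS = 0.236  # m (L-band)

# One fringe (color cycle) corresponds to half a wavelength of LOS change:
fringe = WAVELENGTH_ALOS / 2  # 0.118 m, i.e., the 11.8 cm quoted for ALOS-1

def los_to_vertical(d_los, incidence_deg):
    """Project a line-of-sight displacement onto the vertical,
    assuming purely vertical ground motion."""
    return d_los / math.cos(math.radians(incidence_deg))

print(f"one ALOS-1 fringe at 38 deg incidence = "
      f"{los_to_vertical(fringe, 38.0) * 100:.1f} cm of vertical motion")

def altitude_of_ambiguity(wavelength, slant_range, incidence_deg, b_perp):
    """Topographic height producing one fringe in a repeat-pass interferogram."""
    return wavelength * slant_range * math.sin(math.radians(incidence_deg)) / (2 * b_perp)

# With the maximum ~341 m perpendicular baseline (assumed ~870 km slant range):
h_a = altitude_of_ambiguity(WAVELENGTH_ALOS, 870e3, 38.0, 341.0)
print(f"altitude of ambiguity ~ {h_a:.0f} m")
```

Under these assumed geometry values the altitude of ambiguity comes out on the order of the ~190 m quoted in the text, so meter-scale DEM errors produce a negligible fraction of a fringe.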
**Table 1.** Interferogram pairs used in this study.
| Satellite | Flight Direction | Look Direction | Acquisition 1 | Acquisition 2 | Perpendicular Baseline (m) | Incidence Angle (°) | Temporal Baseline (days) | Figure |
|-----------|------------------|----------------|---------------|---------------|----------------------------|---------------------|--------------------------|--------|
| UAVSAR | S68° E to N48° W | left | 2010-01-29 | 2010-02-11 | – | 48 | 13 | Figure 3a |
| UAVSAR | S68° E to N48° W | right | 2010-01-29 | 2010-02-11 | – | 48 | 13 | Figure 3b |
| ALOS-1 | S31° E to N21° W | right | 2010-05-31 | 2010-07-16 | 91 | 42 | 46 | Figure 3c; Figure 4a; Figure 5a |
| ALOS-1 | S31° E to N21° W | right | 2010-06-29 | 2010-08-14 | 91 | 38 | 46 | Figure 3d; Figure 4b; Figure 5b |
| ALOS-1 | S31° E to N21° W | right | 2010-08-14 | 2010-09-29 | −44 | 38 | 46 | Figure 5c |
| ALOS-1 | S31° E to N21° W | right | 2010-09-29 | 2010-12-30 | −44 | 38 | 92 | Figure 5d |
| ALOS-1 | S31° E to N21° W | right | 2010-12-30 | 2011-02-14 | 541 | 38 | 46 | Figure 5e |
| ALOS-1 | S31° E to N21° W | right | 2011-02-14 | 2011-04-01 | 38 | 38 | 46 | Figure 5f |
| UAVSAR | S68° E to N48° W | left | 2011-04-26 | 2013-03-08 | – | 48 | 682 | Figure 3f; Figure 4d; Figure 5g |
| UAVSAR | S68° E to N48° W | left | 2013-04-02 | 2014-04-10 | – | 48 | 373 | Figure 5h |
After the co-eruptive slope movement of the southwest flank on 27 and 28 May (Figure 3b), flank-wide subsidence appeared in the first SAR image acquired on 31 May 2010, three days after the eruptions. This deformation is not seen in ALOS-1 interferograms before the eruptions (Figure 3a). The spatial extent of the deformation is very similar to that of the original slope movement, marked with a dashed line in Figure 3a–f. Additionally, the pattern is similar between the ALOS-1 (Figure 3c) and UAVSAR (Figure 3f) interferograms, which have nearly opposite looking geometries. This suggests that the deformation observed after the slide was likely dominated by vertical subsidence. The greatest magnitude of subsidence, ~18 cm, occurs near the summit and reduces along the slope to ~2 cm at the toe of the slide. These values subsequently decrease in magnitude and area through early 2011 (Figure 3d,e). A UAVSAR interferogram that spans 26 April 2011–8 March 2013, shown with magnified fringe values in Figure 3f, shows that this area continued to subside at slow rates well beyond the spring of 2011.
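The inference that matching fringe patterns from opposite-looking geometries imply mostly vertical motion can be made concrete: two LOS observations with opposing look directions form a 2×2 linear system in the vertical and cross-track horizontal motion components. A simplified flat-geometry sketch with synthetic numbers (not the authors' processing):

```python
import math

def decompose(d_los_right, d_los_left, inc_right_deg, inc_left_deg):
    """Solve for (vertical, horizontal) motion from two opposite-looking LOS
    observations, using d_los = U*cos(inc) -/+ E*sin(inc) for right/left looks."""
    cr, sr = math.cos(math.radians(inc_right_deg)), math.sin(math.radians(inc_right_deg))
    cl, sl = math.cos(math.radians(inc_left_deg)), math.sin(math.radians(inc_left_deg))
    # [cr  -sr] [U]   [d_right]
    # [cl   sl] [E] = [d_left ]
    det = cr * sl + cl * sr
    up = (d_los_right * sl + d_los_left * sr) / det
    east = (cr * d_los_left - cl * d_los_right) / det
    return up, east

# Synthetic check: purely vertical 18 cm subsidence viewed at ALOS-like (38 deg,
# right-looking) and UAVSAR-like (48 deg, left-looking) geometries produces
# same-sign LOS changes, and the inversion recovers zero horizontal motion.
u_true, e_true = -0.18, 0.0
d_r = u_true * math.cos(math.radians(38)) - e_true * math.sin(math.radians(38))
d_l = u_true * math.cos(math.radians(48)) + e_true * math.sin(math.radians(48))
up, east = decompose(d_r, d_l, 38, 48)
print(f"up = {up:.3f} m, horizontal = {east:.3f} m")
```

A large horizontal component would instead flip the sign of the fringes between the two geometries, which is not observed in Figures 3c and 3f.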
In addition to the subsidence of the southwest flank, a localized deformation event on the cone appears after the May 2010 eruptions (Figure 4). This feature, elongated NW-SE, appears north of the summit and was contained mostly within the ancestral collapse scarp except for a portion to the east. These fringes are difficult to differentiate from the flank subsidence in the ALOS-1 interferograms immediately post-eruption (Figure 4a) but become clearer in subsequent months as flank subsidence diminishes (Figure 4b,c). Interferograms measure a maximum of ~25 cm of deflation between 31 May 2010 and 16 July 2010 (Figure 4a), which reduces in magnitude to ~10 cm over 682 days between 2011 and 2013 (Figure 4d). This subsidence differs in magnitude from that measured on the flank and continues well beyond post-slide settlement, indicating a separate source. Several lava flows were deposited to the north of the summit, with selected flows shown in Figure 4e, which cover some of the spatial extent of the deformation described.
Figure 3. Slope deformation surrounding the May 2010 eruptions, with the black dashed line marking the outline of the slope instability in panel 3b. (a) No deformation of the southwest slope is seen prior to the eruption; (b) During the eruptions on 27 and 28 May, ~3 m of LOS slope displacement can be seen on the southwest sector of the edifice (modified from [22]); (c) Immediately after the eruption, subsidence encompasses a similar extent of the southwest flank that moved during the slide; (d,e) This deformation decreases in magnitude and spatial extent until late December; (f) Deformation measured using the first available UAVSAR data after the May 2010 eruption, spanning 26 April 2011–8 March 2013, is magnified to show the similarities in the fringe pattern over the southwest flank of the volcano. The similar fringe patterns between this UAVSAR interferogram and the ALOS-1 interferogram in Figure 3c suggest that the deformation observed after the slide is likely dominated by vertical subsidence, as the sensors have nearly opposite looking geometries.
Figure 4. Localized deformation to the north of the summit. (a) Several fringes to the north of the summit are difficult to discern from subsidence of the southwest flank in the ALOS-1 interferogram immediately after the eruptions on 27 and 28 May; (b,c) This signal becomes the only deformation event in subsequent months, reducing in magnitude and spatial extent; (d) Fringes are more distinct in a later UAVSAR interferogram, which shows that the area was still subsiding at very slow rates (~10 cm over 682 days) between 2011 and 2013; (e) Several recent lava flows (outlined from Landsat satellite images and [20]) have flowed to the north of the summit, the contraction of which could account for some of the measured deformation.
A final discernible deformation process is shown in eight interferograms in Figure 5, which depict the emplacement (Figure 5a) and subsequent subsidence (Figure 5b–h) of the 2010 lava flow (outlined in Figure 1b). The interferogram covering the time interval immediately after the eruption (31 May–16 July) additionally contains several ovoid fringes of inflation oriented NNW-SSE near the 2010 lava flow vents (zoomed view, Figure 5a). The fringes, which represent ~16 cm of LOS range shortening (corresponding to inflation if the deformation is assumed vertical), extend from the SE slope over the ancestral collapse scarp at around 1700–1900 m asl. This area of inflation is directly related to lava flow emplacement, as these fringes are not seen in ALOS-1 interferograms prior to (Figure 3a) or after (Figure 3d) the effusive eruption. The elongated shape suggests that a diking event forced magma to the surface, resulting in inflation near the vents.
Much of the lava flow becomes coherent in July (Figure 5b), which agrees with CONRED reports of emplacement stating that the flow began on 28 May and ceased movement 33 days later, on 30 June. From July 2010 to April 2014, the lava flow shows persistent subsidence that varies in rate but decreases over time (Figure 6 and Table 2). The rate of subsidence is assumed to be constant over the time spanned by each interferometric pair, and the motion is presumed to be primarily vertical, as interferograms of opposite geometries (ALOS vs. UAVSAR) show essentially the same sense of motion. The deformation is contained within the boundaries of the flow, and interferograms with smaller baselines in previous studies show no deformation in this area outside the scarp from 2007 to 2010 [22]. Thus, the deformation can be directly related to subsidence of the lava flow. The area of maximum subsidence (cross section A-A', Figures 5d and 6) is likely the thickest area of the flow [28]. This area corresponds with a decrease in the topographic gradient, which could cause lava pooling. The profiles of range change rate (Figure 6) also show higher subsidence rates in the middle of the flow, implying that it is thicker in the center than at the edges. The difference in subsidence rates along the flow suggests that the thickness is not uniform. UAVSAR data show that the lava flow was still subsiding in its thickest part between 2013 and 2014, several years after emplacement.

**Figure 5.** InSAR images showing post-emplacement surface movement of the lava flow that began on 28 May 2010, with the approximate extent of the lava flow outlined. (a) Emplacement of the flow continues until 30 June; the flow then subsides at a decreasing rate over time, shown in the following time frames: (b) 29 June 2010–14 August 2010; (c) 14 August 2010–29 September 2010; (d) 29 September 2010–30 December 2010; (e) 30 December 2010–14 February 2011; (f) 14 February 2011–1 April 2011; (g) 26 April 2011–8 March 2013; and (h) 2 April 2013–10 April 2014. Line A-A' in 5d crosses the approximate area of maximum subsidence, with values in Table 2 and a time series of range change rate profiles along this line in Figure 6.
Table 2. Maximum deformation rate (in cm/day) of the 2010 lava flow shows that the rate of subsidence, and thus the rate of cooling, decreases over time. Loss of coherence due to emplacement prevented measurements until more than 78 days after the initiation of the flow.
| i.d. (see Figure 6) | Cumulative Days Post-Emplacement *,† | Maximum Deformation (cm/day) |
|---------------------|-------------------------------------|-----------------------------|
| a | 49 | N/A |
| b | 78 | N/A |
| c | 124 | −0.11 |
| d | 216 | −0.06 |
| e | 262 | −0.02 |
| f | 308 | −0.02 |
| g | 1015 | −0.02 |
| h | 1413 | −0.006 |
* End of the time interval spanned by each interferogram; † start of emplacement: 28 May 2010.
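Integrating the piecewise-constant rates in Table 2 over their intervals gives a rough lower bound on cumulative subsidence at the point of maximum deformation since coherence was regained (a sketch using only the tabulated values; intervals without a measurement contribute nothing):

```python
# (cumulative days at end of interval, maximum deformation rate in cm/day);
# None marks intervals where loss of coherence prevented a measurement.
table2 = [(49, None), (78, None), (124, -0.11), (216, -0.06),
          (262, -0.02), (308, -0.02), (1015, -0.02), (1413, -0.006)]

total_cm = 0.0
prev_day = 0
for day, rate in table2:
    if rate is not None:
        total_cm += rate * (day - prev_day)  # rate assumed constant over the interval
    prev_day = day

print(f"cumulative maximum subsidence >= {abs(total_cm):.1f} cm")
```

This is only a lower bound, since the first 78 days of (presumably fastest) subsidence are unmeasured and the tabulated rate applies only at the point of maximum deformation.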
Figure 6. Deformation rate of cross section A-A' in Figure 5d shows that lava flow subsidence is greatest in the center, which can be inferred to be the thickest part of the flow.
5. Discussion
Three distinct deformation processes have been measured after the 2010 eruption: subsidence of material involved in the co-eruptive landslide event, localized deformation near the summit, and emplacement and subsequent subsidence of the 2010 lava flow. These will be discussed in turn.
5.1. Post-Sliding Flank Deformation
Interferometric signals due to degassing or magma withdrawal typically form a broader, more symmetrical fringe pattern, indicative of a relatively deep source (i.e., the depth of the magma reservoir, e.g., [12]). This is not what we observe after the eruption; instead, the fringe patterns suggest that the source causing the deformation is rather surficial (Figure 3c). Thus, we can discount degassing and/or magma withdrawal as the cause of the subsidence measured on the southwest flank. A lack of inflation prior to the eruption additionally suggests that either magma was transported directly from depth (e.g., [14,29]) or that it was already present in the shallow crust prior to the eruption. The latter has been suggested by several authors. Eggers [16] originally proposed that the petrographic and chemical uniformity of Pacaya's lavas through time implied a continuous supply of magma to an open conduit. This is further supported by excessive degassing at the volcano [30–34], which implies that Pacaya has a substantial convecting and circulating magma body near the surface that degasses without erupting completely at the surface [21,33]. Additionally, gravity changes measured by Eggers [35] were accounted for by density changes caused by very shallow magma bodies (depths of 100 to 200 m below the surface).
The measured subsidence is confined to the southwest flank and covers the same extent as the material involved in the co-eruptive sliding event. During the May 2010 eruptions, the majority of tephra was distributed to the north by prevailing winds, as evidenced by the coherence of the lower flanks of the edifice in the interferogram that spans the eruption (Figure 3b). This confirms that erupted material did not reach some areas of post-eruption subsidence, making it unlikely that the measured subsidence on the southwest flank is due to thermal contraction of hot erupted material alone (e.g., [36]). This subsidence continues for several months; however, it decreases in rate and becomes confined to the upper flanks of the cone (Figure 3d,e). It is therefore likely due to mechanical clast repacking in response to co-eruptive displacement as material stabilizes and consolidates, with greater repacking in areas of larger prior slope displacement. However, we cannot discount some component of thermoelastic contraction on the upper flanks due to cooling of erupted material.
Pre- and post-eruptive interferograms reveal that flank instability is confined to the May eruptions [22]. This is distinct from other measurements of volcanic flank movement, which have typically been attributed to relatively slow (10 cm or less) and steady (over up to tens of years) gravitational creep (i.e., [37] and references therein). Instead, movement appears to both initiate and cease rapidly, followed by dominantly vertical subsidence as the slope stabilizes. The lack of gravitational instability fits with numerical models of flank collapse at Pacaya, which show that the edifice is stable under gravitational forces alone but could be destabilized by magmatic or seismic forces [25]. Thus, either the edifice load is not great enough to induce gravitational creep, or the failure plane is not well established. Instead, flank movement seems to be directly linked to overpressure associated with a magmatic intrusion (i.e., Piton de la Fournaise [29]). The spatial variability in subsidence rates probably reflects heterogeneity in the sliding movement. There is no obvious thrust feature at the base of the southwest slope to accommodate the large motion; such a feature is likely obscured by the re-organization of the relatively fresh deposits in this area (the map of which is available in [20]).
5.2. Localized Deflation
Immediately after the eruption, fringes to the north of the summit appear to be a continuation of the material subsidence (Figure 4a), suggesting that this area may also have been involved in co-eruptive flank displacement (this information being lost in co-eruptive interferograms due to incoherence, [22]). In later interferograms, this signal becomes an elongated feature oriented NW-SE. In addition to becoming constrained to the upper flanks of the northern sector, it also has subsidence rates different from surrounding areas, including the southwest flank. Thus, as time progresses we can consider this continuing subsidence feature a distinct event from the initial southwest flank displacement. Deformation appears as separate lobes with different rates (Figure 4b,c), making it likely that these lobes reflect the subsidence of historic lava flows deposited to the north of the cone (Figure 4e). However, the complicated signal makes it difficult to discern specific flows or measure their compaction rates. An additional possibility is that this area is experiencing substrate compaction due to continued emplacement of lava and tephra [38]. Alternatively, continuing subsidence may indicate crystallization and cooling of newly intruded magma in the shallow subsurface [10,39].
5.3. Lava Flow Emplacement and Subsidence
The rate of subsidence on the 2010 flow varied both spatially and temporally, with the maximum subsidence rate initially ~0.1 cm/year after emplacement, decreasing to ~0.02 cm/year between 2013 and 2014. This subsidence rate is low compared to other measurements made using InSAR, such as the 10 cm of subsidence measured in 44 days at Okmok [28]. This is likely due to differences in lava flow thickness; the Okmok flow is estimated to be 20 m thick [40], while the 2010 flow at Pacaya is estimated to be 1–4 m [20]. Incoherence within the flow boundaries of the ALOS-1 interferogram immediately after the eruptions, spanning 31 May through 16 July (Figure 5a), shows that the flow surface was changing enough to destroy coherence during the first ~1.5 months after emplacement, presumably as a result of the mechanical movement of unstable blocks. This fits with field reports from CONRED, which state that the flow ceased on 30 June.
The thickest area of the flow becomes coherent 78 days after initiation (Figure 5c), with the majority of the flow becoming coherent only 32 days after initiation (Figure 5b). Due to the temporal resolution of ALOS-1 data (46 days), these dates cannot be further constrained. However, this timeframe falls at the lower end of the range reported in other InSAR studies; Kīlauea flows reached coherence 13 to 903 days after emplacement [41], while Okmok flows reached full coherence ~3 years after emplacement [28]. At Pacaya, the rapid return to coherence is likely due to the rapid emplacement and relative thinness (1–4 m) of the 2010 flow.
Post-emplacement lava flow deformation has been attributed to several processes, including thermal contraction and consolidation [42], substrate compaction [38], and clast repacking or gravity-driven compaction [43]. It is difficult to determine the exact cause of subsidence for the 2010 lava flow without more frequent temporal sampling of subsidence and temperature measurements, but the measured deformation is likely related to both thermal and mechanical processes shortly after emplacement and primarily to cooling of the flow thereafter. Given the full spatial coverage of the flow provided by InSAR, thickness estimates of the flow, which currently range from 1 m to 4 m [21], could be better constrained (i.e., [43]), ultimately enabling remote monitoring of lava flow extrusion rates. Given the high effusive activity since 1961 [20], this could improve understanding of effusive eruption dynamics over time.
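Where subsidence is dominated by cooling, flow thickness can in principle be inverted from the cumulative contraction. A minimal back-of-the-envelope sketch of this idea (all parameter values are assumed "textbook" numbers for basalt, not taken from this study):

```python
# Illustrative only: first-order estimate of total subsidence from thermal
# contraction alone for a thin basaltic lava flow.
ALPHA_V = 3e-5    # volumetric thermal expansion coefficient, 1/K (assumed)
DELTA_T = 1000.0  # cooling from near-emplacement to ambient, K (assumed)

def contraction_subsidence(thickness_m: float) -> float:
    """Vertical shortening (m) of a flow of given thickness due to cooling alone."""
    return thickness_m * ALPHA_V * DELTA_T

# Pacaya 2010 flow thickness estimates span 1-4 m:
for h in (1.0, 4.0):
    print(f"{h:.0f} m thick flow -> ~{contraction_subsidence(h) * 100:.0f} cm total")
```

Comparing such a prediction against the cumulative InSAR-measured subsidence is one way the 1–4 m thickness range could be narrowed, as suggested above; any mechanical repacking component would add to the purely thermal signal.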
Mapping of discontinuities along the ancestral collapse scarp has shown that Pacaya is subjected both to the E-W extensional regime of the Guatemala City graben to the north and to the NW-striking Jalpatagua shear zone, resulting in a transtensional local stress regime with an ENE-WSW $\sigma_3$ component [23]. Regional stress regimes are known to favor vertical magma migration by propagation of dikes through pre-existing cracks or faults, which reduce overpressure [44]. This could help explain the NNW-SSE-elongated dike intrusion (Figure 5a), which stretches over the ancestral collapse scarp, following the edge of the slope displacement. This suggests that dike propagation was controlled by both the local transtensional stress regime and the reduction of overpressure resulting from the southwest flank displacement. It also suggests that the topographic anomaly of the ancestral collapse scarp is not the primary mechanism controlling vent emplacement.
A NNW dike system is further evidenced by cracking along this theoretical weakness zone as seen after the May 2010 eruptions and by the continued emplacement of vents aligned in a NNW orientation over time (Figure 7a). Thus, injection of magma along this orientation during the 2010 eruptions could have provided the force needed to induce the perpendicularly oriented slope movement. The 2010 collapse could have been a result of either the intrusion or the slope movement. In 2014, lava flow vents continued to open along this trend to the NNW of the 2010 clustered vents (Figure 7a), resulting in the emplacement of a 4.3 km-long lava flow (Figure 7b). As this system continues to “unzip,” these intrusions and subsequent vent openings could facilitate future slope movement to the SW. Geomechanical studies have found that continued dike intrusions can impart mechanical damage to the rock [45], weakening the edifice and increasing the probability of slope instability. Additionally, if several pathways bleed the main conduit, the lack of localized accumulation of degassed magma may not cause deformation measurable above InSAR detection limits until magma reaches the surface [46], such as the diking event near the 2010 vents (Figure 5a). This could account for the lack of measurable inflation due to magma intrusion into the cone prior to the 2010 eruptions (Figure 3a, [22]).
Eruptions of high-volume lava flows from lower flank vents are a recurring event at Pacaya (Figure 7b, [21]) and have been attributed to the existence of shallow magma stored in the cone [20].
The high effusion rates of these flank lava flows (~5 m$^3$/s, [47]) suggest that a hydrostatic effect causes the upper portion of the conduit to drain (the steady supply rate of magma to the conduit being ~0.09 m$^3$/s). Therefore, material that did not erupt through the summit vent during the May 2010 eruptions could have partially drained into the 2010 lava flow. Although the loss of coherence over the summit area prevents us from seeing any co-eruptive deflation related to this type of event (Figure 3b), it is likely that the high magnitude of flank displacement would overshadow this signal in any case.
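As a rough volume balance (illustrative only; the effusion duration $t$ is not specified here), the volume drained from shallow storage is the effusion in excess of the steady supply:

```latex
V_{\mathrm{drained}} \;\approx\; \left(Q_{\mathrm{eff}} - Q_{\mathrm{supply}}\right) t
\;\approx\; \left(5 - 0.09\right)\ \mathrm{m^{3}/s} \;\times\; t ,
```

so at these rates roughly $4.2 \times 10^{5}$ m$^3$ of stored magma would be tapped per day of effusion, consistent with flank flows draining the upper conduit far faster than it is resupplied.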
**Figure 7.** Eruptive trends at Pacaya. (a) A continued NNW-oriented pattern of vents and cracks is seen post-2010 eruption. Intrusions along this weakness zone could have provided the push to induce slope displacement during the May 2010 eruption. Cracks were mapped using Google Earth aerial images (inset image taken in December 2010, courtesy of Google 2015 DigitalGlobe); (b) High volume lava flows tend to erupt from lower flank vents such as in 1961, 1975, 2010, and 2014, suggesting a cyclical draining of a shallow magma system in the cone. Outlines for the 1961 and 1973 lava flows from [21].
### 6. Conclusions
InSAR has revealed several deformation events at Pacaya Volcano, Guatemala, after the eruptions of 27 and 28 May 2010. Three meters of co-eruptive displacement of the southwest flank is followed by several months of material consolidation, giving a rare glimpse of flank stabilization processes. The rapid initiation and cessation of flank displacement is distinct from other measurements of volcanic flank movement, which have typically been attributed to gravitational creep. Localized deformation near the summit likely reflects the subsidence of historic lava flows deposited to the north of the cone, although the complicated signal makes it difficult to discern specific flows. Alternatively, this feature may indicate slow crystallization and cooling of newly intruded magma, which implies a complex conduit system. Interferograms also reveal the emplacement and subsequent subsidence of a 5.4-km-long lava flow, the first historic flow to erupt outside of the ancestral collapse scarp. We attribute its eruption location to both the local transtensional stress regime and the structurally weak zone that resulted from southwest flank displacement. The recurrence of these high-volume lava flows erupting from flank vents additionally suggests a cyclical pattern of high-level magma systems draining into lower vents. The existence of shallow magma within the cone has serious implications for the already unstable volcano. This is particularly true given the repeated pattern of magma intrusion along a NNW orientation, perpendicular to the flank displacement. The utility of InSAR for measuring the complex geophysical signals at Pacaya is evident given the variety of deformation measurements revealing both eruptive and non-eruptive behavior. In particular, the high spatial resolution of aerial
UAVSAR has proven to be an excellent complement to satellite data, particularly for constraining motion components. Although direct measurement comparisons between the two sensors are limited by the lack of temporal overlap, future ALOS-2 missions could benefit from complementary UAVSAR acquisitions. The deformation detected at Pacaya supports the future use of InSAR at this and other potentially unstable volcanoes.
**Acknowledgments:** Lauren N. Schaefer acknowledges support provided by the NASA Earth and Space Science Fellowship Program (NNX13AO50H) and the AEG (Association of Environmental and Engineering Geologists) Foundation. Zhong Lu acknowledges support from the NASA Earth Surface and Interior Program (NNX14AQ95G) and the Shuler-Foscue Endowment at Southern Methodist University (Dallas, TX, USA). ALOS-1 (Advanced Land Observing Satellite) data were acquired from the Japan Aerospace Exploration Agency via the Alaska Satellite Facility through the WinSAR (Western North America Interferometric Synthetic Aperture Radar) Consortium, and were processed using Gamma software (http://www.gamma-rs.ch/). UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) interferograms were courtesy of the NASA Jet Propulsion Laboratory-California Institute of Technology. The authors wish to thank three anonymous reviewers for helping to improve this manuscript.
**Author Contributions:** Lauren N. Schaefer, Zhong Lu, and Thomas Oommen conceptualized the manuscript and provided revision throughout the study. Lauren N. Schaefer drafted the manuscript. Zhong Lu and Lauren N. Schaefer processed InSAR data.
**Conflicts of Interest:** The authors declare no conflict of interest.
**References**
1. Biggs, J.; Anthony, E.; Ebinger, C. Multiple inflation and deflation events at Kenyan volcanoes, East African Rift. *Geology* **2009**, *37*, 979–982. [CrossRef]
2. Lu, Z.; Wicks, C.; Dzurisin, D.; Power, J.A.; Moran, S.C.; Thatcher, W. Magmatic inflation at a dormant stratovolcano: 1996–1998 activity at Mount Peulik volcano, Alaska, revealed by satellite radar interferometry. *J. Geophys. Res. B Solid Earth (1978–2012)* **2002**, *107*. [CrossRef]
3. Amelung, F.; Jonsson, S.; Zebker, H.; Segall, P. Widespread uplift and “trapdoor” faulting on Galapagos volcanoes observed with radar interferometry. *Nature* **2000**, *407*, 993–996. [PubMed]
4. Wicks, C.W.; Dzurisin, D.; Ingebritsen, S.; Thatcher, W.; Lu, Z.; Iverson, J. Magmatic activity beneath the quiescent Three Sisters volcanic center, central Oregon Cascade Range, USA. *Geophys. Res. Lett.* **2002**, *29*. [CrossRef]
5. Pritchard, M.E.; Simons, M. A satellite geodetic survey of large-scale deformation of volcanic centres in the central Andes. *Nature* **2002**, *418*, 167–171. [CrossRef] [PubMed]
6. Segall, P. Volcano deformation and eruption forecasting. *Geol. Soc. Lond. Spec. Publ.* **2013**, *380*, 85–106. [CrossRef]
7. Massonnet, D.; Briole, P.; Arnaud, A. Deflation of Mount Etna monitored by spaceborne radar interferometry. *Nature* **1995**, *375*, 567–570. [CrossRef]
8. Wicks, C.; Thatcher, W.; Dzurisin, D. Migration of fluids beneath Yellowstone caldera inferred from satellite radar interferometry. *Science* **1998**, *282*, 458–462. [CrossRef] [PubMed]
9. Zebker, H.A.; Amelung, F.; Jonsson, S. Remote sensing of volcano surface and internal processes using radar interferometry. In *Remote Sensing of Active Volcanism*; American Geophysical Union: Washington, DC, USA, 2013; pp. 179–205.
10. Lu, Z.; Dzurisin, D. InSAR imaging of Aleutian Volcanoes. In *InSAR Imaging of Aleutian Volcanoes*; Springer: Berlin, Germany, 2014; pp. 87–345.
11. Fournier, T.; Pritchard, M.; Riddick, S. Duration, magnitude, and frequency of subaerial volcano deformation events: New results from Latin America using InSAR and a global synthesis. *Geochem. Geophys. Geosyst.* **2010**, *11*. [CrossRef]
12. Pritchard, M.E.; Simons, M. An InSAR-based survey of volcanic deformation in the southern Andes. *Geophys. Res. Lett.* **2004**, *31*. [CrossRef]
13. Biggs, J.; Ebmeier, S.; Aspinall, W.; Lu, Z.; Pritchard, M.; Sparks, R.; Mather, T. Global link between deformation and volcanic eruption quantified by satellite imagery. *Nat. Commun.* **2014**, *5*. [CrossRef] [PubMed]
14. Ebmeier, S.; Biggs, J.; Mather, T.; Amelung, F. On the lack of InSAR observations of magmatic deformation at Central American volcanoes. *J. Geophys. Res. Solid Earth* **2013**, *118*, 2571–2585. [CrossRef]
15. Hensley, S.; Wheeler, K.; Sadowy, G.; Miller, T.; Shaffer, S.; Muellerschoen, R.; Jones, C.; Zebker, H.; Madsen, S.; Rosen, P. Status of a UAVSAR designed for repeat pass interferometry for deformation measurements. In Proceedings of the 2005 IEEE MTT-S International Conference on Microwave Symposium Digest, Long Beach, CA, USA, 12–17 June 2005.
16. Eggers, A.A. *The Geology and Petrology of the Amatitlán Quadrangle, Guatemala*; Dartmouth College: Hanover, NH, USA, 1971.
17. Conway, F.M.; Diehl, J.F.; Matias, O. Paleomagnetic constraints on eruption patterns at the Pacaya composite volcano, Guatemala. *Bull. Volcanol.* **1992**, *55*, 25–32. [CrossRef]
18. Kitamura, S.; Matias, O. Tephra stratigraphic approach to the eruptive history of Pacaya volcano, Guatemala. *Sci. Rep., Tohoku Univ. Seventh Ser. Geogr.* **1995**, *45*, 1–41.
19. Vallance, J.W.; Siebert, L.; Rose, W.I.; Girón, J.R.; Banks, N.G. Edifice collapse and related hazards in Guatemala. *J. Volcanol. Geotherm. Res.* **1995**, *66*, 337–355. [CrossRef]
20. Matías Gómez, R.O.; Rose, W.I.; Palma, J.L.; Escobar-Wolf, R. Notes on a Map of the 1961–2010 Eruptions of Volcán de Pacaya, Guatemala. *Geol. Soc. Am. Digit. Map Chart Ser.* **2012**, *10*. [CrossRef]
21. Rose, W.I.; Palma, J.L.; Wolf, R.E.; Gomez, R.O.M. A 50 year eruption of a basaltic composite cone: Pacaya, Guatemala. *Geol. Soc. Am. Spec. Pap.* **2013**, *498*, 1–21.
22. Schaefer, L.; Lu, Z.; Oommen, T. Dramatic volcanic instability revealed by InSAR. *Geology* **2015**, *43*, 743–746. [CrossRef]
23. Schaefer, L.N.; Oommen, T.; Corazzato, C.; Tibaldi, A.; Escobar-Wolf, R.; Rose, W.I. An integrated field-numerical approach to assess slope stability hazards at volcanoes: the example of Pacaya, Guatemala. *Bull. Volcanol.* **2013**, *75*, 1–18. [CrossRef]
24. CONRED. Coordinadora Nacional Para la Reducción de Desastres. Available online: http://www.conred.gob.gt/ (accessed on 1 October 2015).
25. Massonnet, D.; Feigl, K.L. Radar interferometry and its application to changes in the Earth’s surface. *Rev. Geophys.* **1998**, *36*, 441–500. [CrossRef]
26. Rosen, P.; Hensley, S.; Joughin, I.R.; Li, F.K.; Madsen, S.N.; Rodriguez, E.; Goldstein, R.M. Synthetic aperture radar interferometry. *Proc. IEEE* **2000**, *88*, 333–382. [CrossRef]
27. Costantini, M. A novel phase unwrapping method based on network programming. *IEEE Trans. Geosci. Remote Sens.* **1998**, *36*, 813–821. [CrossRef]
28. Lu, Z.; Masterlark, T.; Dzurisin, D. Interferometric synthetic aperture radar study of Okmok volcano, Alaska, 1992–2003: Magma supply dynamics and postemplacement lava flow deformation. *J. Geophys. Res. Solid Earth (1978–2012)* **2005**, *110*. [CrossRef]
29. Sigmundsson, F.; Durand, P.; Massonnet, D. Opening of an eruptive fissure and seaward displacement at Piton de la Fournaise volcano measured by RADARSAT satellite radar interferometry. *Geophys. Res. Lett.* **1999**, *26*, 533–536. [CrossRef]
30. Wallace, P.J. Volatiles in subduction zone magmas: concentrations and fluxes based on melt inclusion and volcanic gas data. *J. Volcanol. Geotherm. Res.* **2005**, *140*, 217–240. [CrossRef]
31. Shinohara, H. Excess degassing from volcanoes and its role on eruptive and intrusive activity. *Rev. Geophys.* **2008**, *46*. [CrossRef]
32. Andres, R.; Rose, W.; Stoiber, R.; Williams, S.; Matias, O.; Morales, R. A summary of sulfur dioxide emission rate measurements from Guatemalan volcanoes. *Bull. Volcanol.* **1993**, *55*, 379–388. [CrossRef]
33. Rodriguez, L.A.; Watson, I.M.; Rose, W.I.; Branan, Y.K.; Bluth, G.J.; Chigna, G.; Matias, O.; Escobar, D.; Carn, S.A.; Fischer, T.P. SO$_2$ emissions to the atmosphere from active volcanoes in Guatemala and El Salvador, 1999–2002. *J. Volcanol. Geotherm. Res.* **2004**, *138*, 325–344. [CrossRef]
34. Walker, J.A.; Roggensack, K.; Patino, L.C.; Cameron, B.I.; Matias, O. The water and trace element contents of melt inclusions across an active subduction zone. *Contrib. Mineral. Petrol* **2003**, *146*, 62–77. [CrossRef]
35. Eggers, A.A. Temporal gravity and elevation changes at Pacaya volcano, Guatemala. *J. Volcanol. Geotherm. Res.* **1983**, *19*, 223–237. [CrossRef]
36. Masterlark, T.; Lu, Z.; Rykhus, R. Thickness distribution of a cooling pyroclastic flow deposit on Augustine Volcano, Alaska: Optimization using InSAR, FEMs, and an adaptive mesh algorithm. *J. Volcanol. Geotherm. Res.* **2006**, *150*, 186–201. [CrossRef]
37. Ebmeier, S.; Biggs, J.; Mather, T.; Wadge, G.; Amelung, F. Steady downslope movement on the western flank of Arenal volcano, Costa Rica. *Geochem. Geophys. Geosyst.* **2010**, *11*. [CrossRef]
38. Briole, P.; Massonnet, D.; Delacourt, C. Post-eruptive deformation associated with the 1986–87 and 1989 lava flows of Etna detected by radar interferometry. *Geophys. Res. Lett.* **1997**, *24*, 37–40. [CrossRef]
39. Caricchi, L.; Biggs, J.; Annen, C.; Ebmeier, S. The influence of cooling, crystallisation and re-melting on the interpretation of geodetic signals in volcanic systems. *Earth Planet. Sci. Lett.* **2014**, *388*, 166–174. [CrossRef]
40. Lu, Z.; Fielding, E.; Patrick, M.R.; Trautwein, C.M. Estimating lava volume by precision combination of multiple baseline spaceborne and airborne interferometric synthetic aperture radar: The 1997 eruption of Okmok volcano, Alaska. *IEEE Trans. Geosci. Remote Sens.* **2003**, *41*, 1428–1436.
41. Dietterich, H.R.; Poland, M.P.; Schmidt, D.A.; Cashman, K.V.; Sherrod, D.R.; Espinosa, A.T. Tracking lava flow emplacement on the east rift zone of Kīlauea, Hawai‘i, with synthetic aperture radar coherence. *Geochem. Geophys. Geosyst.* **2012**, *13*. [CrossRef]
42. Lu, Z.; Dzurisin, D.; Biggs, J.; Wicks, C.; McNutt, S. Ground surface deformation patterns, magma supply, and magma storage at Okmok volcano, Alaska, from InSAR analysis: 1. Intereruption deformation, 1997–2008. *J. Geophys. Res. Solid Earth (1978–2012)* **2010**, *115*. [CrossRef]
43. Ebmeier, S.; Biggs, J.; Mather, T.; Elliott, J.; Wadge, G.; Amelung, F. Measuring large topographic change with InSAR: Lava thicknesses, extrusion rate and subsidence rate at Santiaguito volcano, Guatemala. *Earth Planet. Sci. Lett.* **2012**, *335*, 216–225. [CrossRef]
44. Gudmundsson, A. How local stresses control magma-chamber ruptures, dyke injections, and eruptions in composite volcanoes. *Earth Sci. Rev.* **2006**, *79*, 1–31. [CrossRef]
45. Schaefer, L.N.; Kendrick, J.E.; Lavallée, Y.; Oommen, T.; Chigna, G. Geomechanical rock properties of a basaltic volcano. *Front. Earth Sci.* **2015**, *3*. [CrossRef]
46. Salzer, J.T.; Nikkhoo, M.; Walter, T.R.; Sudhaus, H.; Reyes-Dávila, G.; Bretón, M.; Arámbula, R. Satellite radar data reveal short-term pre-explosive displacements and a complex conduit system at Volcán de Colima, Mexico. *Front. Earth Sci.* **2014**, *2*. [CrossRef]
47. Morgan, H.A.; Harris, A.J.; Gurioli, L. Lava discharge rate estimates from thermal infrared satellite data for Pacaya Volcano during 2004–2010. *J. Volcanol. Geotherm. Res.* **2013**, *264*, 1–11. [CrossRef] |
The 24th Philippine Nihongo Teachers’ Forum
Through the cooperation of The Japan Foundation, Manila and the Association of Filipino Nihongo Teachers (AFINITE), the 24th Philippine Nihongo Teachers’ Forum was successfully conducted at Casa San Pablo, Laguna on November 12 & 13, 2016. Around 70 Filipino Nihongo teachers, several of whom traveled all the way from Baguio, Cebu, Davao, and other provinces, gathered for the 2-day forum entitled “Enhancing Nihongo Teaching: The Relevance of the TESDA Approach”. The program included lectures, a workshop, presentations, and the sharing of ideas on the topic. (Please refer to page 2 for more details on the program.)
JLPT Interactive Lecture & Exercises in BAGUIO
The Japan Foundation, Manila conducted the first JLPT Interactive Lecture & Exercises in Baguio at the Filipino-Japanese Foundation of Northern Luzon, Inc. (ABONG) on October 29, 2016 (Saturday). The N5 session was held in the morning, while the N4 session was held in the afternoon. The course participants were not limited to Japanese language learners as a few Japanese language lecturers also attended to observe the class proceedings for future reference.
Message From A Participant
“Otsukaresamadeshita!” and “Omedetō!” to all fellow participants of the recently concluded 24th Philippine Nihongo Teachers’ Forum held at Casa San Pablo in Laguna. To the organizers, AFINITE and The Japan Foundation, Manila, our warmest gratitude for consistently providing Filipino Nihongo teachers the exposure needed to further advance our young but promising careers in the field of Japanese Language Education in the country.
Having come all the way from Cebu, it was my first time to attend a forum/workshop organized by AFINITE. I am impressed with the level of facilitation, from preparation to the choice of topic to the conduct of the forum over those two days! Strict observance of the schedule only proves the program committee’s focus and professionalism with small details – a well-known yet often underrated Japanese trait. One will definitely go a long way emulating such practice.
It was indeed high time that the theme, “Enhancing Nihongo Teaching: The Relevance of the TESDA Approach,” came to the fore, in what could arguably be the most fitting venue. For one, controversial topics such as the Trainers’ Methodology I (TM I) requirement for language training providers and the requisite Competency-Based Curriculum / Competency-Based Training (CBC/CBT), all regulated by the Technical Education and Skills Development Authority (TESDA), were explained more clearly, primarily because both information and opinion were relayed by the pioneers and experts on the topics themselves. These were vital and timely insights that could serve as a “guide” for Japanese language educators to further enhance teaching methods while staying “in tune” with government regulatory requisites. I also thought the opinion-sharing during the forum was healthy for us educators because it indicates commitment to our craft.
All that being said, the venue and the food were superb, and I believe everybody had a great time with acquaintances old and new. The facilitators treated us like family, and there was an air of positivity throughout the forum amidst the many challenges we are about to face.
May “the force” be with us all… and see you hopefully at the next forum!
DADITO “Dads” RODRIGO
Mr. Rodrigo recently founded and is a trustee of Japa-Phil Center for Cultural Exchange, Inc., a non-stock corporation in Mandaue City, Cebu that aims to be one of the leading sources of Japanese Language proficiency acquisition, as well as a melting pot for cultural exchanges mutually beneficial for both Filipinos and Japanese. He is also a part-time lecturer of Japanese Language courses at the University of the Philippines – Cebu, and Cebu Doctors’ University. He is originally from Makati, and has lived in Hiroshima, Japan for nine years.
PROGRAM
DAY 1 AM
- **Keynote Lecture**
Mr. Hiroyuki Enoki
First Secretary and Labor Attaché, Embassy of Japan
- **Topic 1: Technical Vocational Education and Training**
Mr. Francisco J. Reyes
Supervisor, TESDA Laguna Provincial Office, Los Baños
- **Topic 2: Competency-Based Curriculum (CBC)**
Ms. Emmie B. Miyagawa
Head Instructor, Japanese Language Research Center, Inc. and Next Bridge Language Expert, TESDA PaMaMaRiSan and TESDA Quezon City
- **Topic 3: Training Based Program – TESDA Approach**
Ms. Mary Clare L. Samadan
Director/Assessment Center Manager, YWA Trade Test & Training Center, Inc.
- **Topic 4: Trainers’ Methodology Experience**
Ms. Maria Eleanor B. Tanteo
Adviser, AFINITE
Freelance Instructor, Interpreter, Translator and Adviser to YWA and TNNA
- **Q&A**
- **Promotion: Ishikawa Japanese Studies Program**
Mr. Takeshi Imai, Ms. Midori Kano and Mr. Daisuke Sugino
- **Announcements**
The annual Teachers’ Forum organized by AFINITE in cooperation with The Japan Foundation, Manila plays an important role in the development of Filipino teachers of the Japanese Language. It is not only a venue to enhance knowledge and teaching techniques but also to disseminate facts about the current trends in Japanese Language Education in the Philippines.
The 24th Teachers’ Forum aimed to introduce the TESDA approach to Japanese language teaching and the components of the Competency-Based Curriculum (CBC), as well as the effectiveness of the “Direct Teaching Method”. The participants were able to come up with their own, albeit simple, CBC during the workshop, and I hope it comes in handy should they find themselves creating their own curriculum or when they decide to take up TESDA’s Trainers’ Methodology I course.
Indeed, Japanese Language Education in the Philippines has come a long way. This is evident in the rapid increase of Filipinos interested to learn the Japanese language whether for personal interests or employment opportunities. Vis-à-vis the demand there is a need for us, Japanese language teachers, to be equipped with adequate knowledge and teaching techniques in line with current trends. Let’s do our best and continue to improve our craft.
JPEPA Batch 9 Training Started in November 2016
Preparatory Japanese-Language Training for the Filipino Candidates of Nurses and Certified Care Workers under the Japan-Philippines Economic Partnership Agreement (JPEPA) Fiscal Year 2016
For fiscal year 2016, The Japan Foundation, Manila (JFM) is once again conducting the preparatory Japanese-language training on behalf of the Japanese Government for the participants in the JPEPA program, after their having been successfully matched with Japanese hospitals and caregiving facilities. In November last year, a total of three hundred twenty-three (323) candidates, consisting of thirty-eight (38) nurse candidates and two hundred eighty-five (285) care worker candidates, started the training at three (3) different venues: the Language Skills Institute of the Technical Education and Skills Development Authority (TESDA, Taguig City), Nihongo Center Foundation, Inc. (NCF, Manila), and the Personal Ability Development Foundation, Inc. (PAD, Alabang, Muntinlupa City).
The training will continue for six months until May 19, 2017; the target level of the training is for each candidate to reach the N4 level of the Japanese Language Proficiency Test (JLPT). Besides learning “Comprehensive Japanese-Language”, they also study specific vocabulary and essential expressions for nursing and caregiving. In addition, they will be given lectures on Japan – “General Life Culture” and “Things Japanese”, as well as “Medical Care in Japan.” They will also learn how to get into the habit of self-learning (autonomous learning), so that they can continue to study Japanese on their own after the training. Those who will complete the training are scheduled to leave for Japan by June 2017.
What I learned in the Philippines...
I learned the importance of loving family and friends here in the Philippines. I felt that Filipinos are closer to one another than Japanese people are. I really like how Filipinos bond with their families as they go to church on Sunday, and how they sometimes call their friends “sister” instead of their actual name. This is a wonderful Filipino custom that I cannot experience in Japan. I have decided to cherish my family and friends more, like the Filipinos. (Akiko Usui / Pangasinan)
Filipinos treasure their families very much. Sometimes they even put family before themselves. In Japan, where the nuclear family is becoming the norm, I felt that what we must learn from the Philippines is love of family. Watching Filipino families made me want to see my own family, too. (Kanae Itoh / Davao)
Filipinos are very friendly. They keep on smiling everyday despite the difficulties and hardships they encountered in their lives. They don’t forget to smile and laugh anytime, anywhere. I’m always encouraged by them, which is very helpful to me as a foreigner in the Philippines. (Nao Yoshimoto / Cebu)
What I shared to the Philippines...
My first class activity was shodō (Japanese calligraphy). First, I wrote some characters myself. Everyone watched with serious faces, and I was very nervous. I will never forget everyone’s faces as they happily wrote characters with a brush. (Ayako Tomihara / NCR)
I introduced karuta in October. Karuta is a traditional Japanese card game. In the game, players compete to get the card that matches the one the host reads out. We made our original karuta and played it together. I was happy to see that they were enjoying Japanese culture. (Chiharu Takehara / NCR)
The activity that left the deepest impression on me was ayatori (cat’s cradle). It was back when I was still not used to the activities, so I remember being very nervous. I am happy that something I played as a child can now be put to use in the Philippines. (Keiko Watanabe / NCR)
What surprised me in the Philippines...
What surprised me when I came to the Philippines was Teachers’ Day. As everyone knows, it is a day for expressing everyday gratitude to teachers, but there is no such event in Japan. On that day, the students gave me flowers too and sang a Japanese song for me. Wherever I walked, I was greeted with “Happy Teachers Day ♡,” and being welcomed as part of the school community made me very happy.
(Aki Tahara / Pangasinan)
One day in September, while wandering around a shopping mall, I heard Christmas songs. For a moment I wondered if I had misheard, or whether the Philippines celebrates Christmas in September, so I immediately searched Google for “Philippines Christmas season” and learned that in this country, the excitement and preparations for Christmas Day build steadily from around September.
At first I thought what hasty people they were, but as Christmas approached, the excitement of the teachers and students reached me, and I came to understand that Christmas is the best day of the year here.
In the end, I attended a total of three Christmas parties.
For me, belonging to a Jōdo Shinshū (Pure Land Buddhist) university and having had no connection with Christmas at all, being able to experience it so vividly was a precious and unforgettable memory.
(Rina Yamaguchi / Cebu)
My favorite Culture Activity was "Tanabata." Teaching how to make origami decorations and how to write wishes was difficult and hard work. But I received many smiles and many "thank you"s, and I became happy together with everyone. The classroom, made colorful with everyone's decorations and wish strips (tanzaku), was wonderful!
(Shoko Takahashi / NCR)
My life in Cagayan de Oro is interesting and exciting. What surprised me the most were the tricycles and jeepneys used for transportation. In CDO, the tricycles have platforms behind them. The driver will also take detours at a customer's request. I am lucky to go around on these trips. :) During the Christmas season, they are beautifully decorated with Christmas ornaments. What convenient and fun vehicles! (Yui Akamine / Cagayan de Oro)
『にほんご人フォーラム2016(日本)』, or the Japanese Speakers' Forum 2016 (Japan), was held from August 22 to September 3 at the Japanese-Language Institute, Urawa. It was the 4th international forum since the event began in 2013, inviting high school Japanese language teachers and learners from five ASEAN countries (Indonesia, Malaysia, Philippines, Thailand, and Vietnam) and Japan. In the Teachers' Program, teachers demonstrated lesson plans they had designed to nurture the 21st century skills students need, particularly collaborative and creative skills in a Japanese language class, and then evaluated the lessons using a rubric they themselves created to assess those skills. The Students' Program, on the other hand, not only let the students experience different aspects of Japanese culture, but also had them investigate and report on questions they had about it.
What is a 21st Century Language Teaching and Learning Approach? How would you know that you have achieved a 21st Century Language Classroom? Read on and find out what teachers and students experienced, and how they felt, after joining an event that promised a 21st century approach to Japanese language teaching and learning, through the photos which, for them, best describe 『にほんご人フォーラム2016(日本)』.
LHEANE MARIE M. DIZON
Student, Lagro High School
JS Forum 2016 in a nutshell was quite similar to this photo. It was an adventure and an experience as exciting as the sea. And it was also liberating but unifying, just like the Japanese flag. I’ll never forget my ride on this boat called the Japanese Speakers’ Forum 2016!
A Trip on a Pirate Ship in Lake Ashino, Hakone
FRANKLIN DUANE A. MADRIÑAN
Student, Valenzuela City School of Mathematics and Science
“Pride, Honor and Glory”; this is ValMaSci’s tagline. This may be a simple photo, but it means a lot to me. To wear the school’s identity in a foreign place is indeed an honor. Also, to be with these great people who I barely knew at the start became part of my life’s greatest achievements. Without these friends of mine, I would not have overcome the fear of being the weakest among the delegates, and without them, Japan could have not been memorable.
Graduation Dinner* in our School Uniforms
JONEL G. PANUNCIO
Student, Jose Abad Santos High School
A big change came to my life. The Nihongojin Forum had a big part in helping every one of us not only to learn Japanese language and culture, but also to gain the right conduct in interacting with other people. It connects not only our minds, but also our hearts to make a wonderful presentation and action. This photo shows that this program gave us a big opportunity to use, show, and enhance our different skills and talents.
An experience wearing a Yukata
YVETTE KAYLE E. TACADENA
Student, Juan G. Macaraeg National High School
Language Barrier? Nihongo slashed it out! Six countries came together as one! Thanks to the Japanese Speakers Forum, I gained a lot of new friends and a memorable experience with these people whom I wouldn’t forget. We inspired each other as we shared our knowledge to create a splendid presentation. This photo speaks of how the JS Forum unites Japan with other countries to overcome language barriers.
Reading feedback from audience about our group presentation; 6 students from different countries and 1 Japanese University student guiding us
CJH Update
The enTree 1 Course (E1), in which the 4th batch of CJH teachers is participating, ends on March 4, 2017.
Other CJH-related Activities (2016-2017)
- **May to July 2016**: 2-month training in Japan
- **August 6, 2016**: CJH Pedagogy Seminar - "Flip Learning 101: Let's try FLIPPING our Nihongo Classrooms!"
- **December 10, 2016**: CJH Pedagogy Seminar - "Learning Styles & Multiple Intelligences-Matching Teaching Style with Students' Learning Styles"
- **April 2017 (Tentative)**: enTree 2 Course (E2) Batch 4
* CJH: Course on Japan for High School Classroom Instruction; Teacher Training Program for Public High School Teachers under the Special Program in Foreign Language: Japanese of DepED-BCD
H.S. Nihongojin
This corner aims to introduce high school students who are studying Nihongo. Let’s expand our Nihongojin* network
HIGH SCHOOL NIHONGOJIN 23
**Student name:** Jana Beatrice P. Jugulion
**Year and Section:** IX- Faraday
**Suki na koto:** e o kaku, anime to asian dorama o miru, internet o suru
Ever since I was little, I have always wanted to be multilingual. I watched movies and dramas of different languages. I was happy when I found out that our class will be studying Nihongo. I looked at it as an opportunity for me to fulfill my dream. Watching anime and Japanese dramas helped me understand more about the language and culture of Japan. My favorites are *Itazura na Kiss*, *Ao Haru Ride* and *Sword Art Online*. I am very thankful and lucky to be given the chance to learn another language and explore new things at the same time.
**School:** MANGALDAN NATIONAL HIGH SCHOOL
**Principal:** Dr. Rebecca E. Cansino
**Teachers:** Mrs. Jocelyn C. Trinidad, Ms. Marliza L. Gutong, Dr. Salome C. Cruz
*Nihongojin is a term coined from the words 'Nihongo' and 'jin', which mean 'Japanese Language' and 'person', thereby giving it the meaning 'people who are involved in the Japanese Language, both native and non-native, regardless of their level of proficiency'. The concept was created to give learners a sense of belonging to a growing international community of Japanese speakers all over the world.*
CHRISTINE JOY C. CABAHUG
Teacher, Davao City National High School
The [にほんごフォーラム] 2016 was an eye-opener for me because I was able to explore the historical background, current situation, and innovations of Japanese Language Education in the Philippines and in other countries like Thailand, Malaysia, Indonesia, Vietnam, and Japan. I am greatly humbled by this opportunity that the Japan Foundation, Manila gave me, for I was able to challenge myself to use the Japanese Language in Japan, as well as to interact with other Japanese Language Teachers in Southeast Asia. I am honored and thankful to have represented the Philippines during the forum, and I will definitely share the best practices and innovative teaching strategies, particularly for enhancing learners' 21st Century Skills, with all the Japanese Language Teachers in the Department of Education's Special Program in Foreign Language Nihongo, and with other Japanese Language Teachers in the Philippines.
EDUARDO B. TAN
Teacher, Florentino Torres High School
The International Forum gave each participant an opportunity to do a demonstration teaching. It was a tremendous task doing the demo-teaching not only in Japanese, but also doing it in front of some respected people in the field of Japanese language. However, the demonstration teaching was a platform to showcase how Filipino teachers teach the Japanese language. That is, “Teaching with ENTHUSIASM.”
So, what did you discover from their testimonies? Why don’t you try asking your students or trainees and find out how they would describe what they learned from your class? If they also talk about learning things, which they think will be helpful in their future jobs, in fulfilling their dreams or about things which changed their perspectives and broadened their horizon, then maybe you can say that you have also achieved a 21st Century Japanese Language Classroom.
Are you familiar with Ninjas? The Ninja world is very mysterious. Their main tasks were to gather information for their lord, to deliver secret documents safely, to scout enemies in order to protect their lord, and so on. However, their lives are wrapped in deep mystery. Ninjas are known to have received strict training within their group. Nowadays, interest in Ninjas is increasing all around the world. One reason for their growing popularity is the number of Ninja-themed manga that have caught the eye of manga enthusiasts in various countries. One of these is Naruto, arguably the most representative Ninja-themed manga. It was a great hit not only among Japanese young people, but also among manga readers throughout the globe. Some students in your class might be interested in the story of Ninjas as well. So today, I will introduce an effective class activity using one of the Ninja's special items: the bar of secret code.
A long time ago, secret documents were hand-carried, since advanced communication systems like those we use today did not yet exist. So when someone needed a secret document to be read only by its intended recipient, the Ninjas sometimes used this bar to deliver the message safely.
Sample Lesson Plan
(Before the lesson)
1. The teacher or the students prepare a cardboard tube, such as one from a roll of plastic wrap or a certificate holder. (Please refer to ①)
(Introduction) 10min.
2. The teacher introduces the Ninja world to the students, especially their daily lives. If necessary, the teacher may explain using the references listed below.
(Activity) 40min.
3. Students are divided into two groups. Group A goes out of the classroom while Group B stays inside to work on the secret message. First, Group B winds a long, thin strip of paper around the tube ②③ and writes a short message on it. For example, the students may hide a gift in the classroom in advance and write briefly about its hiding place. Group B must write vertically, using the Japanese writing system. ④ After finishing, they unwind the paper. ⑤
4. Group A enters the classroom and tries to guess the meaning of the message by looking at the unwound paper. If they cannot understand it, they may wind the paper around the tube and read it. ⑥
5. Please break the secret code, and let's go find the gift!
6. After getting the gift, Group A and Group B switch roles and repeat from step 3.
(Reflection) 10min.
7. Please share with your seatmates about today’s activity and discuss about the Ninja’s task.
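For teachers curious about the mechanics behind the activity: the winding-and-unwinding trick is, in cryptographic terms, the classic scytale transposition cipher. The short sketch below is a digital analogue for illustration only (the function names and the `rows` parameter, standing in for the tube's circumference, are my own, not part of the lesson plan):

```python
def scytale_encrypt(message: str, rows: int) -> str:
    """Write the message along the rod, then read off the unwound strip."""
    # Pad with '_' so the strip divides evenly into wraps.
    padded = message + "_" * (-len(message) % rows)
    # Taking every `rows`-th character simulates unwinding the paper.
    return "".join(padded[i::rows] for i in range(rows))

def scytale_decrypt(strip: str, rows: int) -> str:
    """Re-wind the strip around a rod of the same size to recover the text."""
    cols = len(strip) // rows
    return "".join(strip[i::cols] for i in range(cols)).rstrip("_")

secret = scytale_encrypt("MEETATNOON", 2)   # unreadable off the tube
```

Just as in the classroom version, the ciphertext only becomes readable again when "wound" around a tube of the same size, i.e. decrypted with the same `rows` value.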
References
1. Naruto official site in English https://www.viz.com/naruto
2. Ninja MUSEUM of Igaryu http://iganinja.jp/en/index.html
3. Koka Ninja House (Koka-ryu Ninjutsu Yashiki) http://www.kouka-ninjya.com/la_en/
4. Japan Ninja Council https://ninja-official.com/?lang=en
Hello!!!
YASUJIRO TAKEI
Nice to meet you! I am Yasujiro Takei (武井康次郎).
I came to Manila on August 25, 2016. My impression of the
Philippines: the people are cheerful, the sisig is delicious, and the San Miguel
is delicious too. The traffic jams are terrible, though...
I look forward to working with you!
5th Japanese Language Education Conference
June 4-5, 2016
DepEd Ecotech Center, Lahug, Cebu City
Discussion on the current issues in their respective Nihongo classes
The participants of the 5th JLEC on “Making A Livelier Nihongo Class With Better Student Involvement Through Active Learning”
JPEPA Instructor’s Report
When I returned to the Philippines after having worked for three years in Japan, I never thought I could find use for my Nihongo skills. That is, until the opportunity to teach JPEPA candidates beckoned after I was invited to speak at the graduation of Batch 7.
I never thought I could teach. I was very apprehensive. I did not know about Team Teaching either (having worked as a School Nurse with DepEd once), but the methodology of teaming up Japanese and Filipino teachers did wonders for my confidence. So did the seminars and trainings given by The Japan Foundation, Manila, which were extensive enough that I learned the nitty-gritties – from lesson plan preparation to classroom management to teaching techniques.
As teaching started, I learned much from observing senior Nihongo teachers in their classes. There were also the Team Meetings where Japanese and Filipino teachers discuss their concerns about students, teaching, materials, etc. Best of all, there were the one-on-one meetings (面談) with the Japanese supervisors, wherein they asked how I was doing and whether I was having problems. These made me feel that I was not alone in the seemingly difficult task of mentoring future nurses (看護師) and certified care workers (介護福祉士), because I myself was being mentored.
Now, I beam with pride seeing my students advance in their fluency in Nihongo. I feel very happy being gifted the chance to inspire them with stories of my own adventures in Japan and to exhort them with 頑張ってください! (Please do your best!) I also feel supreme joy whenever I learn that my former students are well on their way to reaching their dreams.
It has been a rewarding journey for me as a Japanese Language Lecturer.
どうもありがとうございます! (Thank you very much!)
Janice F. Dais, R.N.
Ms. Dais is a graduate of BS Nursing from Bicol University. She worked as a nurse candidate (看護師候補者) at Numakuma Hospital (沼隈病院) in Fukuyama, Hiroshima under JPEPA Batch 1, the pioneer batch of Filipino nurses and caregivers to Japan. Since then, she has worked as a Japanese Language Lecturer for Batch 8 and, presently, Batch 9 of EPA candidates.
Sensei no Wa
“Sensei no Wa” is open to both experienced and neophyte Japanese-language teachers, and offers a platform for information exchange with one’s peers. It is for the further encouragement of Japanese Language Education and aims to support professional enrichment and network expansion through interactive learning.
Let’s join Sensei no Wa
Three Factors of Communication and Related Classroom Activities
Mr. Carlos Luis Santos
(Lecturer, Ateneo de Manila University; Trainee, Japan Foundation Long-term Training Program for Foreign Teachers of the Japanese Language in Urawa, Saitama, Japan, September 2015 – March 2016)
July 30, 2016
Let’s challenge GUNDOKU (群読) together!
Mr. Mamoru Morita
(Japanese Language Education Adviser, The Japan Foundation, Manila)
October 28, 2016
Oshaberi Salon
“Oshaberi Salon” is a free event for Nihon enthusiasts held at the Japan Foundation, Manila. During each session the participants try to complete a task on their own or collaborate with others using Nihongo. The participants not only discover something new about Nihon or Nihongo, but they can also try their Nihongo, get a lot of inspiration, and form a new network. If you know someone who is a Nihon enthusiast, “Oshaberi Salon” might be ideal!
七夕
Tanabata / Star Festival
July 15, 2016
怪談
Ghost Stories
September 2, 2016
アイドル
Idol / J-Pop Music
November 11, 2016
お正月
New Year
January 6, 2017
Practice Teaching Course in Manila
September 24 & 25, 2016
Every year, JFM offers this course to active or aspiring Filipino Nihongo teachers or those who wish to take basic training in teaching Japanese.
Participants receive instruction in classroom teaching skills and are also given the opportunity to do practice teaching.
おせち料理
Osechi Ryori / Japanese New Year Cuisine
by Yoshiko Morokuma
What dishes do you eat at Christmas or New Year?
In Japan, the representative New Year dish is "osechi ryori."
Osechi ryori consists of various foods packed into stacked boxes called jubako.
Each food in osechi ryori carries its own meaning. Do you know what those meanings are?
Try matching the foods 1)–4) below with their meanings from choices A–D.
1) Shrimp (ebi)
2) Kazunoko (herring roe)
3) Kuromame (black soybeans)
4) Kobumaki (rolled kelp)
Osechi ryori contains many other foods besides these. They vary by household and region, so it is fun to look them up 😊
From The JFM LIBRARY
Be part of the growing family of the JFM library; sign up now for membership!
The library is open to researchers/borrowers from 10:00 a.m. - 7:00 p.m., Mondays to Fridays, and from 9:00 a.m. - 1:00 p.m. on Saturdays. It is closed on Sundays & Holidays. Please present an ID card at the Charging Desk.
For those who wish to become Library members or want to know more about the library, visit www.jfmo.org.ph/about_us_library or call (02) 811-6155 to 58.
「しごとの日本語 メールの書き方編」奥村真希、金渕優子 アルク
「仕事で使う!日本語ビジネス文書マニュアル」奥村真希、安河内貴子 アスク出版
E-mail has become an important part of our lives, and many Japanese learners work in or are affiliated with Japanese companies. Do you know how to write a business e-mail in Japanese? You might have the impression that it is difficult, but it is mostly a matter of knowing the basic rules and honorific expressions. These books teach both, and provide model patterns you can use as references when writing your own business e-mails.
「Nihongo Notes Vol.1 Language and Culture」
「Nihongo Notes Vol.2 Language and Communication」
Osamu Mizutani, Nobuko Mizutani The Japan times
Nihongo Notes is a two-volume set of selected essays published in the long-running "Nihongo Notes" column of The Japan Times newspaper. Even if you know a Japanese word, you cannot use it correctly without knowing what kind of situation it is appropriate for. Knowledge of Japanese culture and of the way Japanese people think helps you understand the Japanese language. These books introduce the nuances of the language while offering insight into Japanese culture and society. They are written in both English and Japanese.
「みんなの日本語 中級Ⅰ 本冊」
「みんなの日本語 中級Ⅰ 翻訳・文法解説英語版」
「みんなの日本語 中級Ⅰ 教え方の手引き」
「みんなの日本語 中級Ⅰ 繰り返して覚える単語帳」
「みんなの日本語 中級Ⅰ 標準問題集」
「みんなの日本語 中級Ⅱ 本冊」
「みんなの日本語 中級Ⅱ 教え方の手引き」
スリーエーネットワーク
Minna no Nihongo Chūkyū I & II series are now available. The books were edited to develop integrated Japanese language competence and self-education ability.
JFM Courses & Workshops
February to June 2017
COURSES FOR NIHONGO TEACHERS
日本語教師のための初中級日本語2
Pre-Intermediate Japanese for Nihongo Teachers 2
May 31 – June 28 (Wednesdays)
6:20 – 8:30 p.m. (10 hrs.)
Tuition fee: Php 750
COURSES FOR NIHONGO LEARNERS
Marugoto Writing (Moji) Course
February 14 – March 16 (Tuesdays & Thursdays)
6:20 – 8:00 p.m. (15 hrs.)
Tuition fee: Php 2,400
Marugoto Starter (A1) Module 1
March 6 – April 24 (Mondays & Wednesdays)
6:20 – 8:30 p.m. (24 hrs.)
Tuition fee: Php 4,400 (Inclusive of textbook)
Marugoto Elementary 2 (A2) Module 1
February 27 – April 6 (Mondays & Thursdays)
6:20 – 8:30 p.m. (24 hrs.)
Tuition fee: Php 4,500 (Inclusive of textbook)
Basic Conversational Japanese for Travelers
March 18 – April 11 (Saturdays)
10:00 a.m. – 12:00 p.m.
Tuition fee: Php 900
Marugoto Elementary 1 (A2) Module 1
April 25 – June 6 (Tuesdays & Thursdays)
6:20 – 8:30 p.m. (24 hrs.)
Tuition fee: Php 4,500 (Inclusive of textbook)
Marugoto Starter (A1) Module 2
May 15 – June 26 (Mondays & Wednesdays)
6:20 – 8:30 p.m. (24 hrs.)
Tuition fee: Php 3,800 (For those without textbook, + Php 600)
JLPT Interactive Lecture & Exercises
*Registration is separate per session / per level.
Contents are the same.
N5 April 8, April 22, May 6 (Saturday) 9:00 a.m. – 12:30 p.m.
N4 April 8, April 22, May 6 (Saturday) 1:00 – 4:30 p.m.
N3 April 29 (Saturday) 9:00 a.m. – 12:30 p.m.
Tuition fee: Php 200
REGULAR EVENTS (FREE ADMISSION)
Sensei no Wa
April 21, June 2 (Fridays)
6:30 – 8:00 p.m.
Oshaberi Salon
March 3 (Friday)
6:20 – 8:00 p.m.
Introduction to Japanese Culture: Calligraphy
March 11 (Saturday)
10:00 a.m. – 12:30 p.m.
The above schedules are tentative.
Please check the JFM Facebook page (www.facebook.com/jfmanila) or The Japan Foundation, Manila (http://www.jfmo.org.ph) for updates.
2016 JAPANESE LANGUAGE PROFICIENCY TEST
(December 4, 2016)
Number of Applicants
| | N1 | N2 | N3 | N4 | N5 | Total |
|-------|-----|-----|-----|-----|-----|-------|
| Manila| 153 | 414 | 608 | 2,878 | 1,720 | 5,773 |
| Cebu | 16 | 48 | 81 | 156 | 233 | 534 |
| Davao | 15 | 40 | 65 | 209 | 347 | 676 |
| Total | 184 | 502 | 754 | 3,243 | 2,300 | 6,983 |
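As a quick sanity check on the applicant table above, the row and column totals can be verified mechanically (the counts below are copied directly from the table):

```python
# N1..N5 applicant counts per test site, copied from the table above
applicants = {
    "Manila": [153, 414, 608, 2878, 1720],
    "Cebu":   [16, 48, 81, 156, 233],
    "Davao":  [15, 40, 65, 209, 347],
}

# Each site's row should sum to its Total column...
row_totals = {site: sum(counts) for site, counts in applicants.items()}
# ...and the Total row should equal the column-wise sums.
col_totals = [sum(col) for col in zip(*applicants.values())]
grand_total = sum(row_totals.values())
```

Both checks agree with the printed totals (5,773 / 534 / 676 per site and 6,983 overall).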
THE 1st JAPANESE LANGUAGE PROFICIENCY TEST 2017
(July 2, 2017)
Manila, Cebu, Davao
Manila & Cebu – Online registration period:
February 1 to March 8
Davao – Paper type registration period: March 8 to April 7
Please check The Japan Foundation, Manila website (http://www.jfmo.org.ph) for more details.
*The program is subject to change without prior notice. For latest updates, please check The Japan Foundation, Manila Facebook page. (www.facebook.com/jfmanila)
The 25th Philippine Nihongo Teachers’ Forum
May 20, 2017
WATCH OUT FOR DETAILS!
Merienda!
みりえんだ
The Japan Foundation, Manila Nihongo Teachers’ Newsletter
EDITORIAL STAFF
KIMY TAMMO
KELI BISCARRA
MICHIKO KOBAYASHI
MAMORU MORITA
SAKICHO KUIWANO
FLORINDA PALMA GIL
YOSHIKO MOROKUMA
C.E.J. AQUINO
MUTSURO IKEDA
SHIGERU KAWAKAMI
SAKAYA WATANABE
KOZUE TAKASU
FIONA TINDUGAN
Published by The Japan Foundation, Manila (JFM) located at the 23rd Floor, Pacific Star Building, Sen. Gil Puyat Avenue, cor. Makati Avenue, Makati City 1226, with telephone numbers (632) 811-6155 to 58, fax number (632) 811-6153, and email address at firstname.lastname@example.org - www.jfmo.org.ph |
Differential effects of Chinese high-fat dietary habits on lipid metabolism: mechanisms and health implications
Sisi Yan¹, Huijuan Zhou¹, Shuiping Liu¹, Ji Wang¹², Yu Zeng¹, Froilan Bernard Matias³ and Lixin Wen¹⁴*
Abstract
Background: The traditional Chinese diet blends lard with vegetable oil, keeping the fatty acid balance intake ratio of saturated fatty acids, monounsaturated fatty acids, and polyunsaturated fatty acids at nearly 1:1:1. However, the effects of a mixture of lard and vegetable oil on lipid metabolism have never been researched. In the present study, by simulating Chinese high-fat dietary habits, we explored the effects of a mixture of lard and vegetable oil on lipid metabolism.
Methods: We randomly assigned 50 male C57BL/6J mice to 5 groups (10 in each group) and fed them lard, sunflower oil (SFO), soybean oil (SBO), lard blended with sunflower oil (L-SFO), or lard blended with soybean oil (L-SBO) for 12 weeks.
Results: We found that the final body weights of mice in the lard group were significantly higher than those of mice in the SFO and SBO groups. The body fat rate and fat cell volume of the lard group were significantly higher than those of the SFO, SBO, and L-SBO groups. The liver triglyceride level of the lard group increased significantly compared to the other groups. Although the body fat rate and liver triglyceride level in the SBO and SFO groups decreased compared to those in the other groups, the high-density lipoprotein cholesterol/low-density lipoprotein cholesterol ratio was also significantly decreased in the SBO and SFO groups.
Conclusions: We found that a lard diet induced accumulation of body fat and of liver and serum lipids, which can increase the risk of obesity, non-alcoholic fatty liver disease, and atherosclerosis. The vegetable oil diet resulted in cholesterol metabolism disorders even though it did not lead to obesity. The mixed oil diet induced body fat accumulation but did not cause lipid accumulation in the liver or serum. Thus, different oil/fat diets affect different aspects of mouse lipid metabolism.
Keywords: Lard, Sunflower oil, Soybean oil, Obesity, Non-alcoholic fatty liver disease, Atherosclerosis
Background
Obesity has become a public health concern worldwide. Obesity is highly associated with the development of hyperlipidemia, non-alcoholic fatty liver disease (NAFLD), and cardiovascular disease (CVD) [1]. Obesity leads to increased accumulation of free fatty acids (FFAs) and triacylglycerol (TG) in the serum, which are risk factors for the development of CVD [2]. Excessive TG accumulation in hepatocytes is a key feature in the development of NAFLD [3].
Western dietary habits typically involve high fat consumption, and owing to westernization over the past few years, the typical Chinese diet now also contains high fat [4, 5]. According to the Nutrition and Health Status of Chinese Residents survey, the average daily intake of cooking oil or fat among Chinese residents was 42.1 g/day (37.3 g vegetable oil, 4.8 g lard) in 2012 and 41.4 g/day (32.7 g vegetable oil, 8.7 g lard) in 2002 [6]. The Dietary Guidelines for Chinese Residents (2016) indicate that more than 5% of Chinese residents consume more than 95 g/day of cooking fat/oil, with fat contributing up to 35–40% of dietary energy [7, 8]. Moreover, the intake of lard is decreasing due to negative reports concerning it.
According to the World Health Organization (WHO), daily intake of energy obtained from fat/oil should be less than 30% and that from saturated fatty acids (SFAs) should be less than 10% [9].
The traditional Chinese diet blends lard with vegetable oil, which maintains the intake ratio of SFAs, monounsaturated fatty acids (MUFAs), and polyunsaturated fatty acids (PUFAs) at nearly 1:1:1. However, the effect of mixing lard and vegetable oil on lipid metabolism has not been investigated. Previous research has focused on single oils/fats or on mixtures of different vegetable oils or fatty acids [10, 11]. Vegetable oils rich in unsaturated fatty acids are usually regarded as more beneficial than animal-derived fats rich in SFAs. A beef tallow diet reportedly led to greater body fat accumulation than olive oil or soybean oil (SBO) [12, 13], and lard was reported to induce more body fat accumulation than safflower oil or linseed oil [14]. Nevertheless, lard is often used in Chinese cooking [15, 16]. According to the *Compendium of Materia Medica*, lard can relieve liver poisoning. The stereospecific position of fatty acids in lard is similar to that in milk fat, with palmitic acids primarily in the sn-2 position, which benefits the absorption of Ca$^{2+}$ [17]. Lard also has a higher content of α-tocotrienol than soybean oil, rice bran oil, and olive oil [18]. An SFA diet competes less with n-3 PUFA incorporation into tissue phospholipids than an oleic acid diet does [19]. Conversely, studies have found that soybean oil is more obesogenic than coconut oil, which is rich in SFAs [20], and that a high-fat diet based on soybean oil induced greater body weight gain than high-fat diets based on palm oil or lard, both rich in SFAs [21]. In our previous study, the traditional Chinese dietary habit of blending lard with SBO was shown to have anti-obesity effects when simulating the average oil intake of urban and rural residents in China [22]. This study aimed to investigate the effects of different fat/oil mixtures on lipid metabolism in mice while simulating the typical high-fat diet of Chinese residents.
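To make the blending idea concrete, the fatty-acid profile of a lard/vegetable-oil mixture is simply a weighted average of its components. The percentages below are typical literature values used purely for illustration; they are not the compositions measured in this study's Table S2:

```python
def blend(profile_a, profile_b, frac_a=0.5):
    # Weighted average of two (SFA, MUFA, PUFA) percentage tuples.
    return tuple(frac_a * a + (1 - frac_a) * b
                 for a, b in zip(profile_a, profile_b))

# Approximate literature profiles (% SFA, % MUFA, % PUFA) -- illustrative only
lard = (39.0, 45.0, 11.0)
soybean_oil = (15.0, 23.0, 58.0)

# A 50:50 lard/soybean-oil mix (an L-SBO-style blend) moves the
# SFA:MUFA:PUFA ratio toward balance compared with either fat alone.
mixed = blend(lard, soybean_oil)
```

Under these toy numbers the 50:50 blend lands near (27, 34, 34.5)%, i.e. much closer to the 1:1:1 target than either lard (SFA/MUFA-heavy) or soybean oil (PUFA-heavy) on its own.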
Methods
Animals, diets, and experimental design
Fifty male C57BL/6J mice (6 weeks old) were purchased from Hunan Silake Laboratory Animal Co., Ltd. (Changsha, China). SBO (FuLinMen) and first-degree-press SFO were purchased from China Oil & Foodstuffs Co., Ltd. (Beijing, China). Leaf lard was purchased from a local supermarket (TangRenShen Co., Ltd.). All mice were provided with food and water ad libitum and were kept under a 12-h light-dark cycle at a temperature of 22 ± 1 °C and relative humidity of 65 ± 5%. After 1 week of acclimatization, the mice were randomly divided into five groups and fed different diets for 12 weeks: lard, SFO, SBO, lard blended with SFO (L-SFO), or lard blended with SBO (L-SBO). The composition of the diets is shown in Table S1, and the fatty acid composition of the fats/oils is shown in Table S2. At the end of the feeding period, all mice were fasted for 12 h and anesthetized before being sacrificed. The blood and organs required for the study procedures were then collected.
Sample collection and preparation
Blood samples were collected from the retro-orbital plexus and left standing overnight at 4 °C. The serum was isolated by centrifugation at 3500 g for 10 min at 4 °C and immediately stored at −80 °C until further analysis. Liver, epididymal adipose tissue, and perirenal adipose tissue were collected and weighed. Liver and epididymal adipose tissues were cut into five parts and washed with saline. One part was fixed in 10% neutral buffered formalin, while the remaining parts were immediately frozen at −80 °C until analysis.
Measurements of lipid in plasma and liver
The levels of serum TG, total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) were measured using a Mindray Biochemical Analyzer BS-190 (Shenzhen, China). Serum FFAs, TG and TC were determined using an assay kit acquired from Nanjing Jiancheng Bioengineering Institute (Nanjing, China).
**Histological analysis**
The epididymal white adipose tissues (WAT) and the left lateral lobe of the liver were fixed in 4% paraformaldehyde for 24 h. WAT was then stained with hematoxylin and eosin (H&E) and liver tissue was stained with Oil Red O (Sigma, USA). Stained areas were observed using an Olympus Photomicroscope (Olympus Inc., Tokyo, Japan) at a magnification of 400× for WAT and 200× for the liver tissue. The epididymal adipocyte area was measured using five fields of five individual fat cells, and epididymal adipocyte cross-section area (CSA) was calculated using Image-Pro Plus 5.1 (Media Cybernetics, Inc. Silver Spring, Maryland, USA). Liver Oil Red O-stained area was also measured using five fields of five individual samples in each group and was calculated using Image-Pro Plus 5.1.
**Western blotting analysis**
Western blotting analysis of the liver was performed as described in a previous study [22]. The antibodies used included sterol regulatory element-binding protein (SREBP)-1c (Biosynthesis Biotechnology Co., Ltd., Beijing, China), fatty acid synthase (FAS) (Epitomics, Inc., USA), peroxisome proliferator-activated receptor alpha (PPARα) (Epitomics, Inc., USA), hormone-sensitive lipase (HSL) (Santa Cruz, Inc., USA), glyceraldehyde 3-phosphate dehydrogenase (Proteintech, Inc., USA), and horseradish peroxidase-conjugated secondary antibodies (Proteintech, Inc., USA).
**Statistical analysis**
The feed efficiency ratio (FER) was computed as total weight gain (g) divided by food intake (g), multiplied by 100. The collected data were expressed as mean ± standard error of the mean (SEM). Mean differences between groups were analyzed using one-way analysis of variance (ANOVA) followed by least significant difference (LSD) post hoc analysis in SPSS 17.0 (SPSS Inc., Chicago, USA). A $P$-value < 0.05 was considered statistically significant. Graphical presentations were created using GraphPad Prism version 5 (GraphPad Software, San Diego, CA, USA).
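The core of this pipeline can be sketched in a few lines of plain Python (SPSS was used in the study itself; the function names are my own, and any numbers fed in would be illustrative, not the study's data). The first function is the FER formula; the second computes the one-way ANOVA F statistic from between-group and within-group sums of squares:

```python
def feed_efficiency_ratio(weight_gain_g: float, food_intake_g: float) -> float:
    # FER = total weight gain (g) / total food intake (g) x 100
    return weight_gain_g / food_intake_g * 100

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean (between-group)
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Variation of observations around their own group mean (within-group)
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The resulting F value would then be compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the $P$-value, followed by LSD pairwise comparisons for any significant omnibus result.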
---
**Fig. 1** Effects of different dietary fats/oils on FER, body weight, and body fat accumulation. Mice were fed different dietary fats/oils: lard, sunflower oil (SFO), soybean oil (SBO), lard blended with SFO (L-SFO), and lard blended with SBO (L-SBO). **a** Feed efficiency ratio (FER) = [weight gain (g)/food intake (g)] × 100; **b** initial body weight; **c** final body weight; **d** epididymal white adipose tissue (WAT); **e** perirenal WAT; **f** body fat mass = epididymal WAT weight (g) + perirenal WAT weight (g); **g** body fat rate = [epididymal WAT weight (g) + perirenal WAT weight (g)]/final body weight × 100; **h** cross-section area (CSA) of epididymal adipocytes; and **i** section of epididymal adipose tissue stained with H&E. Data were expressed as mean ± standard error of the mean, $n = 9–10$ per group except for (**a**), (**h**) and (**i**), $n = 5$ per group. Values with different superscript letters (a, b, c, and d) are significantly different at $P < 0.05$.
Results
Body weight, feed efficiency ratio and body fat accumulation
There was no significant difference in initial body weight between the groups (Fig. 1b). After 12 weeks on the experimental diets, the final body weights of the SFO and SBO groups were significantly lower than that of the lard group (Fig. 1c), while the L-SFO and L-SBO groups showed significantly higher final body weights than the SFO and SBO groups (Fig. 1c). However, the feed efficiency ratio did not differ between the groups (Fig. 1a). The intake of lard significantly increased epididymal WAT weight, perirenal WAT weight, body fat mass, and body fat rate compared to the intake of SFO or SBO (Fig. 1d-g). The SFO and SBO groups also showed a significantly lower epididymal adipocyte CSA than the lard group and a markedly lower CSA than the L-SFO and L-SBO groups (Fig. 1h).
TC accumulation in the serum and liver
The levels of serum TC and HDL-C were significantly lower in the L-SFO and L-SBO groups than in the group fed with lard alone (Fig. 2a, b). Serum LDL-C levels were significantly lower in the L-SFO and L-SBO groups than in the other three groups, whereas no difference was observed between the SFO and SBO groups and the lard group (Fig. 2c). These results indicate that the intake of an oil mixture could reduce serum TC and LDL-C levels compared to the intake of lard alone. In addition, the noticeable decrease in TC level observed in the mice fed with vegetable oil was mainly attributable to the reduced HDL-C level. Accordingly, the HDL-C/LDL-C ratios in the SFO and SBO groups were significantly lower than those in the other three groups (Fig. 2d). Liver TC levels in the L-SFO and L-SBO groups were also lower than those in the SFO and SBO groups (Fig. 2e).
TG accumulation in the serum and liver
Levels of serum TG, FFA, and liver TG in the group fed with lard alone were markedly higher than those in the other four groups, indicating that a lard diet can result in TG accumulation in both the serum and the liver (Fig. 3a-c). No significant difference was observed in liver TG values between the SFO, SBO, L-SFO, and L-SBO groups (Fig. 3c). Oil Red O staining results verified the TG content of the liver (Fig. 3d). Thus, our results demonstrated that a mixed oil diet does not cause lipid accumulation in the serum and liver despite increasing body weight.
Expression of related proteins in the liver of mice fed experimental diets
Compared to the lard diet, the mixed oil diet increased the expression of the SREBP-1c and FAS proteins while simultaneously up-regulating PPARα and HSL protein expression. Compared to the lard diet, the vegetable oil diet down-regulated the expression of the SREBP-1c and FAS proteins and increased the expression of the PPARα and HSL proteins. These findings illustrate that fatty acid synthesis was inhibited and the hydrolysis of TGs was promoted by vegetable oil, contributing to its lower lipid accumulation compared to the lard diet (Fig. 4).
**Discussion**
In this study, by simulating Chinese high-fat dietary habits, we explored the effects of an oil mixture (lard and vegetable oil) on lipid metabolism in mice. Our results showed that the lard diet led to the highest fat mass, followed by the mixture of lard and vegetable oil, and then vegetable oil. On the other hand, the vegetable oil diet resulted in disorders of cholesterol metabolism despite producing the lowest fat mass.
Lard, which is rich in SFA, results in fat accumulation more readily than vegetable oils such as SBO, SFO, and corn oil [23–26]. This was verified both in our study and in others. The ability to store fat may be related more to the source of dietary fat than to the total caloric intake [27]. SFA is a contributing factor to obesity; in the literature, edible beef tallow, which is rich in SFA, resulted in greater body fat accumulation than safflower oil, which is rich in n-6 fatty acids [28]. Body fat accumulation on SFA-rich diets is caused by lower oxygen consumption and decreased thermogenesis. SFA-rich diets also alter membrane fatty acid composition; this modification of membrane phospholipids induces a decrease in metabolic rate [29]. In addition, a high-lard diet (45% fat energy) was reported to up-regulate the expression of interleukin-6 and monocyte chemoattractant protein-1 in the retroperitoneal adipose tissue of mice, promoting the development of inflammation that contributes to obesity [30, 31]. The palmitic acid in lard is distributed at the sn-2 position of the TG, making it more easily absorbable [32]. In sum, it can be inferred that palmitic acid, an SFA that is abundant in lard, may contribute to fat accumulation.
However, our results in this study conflict with our previous research findings [22]. This may be due to differences in fat energy: our previous study supplied 25% fat energy, compared to the 35% fat energy supplied in the present study. In general, a fat energy composition of up to 50–60% is used in high-fat-diet mouse models; most researchers use these values to establish obesity [33] or diabetic [34] models. According to Catta-Preta et al. [23], among 60% fat energy diets (lard, olive oil, SFO, and canola oil separately), only lard contributed to fat mass (10% fat energy); our study, in which mice were supplied with 35% fat energy, is consistent with this report. Bargut et al. showed that the body fat mass
of mice varied when fed different types of high-fat diets (50% fat energy), with the highest body fat mass gained from lard and the lowest from fish oil [35]. Basically, essential nutrients should be consumed above a minimal level to avoid deficiency and below a maximal level to avoid toxicity, so a U-shaped association between nutrient intake and health is logical. However, extreme oil intakes are routinely applied in research when assessing health effects [36].
The body fat accumulation rate in the L-SBO group was lower than in the L-SFO group. The proportion of n-3/n-6 PUFAs is an important factor in lipid metabolism. Studies have shown that a high n-3/n-6 PUFA ratio in dietary oil may alleviate oxidative stress through reductions in the serum content of FFA [37]. The proportion of n-3/n-6 PUFA in L-SBO was higher than that in L-SFO.
In our study, HDL-C was lowest in mice fed with soybean oil. A randomized crossover study of two orally administered vitamin A–fat loads, consisting of either 20% (wt:vol) soybean oil or 17% olive oil plus 3% soybean oil, found that soybean oil induced postprandial decreases in HDL-C owing to failed competition between soybean oil chylomicron remnants and HDL for hepatic lipase [38]. In addition, LDL-C was highest in mice fed with SFO and SBO. Mara et al. compared rats fed cholesterol + olive oil or cholesterol + soybean oil and found no significant difference in final body weights between the groups, but the LDL-C level of rats fed cholesterol + soybean oil was over 2 times higher than that of rats fed cholesterol + olive oil [39]. In the present study, mice fed with SFO and SBO showed the lowest HDL-C/LDL-C ratios, suggesting that SFO and SBO diets could lead to cholesterol disorders. However, the lack of initial HDL-C and LDL-C values and the presence of soybean meal in the fodder limit the strength of this conclusion. The proportion of MUFA may be a factor that influences the metabolism of cholesterol. Duavy et al. (2017) showed that the intake of MUFA-rich olive oil reduced serum LDL-C levels compared to a SFO diet [39]. Although similar results were observed in the present study, the mechanisms underlying these results still need to be investigated further.
In this study, there was a significant increase in SREBP-1c in vegetable oil-fed mice. Jiang et al. [40] found that SREBP-1c was up-regulated in mice fed lard at 60% fat energy, while in SREBP-1c knockout mice, renal lipid accumulation improved. SREBPs are the predominant isoforms expressed in most tissues, and they control lipogenic gene expression [41]. Furthermore, they control the transcription of fatty acid synthase (FAS), a key component of the lipid synthesis pathway [42]. Endogenous fatty acids are mainly synthesized by FAS, which condenses acetyl-CoA and malonyl-CoA into long-chain fatty acids [43]. These findings suggest that lard promotes the synthesis of fatty acids.
PPARα is a transcription factor belonging to the nuclear hormone receptor superfamily and has been reported to induce the expression of HSL and adipose triglyceride lipase, both of which contribute to the mobilization of TGs [44]. In the literature, hepatic PPARα protein increased in lard-fed mice [45]. In the present study, however, PPARα expression decreased in mice fed with lard compared to the other four groups; accordingly, HSL protein was lowest in mice fed with lard, indicating that their TG hydrolysis capability was lowest.
Studies have shown that hypercholesterolemia is mainly caused by abnormally elevated levels of serum LDL-C [46]. High LDL-C and low HDL-C levels are associated with an increased risk of CVD [47]. The HDL-C/LDL-C ratio is an important indicator for the assessment of CVD risk and is more sensitive than TG and TC in predicting the risk of CVD. The HDL-C/LDL-C ratio of mice fed with vegetable oil was significantly lower than that of mice fed with the oil mixture. These results indicate that the intake of vegetable oil increases the risk of CVD compared to the intake of the other oils. The intake of lard led to higher serum TG and FFA levels than the intake of vegetable oils, alone or in an oil mixture. High serum TG and FFA levels increase the risk of atherosclerosis. This may be associated with the high palmitic acid content at the sn-2 position in lard, which causes it to be directly absorbed from the intestine [49].
In the present study, the intake of lard enhanced fatty acid synthesis and attenuated the mobilization of TG compared to vegetable oil, contributing to the highest fat accumulation. The oil mixture diet also enhanced fatty acid synthesis compared to vegetable oil; however, no differences in TG mobilization rate were observed between the mice that consumed the oil mixture and those that consumed the vegetable oil diets. This may account for the lower liver TG content in the mice fed vegetable oil or the oil mixture than in those fed lard.
However, this study compared only five oil diets, without a control group. Thus, we discussed the effects of the different oil diets on lipid metabolism based on 35% fat energy consumption in the present study.
**Conclusion**
Overall, after simulating the high-fat dietary habits of Chinese residents, we found that the intake of a mixture of lard and vegetable oil did not have anti-obesity effects compared to vegetable oils. In addition, the intake of lard induced body fat accumulation and lipid accumulation in the liver and serum and increased the risk of obesity and atherosclerosis. The intake of vegetable oil resulted in disorders of cholesterol metabolism, which raised the risk of CVD even though it did not lead to obesity. The intake of the oil mixture, despite not resulting in lipid accumulation in the liver and serum, still induced body fat accumulation. Thus, different fat/oil diets affect different aspects of lipid metabolism in mice.
**Supplementary information**
Supplementary information accompanies this paper at https://doi.org/10.1186/s12944-020-01212-y.
**Additional file 1: Table S1.** Composition of the diets (g/kg). **Table S2.** Fatty acid composition of the fats/oils
**Abbreviations**
CSA: Cross-section area; FAS: Fatty acid synthase; FER: Feed efficiency ratio; FFA: Free fatty acid; H&E: Hematoxylin and eosin; HDL-C: High-density lipoprotein cholesterol; HSL: Hormone-sensitive lipase; LDL-C: Low-density lipoprotein cholesterol; L-SFO: Blended lard and sunflower oil; L-SBO: Blended lard and soybean oil; MUFA: Monounsaturated fatty acid; PPARα: Peroxisome proliferator-activated receptor alpha; PUFA: Polyunsaturated fatty acid; SBO: Soybean oil; SFA: Saturated fatty acids; SFO: Sunflower oil; SRE: Sterol regulatory-element; SREBP: Sterol regulatory-element binding protein; TBST: Tris-buffered saline and Polysorbate 20; TC: Total cholesterol; TG: Triglyceride; WAT: White adipose tissue
**Acknowledgments**
We would like to extend our gratitude to the platform and funding provided by the Hunan Collaborative Innovation Center of Animal Production Safety, the Laboratory of Animal Clinical Toxicology, Department of Veterinary, Hunan Agriculture University, and the Animal Health Care Engineering Technology Research Center of Hunan Agricultural University.
**Authors’ contributions**
LW, SY and JW conceived and designed the experiments; SY, HZ, YZ, SL and JW performed the experiments; HZ analyzed the data; SY, SL and FBM prepared the manuscript. All authors read and approved the final version of the manuscript.
**Funding**
This work was supported by the National Key R&D Program of China (grant no. 2016YDF0501200).
**Availability of data and materials**
All data generated or analyzed are included in this paper.
**Ethics approval and consent to participate**
All procedures were conducted according to the Guiding Principles in the Care and Use of Laboratory Animals published by the U.S. National Institutes of Health (NIH Publication No. 8023, revised 1978) and were approved by the Institutional Ethics Committee of the institution where this research was performed.
**Consent for publication**
Not applicable.
**Competing interests**
The authors declare that they have no competing interests.
**Author details**
1Laboratory of Animal Clinical Toxicology, Department of Clinical Veterinary Medicine, College of Veterinary Medicine, Hunan Agricultural University, No. 1, Nongda Road, Changsha City 410128, Hunan Province, People’s Republic of China. 2Changsha Luye Biotechnology Co., Ltd, Changsha, Hunan Province, People’s Republic of China. 3Department of Animal Management, College of Veterinary Science and Medicine, Central Luzon State University, 3120 Science City of Muñoz, Nueva Ecija, Philippines. 4Hunan Collaborative Innovation Center of Animal Production Safety, No. 1, Nongda Road, Changsha City 410128, Hunan Province, People’s Republic of China.
Received: 20 May 2019 Accepted: 24 February 2020
Published online: 29 February 2020
**References**
1. Blüher M. Adipose tissue dysfunction in obesity. Exp Clin Endocrinol Diabetes. 2009;117:241–50.
2. Oyri LKL, Hansson P, Bogsrud MP, Narverud J, Florholmen G, Leder L, Byfuglien MG, Veleord MB, Ulven SM, Holven KB. Delayed postprandial TAG peak after intake of SFA compared with PUFA in subjects with and without familial hypercholesterolaemia: a randomised controlled trial. Br J Nutr. 2018;119:1142–50.
3. Weiss J, Rau M, Geier A. Non-alcoholic fatty liver disease: epidemiology, clinical course, investigation, and treatment. Dtsch Arztebl Int. 2014;111:447–52.
4. Su C, Wang H, Zhang J, Du W, Wang Z, Zhang J, Zhai F, Zhang B. Intergenerational differences on the nutritional status and lifestyle of Chinese residents. Wei Sheng Yan Jiu. 2012;41:357–62.
5. Chen Y, Lin X, Liu Y, Xie D, Fang J, Le YY, Ke ZJ, Zhai QW, Wang H, Guo FF, et al. Research advances at the Institute for Nutritional Sciences at Shanghai, China. Adv Nutr. 2011;2:428–39.
6. Jile C, Yu W. 2010–2013 comprehensive report on nutrition and health monitoring of Chinese residents. Beijing: Peking University Medical Press; 2016.
7. Zhu ZN, Zang JJ, Wang ZY, Zou SR, Jia XD, Guo CY, Ma LF, Xu D, Wu F. Dietary pattern and its seasonal characteristic in residents of Shanghai, 2012-2014. Zhonghua Liu Xing Bing Xue Za Zhi. 2018;39:880–5.
8. Society CN. Dietary guidelines of Chinese residents. Beijing: People’s Health Press; 2016.
9. Mozaffarian D. Dietary and policy priorities for cardiovascular disease, diabetes, and obesity: a comprehensive review. Circulation. 2016;133:187–225.
10. Crescenzo R, Bianco F, Mazzoli A, Giacco A, Cancellerie R, di Fabio G, Zarelli A, Liverini G, Iossa S. Fat quality influences the obesogenic effect of high fat diets. Nutrients. 2015;7:9475–91.
11. Ghosh M, Upadhyay R, Mahato DK, Mishra HN. Kinetics of lipid oxidation in omega fatty acids rich blends of sunflower and sesame oils using Rancimat. Food Chem. 2019;272:421–7.
12. Yamashita S, Hirashima A, Lin IC, Bae J, Nakahara K, Murata M, Yamada S, Kumazoe M, Yoshitomi R, Kadomatsu M, et al. Saturated fatty acid attenuates anti-obesity effect of green tea. Sci Rep. 2018;8:10023.
13. Matsuo T, Takeuchi H, Suzuki H, Suzuki M. Body fat accumulation is greater in rats fed a beef tallow diet than in rats fed a safflower or soybean oil diet. Asia Pac J Clin Nutr. 2002;11:302–8.
14. Takeuchi H, Matsuo T, Tokuyama K, Shimomura Y, Suzuki M. Diet-induced thermogenesis is lower in rats fed a lard diet than in those fed a high oleic acid safflower oil diet, a safflower oil diet or a linseed oil diet. J Nutr. 1995;125:920–5.
15. Lin JM, Liou SJ. Aliphatic aldehydes produced by heating Chinese cooking oils. Bull Environ Contam Toxicol. 2000;64:817–24.
16. Wang L, Zheng X, Stevanovic S, Wu X, Xiang Z, Yu M, Liu J. Characterization particulate matter from several Chinese cooking dishes and implications in health effects. J Environ Sci (China). 2018;72:98–106.
17. Decker EA. The role of stereospecific saturated fatty acid positions on lipid nutrition. Nutr Rev. 1996;54:108–10.
18. Li X, Shen Y, Wu G, Qi X, Zhang H, Wang L, Qian H. Determination of key active components in different edible oils affecting lipid accumulation and reactive oxygen species production in HepG2 cells. J Agric Food Chem. 2018;66:11943–56.
19. Picklo MJ, Murphy EJ. A high-fat, high-oleic diet, but not a high-fat, saturated diet, reduces hepatic alpha-linolenic acid and Eicosapentaenoic acid content in mice. Lipids. 2016;51:537–47.
20. Deol P, Evans JR, Dhabhi J, Chellappa K, Han DS, Spindler S, Sladek FM. Soybean oil is more obesogenic and Diabetogenic than coconut oil and fructose in mouse: potential role for the liver. PLoS One. 2015;10:e0132672.
21. Ikemoto S, Takahashi M, Tsunoda N, Maruyama K, Itakura H, Ezaki O. High-fat diet-induced hyperglycemia and obesity in mice: differential effects of dietary oils. Metabolism. 1996;45:1539–46.
22. Wang J, Yan S, Xiao H, Zhou H, Liu S, Zeng Y, Liu B, Li R, Yuan Z, Wu J, et al. Anti-obesity effect of a traditional Chinese dietary habit-blending lard with vegetable oil while cooking. Sci Rep. 2017;7:14869.
23. Catta-Preta M, Martins MA, Cunha Brunini TM, Mendes-Ribeiro AC, Mandarim-de-Lacerda CA, Aguiña MB. Modulation of cytokines, resistin, and distribution of adipose tissue in C57BL/6 mice by different high-fat diets. Nutrition. 2011;28:812–9.
24. Tufarelli V, Bozzo G, Perillo A, Laudadio V. Effects of feeding different lipid sources on hepatic histopathology features and growth traits of broiler chickens. Acta Histochim. 2015;117:780–3.
25. Li Y, Zhao F, Wu Q, Li M, Zhu Y, Song S, Zhu J, Ma Y, Li H, Shi X, et al. Fish oil diet may reduce inflammatory levels in the liver of middle-aged rats. Sci Rep. 2017;7:6241.
26. Pavlisova J, Bardova K, Stankova B, Tvrzicka E, Kopecky J, Rossmeisl M. Corn oil versus lard: metabolic effects of omega-3 fatty acids in mice fed obesogenic diets with different fatty acid composition. Biochimie. 2016;124:150–62.
27. Surwit RS, Feinglos MN, Rodin J, Sutherland A, Petro AE, Opera EC, Kuhn CM, Rebuffe-Scrive M. Differential effects of fat and sucrose on the development of obesity and diabetes in C57BL/6J and a/J mice. Metabolism. 1995;44:645–51.
28. Shimomura Y, Tamura T, Suzuki M. Less body fat accumulation in rats fed a safflower oil diet than in rats fed a beef tallow diet. J Nutr. 1990;120:1291–6.
29. Pan DA, Storlien LH. Dietary lipid profile is a determinant of tissue phospholipid fatty acid composition and rate of weight gain in rats. J Nutr. 1993;123:512–9.
30. Wang N, Guo J, Liu F, Wang M, Li C, Jia L, Zhai L, Wei W, Bai Y. Depot-specific inflammation with decreased expression of ATME2 in white adipose tissues induced by high-margarine/lard intake. PLoS One. 2017;12:e0188007.
31. Cox AJ, West CP, Cripps AW. Obesity, inflammation, and the gut microbiota. Lancet Diabetes Endocrinol. 2015;3:207–15.
32. Tomarelli RM, Meyer BJ, Weaver JR, Bernhart FW. Effect of positional distribution on the absorption of the fatty acids of human milk and infant formulas. J Nutr. 1968;95:583–90.
33. Seyedan A, Mohamed Z, Alshagga MA, Koosha S, Alshawish MA. Cynometra cauliflora Linn. Attenuates metabolic abnormalities in high-fat diet-induced obese mice. J Ethnopharmacol. 2019;236:173–82.
34. Xu D, Jiang Z, Sun Z, Wang L, Zhao G, Hassan HW, Fan S, Zhou W, Han S, Zhang L, Wang T. Mitochondrial dysfunction and inhibition of myoblast differentiation in mice with high-fat-diet-induced pre-diabetes. J Cell Physiol. 2019;234:7510–23.
35. Bargut TC, Souza-Mello V, Mandarim-de-Lacerda CA, Aguiña MB. Fish oil diet modulates epididymal and inguinal adipocyte metabolism in mice. Food Funct. 2016;7:1468–76.
36. Mente A, Yusuf S. Evolving evidence about diet and health. Lancet Public Health. 2018;3:e408–9.
37. Rudolph MC, Jackman MR, Presby DM, Houck JA, Webb PG, Johnson GC, Soderborg TK, de la Houssaye BA, Yang IV, Friedman JE, MacLean PS. Low neonatal plasma n-6/n-3 PUFA ratios regulate offspring adipogenic potential and condition adult obesity resistance. Diabetes. 2018;67:651–61.
38. de Bruin TW, Brouwer CB, van Linde-Sibenius TM, Jansen H, Erkelenz DW. Different postprandial metabolism of olive oil and soybean oil: a possible mechanism of the high-density lipoprotein conserving effect of olive oil. Am J Clin Nutr. 1993;58:477–83.
39. Duavy SMP, Salazar GIT, Leite GD, Ecker A, Barbosa NV. Effect of dietary supplementation with olive and sunflower oils on lipid profile and liver histology in rats fed high cholesterol diet. Asian Pac J Trop Med. 2017;10:539–43.
40. Jiang T, Wang Z, Proctor G, Moskowitz S, Liebman SE, Rogers T, Lucia MS, Li J, Levi M. Diet-induced obesity in C57BL/6J mice causes increased renal lipid accumulation and glomerulosclerosis via a sterol regulatory element-binding protein-1c-dependent pathway. J Biol Chem. 2005;280:32317–25.
41. Horton JD, Goldstein JL, Brown MS. SREBPs: activators of the complete program of cholesterol and fatty acid synthesis in the liver. J Clin Invest. 2002;109:1125–31.
42. Shao W, Espenshade PJ. Expanding roles for SREBP in metabolism. Cell Metab. 2012;16:414–9.
43. Menendez JA, Lupu R. Fatty acid synthase and the lipogenic phenotype in cancer pathogenesis. Nat Rev Cancer. 2007;7:763–77.
44. Inagaki T, Dutchak P, Zhao GX, Ding XS, Gautron L, Parameswara V, Li Y, Goetz R, Mohammadi M, Esser V, et al. Endocrine regulation of the fasting response by PPAR alpha-mediated induction of fibroblast growth factor 21. Cell Metab. 2007;5:415–25.
45. Kai M, Miyoshi M, Fujiwara M, Nishiyama Y, Inoue T, Maeshige N, Hamada Y, Usami M. A lard-rich high-fat diet increases hepatic peroxisome proliferator-activated receptors in endotoxemic rats. J Surg Res. 2017;212:22–32.
46. Thienpont LM, Van Nieuwenhove B, Stocki D, Reinauer H, De Leenheer AP. Determination of reference method values by isotope dilution-gas chromatography/mass spectrometry: a five years' experience of two European reference laboratories. Eur J Clin Chem Clin Biochem. 1996;34:853–60.
47. Kasko M, Kasko V, Oravec S. Would Janus' view on HDL be useful? Bratislavské lekárské listy. 2018;119:245–8.
48. Duavy SMP, Salazar GIT, Leite GD, Ecker A, Barbosa NV. Effect of dietary supplementation with olive and sunflower oils on lipid profile and liver histology in rats fed high cholesterol diet. Asian Pac J Trop Med. 2017;10:609–13.
49. Aoe S, Yamamura J, Matsuyama H, Hase M, Shiota M, Miura S. The positional distribution of dioleoyl-palmitoyl glycerol influences lymph chylomicron transport, composition and size in rats. J Nutr. 1997;127:1269–73.
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
High-Speed GaN-Based Visible Light Transceiver Devices and Chip Technology
Shanghai Engineering Research Center of Low-Earth-Orbit Satellite Communication and Applications
School of Information Science and Technology, Fudan University
Chao Shen
firstname.lastname@example.org
July 2021
Outline
• Intro
• Transmitter Technology
• Photonics Integration
• Receiver Technology
• The end
Structure of a VLC Link
A VLC link consists of:
- Transmitter: high modulation speed emitter
- Channel: distance, conditions such as dust or fog, LOS or non-LOS
- Receiver: high speed detector + demodulator
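As a toy illustration of the three link stages above, here is a minimal OOK transmitter–channel–receiver simulation. The intensity levels, channel gain, and noise amplitude are invented for the sketch and do not correspond to any device on these slides.

```python
# Toy end-to-end VLC link: OOK transmitter, noisy LOS channel, threshold
# receiver. All physical values are illustrative.
import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(10_000)]

# Transmitter: on-off keying maps bit 1 -> high optical intensity, 0 -> low
tx = [1.0 if b else 0.2 for b in bits]

# Channel: line-of-sight attenuation plus additive Gaussian noise
rx = [0.5 * s + random.gauss(0.0, 0.05) for s in tx]

# Receiver: demodulate by thresholding midway between the two received levels
threshold = 0.5 * (0.5 * 1.0 + 0.5 * 0.2)
decoded = [1 if r > threshold else 0 for r in rx]

ber = sum(b != d for b, d in zip(bits, decoded)) / len(bits)  # bit error rate
```

Raising the noise amplitude or narrowing the gap between the two intensity levels drives the BER up, which is why a high-extinction, high-bandwidth emitter matters.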
The “Blue LED” Revolution
Isamu Akasaki
Meijo University, Nagoya, Japan
Nagoya University, Japan
Hiroshi Amano
Nagoya University, Japan
Shuji Nakamura
University of California, Santa Barbara, CA, USA
Fast, MHz
Power efficient
Compact
2014, Nobel Prize
Transmitter – Light Emitting Diode (LED)
Indoor broadcasting via white LEDs and OFDM
H. Haas et al., IEEE Transactions on Consumer Electronics, 55 (3), August 2009
H. Haas et al., in IEEE 65th Vehicular Technology Conference, 2007
682 Mbit/s phosphorescent white LED VLC system
Chen et al., Optics Communication, 354, 107-111, May 2015
+ Advantages of LED
✓ Long lifetime: ~ tens of thousands of hours
✓ Compact: ~ μm
+ Disadvantages of LED for VLC systems
✓ Limited modulation bandwidth
✓ Efficiency droop for illumination
Transmitter – Micro-Light Emitting Diode (μLED)
Wafer-level micro-LED matrix delivers high brightness at 2540 dpi
F. Templier, et al., Proc. SPIE 10104, Gallium Nitride Materials and Devices XII, 1010422 (2017)
SPIE OPTO, SPIE, 2018, p. 6. (Grenoble-France)
IEEE Photonics Technology Letters, 28 (19), October 2016
(Strathclyde, Glasgow & Oxford )
Improved bandwidth with smaller LED size
✓ 3.5 Gbps (PAM-4) and 5 Gbps (adaptive DCO-OFDM)
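One contributing reason smaller LEDs modulate faster is that junction capacitance scales with the emitting area, so the RC-limited 3 dB bandwidth f = 1/(2πRC) rises as the mesa shrinks (carrier lifetime also limits speed, but is ignored here). The specific capacitance and drive resistance below are hypothetical round numbers, not values from the cited works.

```python
# RC-limited LED bandwidth vs. device size: junction capacitance scales
# with area, so shrinking the mesa raises f_3dB = 1/(2*pi*R*C).
# All device values are hypothetical.
import math

def rc_bandwidth_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

C_PER_AREA = 1e-7          # F/cm^2, assumed specific junction capacitance
R_DRIVE = 50.0             # ohm, assumed source/series resistance

area_led = 0.03 ** 2       # 300 um x 300 um broad-area LED, in cm^2
area_uled = 0.002 ** 2     # 20 um x 20 um micro-LED, in cm^2

bw_led = rc_bandwidth_hz(R_DRIVE, C_PER_AREA * area_led)    # tens of MHz
bw_uled = rc_bandwidth_hz(R_DRIVE, C_PER_AREA * area_uled)  # several GHz
```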
Transmitter – Nanowire LED
- Successful growth of GaN nanowire LEDs on a metal substrate.
- Metal serves as the device supporting material, electrical contact, heat-sink and light reflector.
- No efficiency droop is observed from these devices.
ACS Energy Letters, January 2018
Small, 12, 2312, 2016
Nano Letters, January 2016
Advanced Materials, March 2016
Nano Letters, July 2016
NW LED for multiple wavelength integration
UV, BLUE, GREEN, YELLOW, RED, & WHITE LEDs
Progress in Quantum Electronics (2018)
Nanoscale (2018)
Nano Letters (2018)
J. Nanophotonics (2018)
ACS Photonics (2017)
Applied Physics Letters (2017)
Optical Materials Express, 7, 4214 (2017)
Nanoscale 9, 7805 (2017)
RSC Advances, 7, 26665 (2017)
Optics Express, 25, 1381 (2017)
Nano Letters, 16, 1056 (2016)
Nano Letters, 16, 4616 (2016)
Advanced Materials, 28, 5106 (2016)
Small, 12, 2313 (2016)
ACS Photonics, 3, 2089–2095 (2016)
Optics Express, 17, 19928 (2016)
Progress on III-Nitride Distributed Feedback (DFB) Laser Diodes
First GaN-based DFB laser
R. Hoffmann, H. Schweizer et al.
Univ. Stuttgart
Appl. Phys. Lett. 69 (14)
1996
Electrically injected DFB laser
D. Hofstetter, M. Kneissl, et al.
Xerox Palo Alto Research Center
Appl. Phys. Lett. 73 (15)
1998
Embedded DFB gratings
A.C. Abare, M. Hansen, J.S. Speck,
S.P. DenBaars, L.A. Coldren
Univ. of Cal. Santa Barbara
Electron. Lett. 35 (18)
1999
Laterally coupled DFB gratings
H. Schweizer, et al
Univ. Stuttgart and OSRAM
Phys. Stat. Sol. (a) 192 (2)
2002
CW first-order DFB laser
S. Masui, et al., NICHIA
Jap. J. Appl. Phys. 45 (46), 2006
Beyond 20 dB Side-mode suppression ratio
J.H. Kang, T. Wernicke, M. Kneissl et al.
Ferdinand Braun Inst. Tech.Univ. Berlin,
IEEE Phot. Tech. Lett. 30 (3)
2018
Ridge sidewall gratings, CW, 35 dB SMSR
T.J. Slight, A.E. Kelly, P. Perlin, M. Leszczynski, et al., Compound Semi. Tech. Global, Topgan, Unipress, Univ. Glasgow, Aston Univ., Appl. Phys. Exp. 11 (112701)
2018
First InGaN green DFB laser
B.S. Ooi, et al., KAUST, Appl. Phys. Exp. 12 (042007), 2019
GaN-based DFB Laser Characterizations
First demonstration of GaN DFB emission at sky-blue and green colors.
Record fastest for any GaN DFB laser
Record side-mode suppression ratio (SMSR) for GaN DFB laser
Opt. Lett. 45(3), 742 (2020)
Appl. Phys. Express 12(4), 042007 (2019)
Proc. of SPIE 11301, 1130104 (2020)
OFC Conf. T3C.3 (2020)
InGaN VCSEL
Schematic of $m$-plane InGaN VCSEL
- $\text{Ta}_2\text{O}_5/\text{SiO}_2$ Dual-DBR mirrors.
- Al ion implanted aperture of 10 µm.
- PEC etched substrate removal.
- Flip-chip bonding.
- Tunnel junction intracavity contact.
Threshold current of 18 mA, corresponding to a threshold voltage of 7.5 V. Slope efficiency of 7.74 µW/mA.
The maximum frequency response of the presented 10-µm-aperture VCSEL is expected to exceed 1 GHz, although the current result is limited by the BW of the APD.
Sub-pF (~0.85 pF) junction capacitance
J. T. Leonard, et.al. Appl. Phys. Lett., 2015
C. Shen et. al. CLEO, 2016
GaN-based Light emitters
Light-emitting diode (LED)
- Spontaneous emission
- Efficiency droop
- 3-dB bandwidth: <100 MHz
Micro-LED
- Array integration
- High speed
- Relatively low power
Laser diode (LD)
- Stimulated emission
- Droop-free
- 3-dB bandwidth: > GHz
VCSEL
- High quality beam
- Ultra-high speed
- Process challenges
Superluminescent diode (SLD)
- Amplified spontaneous emission
- Droop-free, speckle-free
- 3-dB bandwidth: ~ GHz
Semipolar superluminescent diodes (SLDs)
- Tilted facet configuration
- Passive absorber configuration
- Optical powers of 123 mW at 600 mA and 256 mW at 700 mA.
- Onset of superluminescence at 400 mA.
- Unlike LED, SLD is free of efficiency droop.
C. Shen, et.al. Optics Express, 2016
C. Shen, et.al. Optics Letters, 2016
C. Shen, et.al. Proc. SPIE, 2017
SLD for SSL and VLC
- The first report of utilizing an SLD for SSL: InGaN-based SLDs are feasible for white light generation.
- The first investigation of the high-frequency response of InGaN-based SLDs: > 800 MHz bandwidth.
- The first report of SLD-based data communication, achieving a 1.3 Gbps data rate using OOK.
C. Shen, et.al. Optics Express, 2016
C. Shen, et.al. Optics Letters, 2016
C. Shen, et.al. Proc. SPIE, 2017
SLDs for LiFi
- Record SLD optical peak power of 475 mW
- High-performance C-plane SLD: suitable for industry adoption
A. Alatawi, J. Holguin-Lerma, Opt. Express, (2018).
- First report on c-plane SLD + Perovskite-phosphor
- High quality white light of 88.3 CRI and 7277 K CCT
- High modulation bandwidth of 1 GHz
A. Alatawi, et al., 2019.
SLD based VLC with MPANN-aided CAP post-equalizer
• Using a memory-polynomial-aided neural network (MPANN) to replace the traditional finite impulse response (FIR) post-equalization filters.
• 2.95-Gbit/s transmission using carrierless amplitude and phase modulation.
F. Hu et al., ECOC 2019
F. Hu et al., Opto-Electronic Advances, 2020. (Cover.)
SLD based VLC using OFDM
Using 16 QAM DMT for high-speed data transmission
C. Shen et al., IEEE JSTQE (invited paper), 2019.
SLD based VLC using OFDM
At 3.4 Gbps, the blue SLD based VLC system shows a BER of $3.73 \times 10^{-3}$.
C. Shen et al., IEEE JSTQE (invited paper), 2019.
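The 16-QAM DMT scheme above can be illustrated with a minimal round trip in NumPy: QAM symbols are placed on subcarriers with Hermitian symmetry so the IFFT output is real-valued (suitable for intensity-modulating an SLD), and over an ideal channel the FFT recovers them exactly. The subcarrier count and mapping are illustrative, not the system's actual parameters.

```python
# Minimal 16-QAM DMT round trip: Hermitian-symmetric subcarriers make the
# IFFT output real-valued, as required for intensity modulation.
import numpy as np

rng = np.random.default_rng(1)
n_sub = 63                                  # data-bearing subcarriers
levels = np.array([-3.0, -1.0, 1.0, 3.0])   # 16-QAM: 4 levels per I/Q axis
syms = rng.choice(levels, n_sub) + 1j * rng.choice(levels, n_sub)

# Spectrum layout: [DC, X_1..X_63, Nyquist, conj(X_63)..conj(X_1)]
spec = np.concatenate(([0], syms, [0], np.conj(syms[::-1])))
tx = np.fft.ifft(spec)                      # time-domain DMT symbol (real)

rx = np.fft.fft(tx.real)                    # ideal channel, FFT demodulation
recovered = rx[1:1 + n_sub]                 # matches the transmitted symbols
```

In a real link, channel estimation and per-subcarrier equalization would sit between the FFT and symbol decisions.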
Outline
• Intro
• Transmitter Technology
• Photonics Integration
• Receiver Technology
• The end
InP-based PIC: Realizing low-cost, -size, -weight, and -power
Fig. 17. Directly-driven SOA-PD/SGDBR-SOA wavelength converter. Schematic and equivalent circuit, including eye patterns from input and output data at 2.5 Gb/s [49].
Fig. 19. Widely tunable, traveling-wave PD-EAM wavelength converter transparent to data format and bit rate. Photo, equivalent circuit, and eye diagrams from 5 to 40 Gb/s [12].
Larry A. Coldren et.al. Journal of Lightwave Technology, 2011
MOVPE-grown single-wire InGaN/GaN p–n junction core–shell nanowire light-emitting diodes (LEDs) and photodetectors optically coupled by waveguides. The photodetector current trace shows signal variation correlated with the LED on/off switching, with a fast transition time below 0.5 s.
M. Tchernycheva et al., CNRS, Nano Letters, 2014
Integration of a LED with waveguide and receiver
Y. Wang, et.al, J Micromech Microeng, 2018
Y. Wang, et.al, Semiconductor Science and Technology, 2017
Y. Wang, et.al, IEEE Photonics Technology Letters, 2017
Y. Wang, et.al, Optics Express, 2016
Y. Wang, et.al, Applied Physics Letters, 2016
Y. Wang, et.al, Optical Materials Express, 2016
On-chip integration of GaN-based laser, modulator and PD
Fig. 2. (a) Lasing spectrum and (b) far-field pattern picture of the as-fabricated uncoated GaN-based LD grown on Si under $U_{\text{Mod}} = +2$ V and $I_{\text{Gain}} = 800$ mA at room temperature.
Fig. 3. (a) Output power of the LD ($P_{\text{LD}}$) and (b) photocurrent of the modulator section ($I_{\text{Mod}}$) as a function of the injection current in the gain section ($I_{\text{Gain}}$) under various modulator voltages ($U_{\text{Mod}}$).
Fig. 4. (a) Photocurrent of the photodetector ($I_{\text{PD}}$) as a function of the injection current in the gain section ($I_{\text{Gain}}$) under $U_{\text{Mod}} = +2$ V. (b) Output power of the LD ($P_{\text{LD}}$) and photocurrent of the photodetector ($I_{\text{PD}}$) as a function of the applied voltage to the modulator ($U_{\text{Mod}}$) at $I_{\text{Gain}} = 1000$ mA.
M. Feng et al., IEEE Journal of Selected Topics in Quantum Electronics, 2018
III-Nitride laser based photonic integration
PIC at visible wavelength
Monolithic integration of III-nitride photonic devices offers a novel solution for SSL-VLC applications with advantages:
- Compact, small form factor
- Low-cost (reduced epitaxial and fabrication expense)
- Multi-functionality
Integrated waveguide modulator-laser diode (IWM-LD)
- IWM-LD at 448 nm.
- A large extinction ratio of 9.4 dB.
- A low operating voltage range of 3.5 V.
- A high modulation efficiency of 2.68 dB/V.
C. Shen, ACS Photonics, 2016
C. Shen, OECC, 2017
C. Shen, OMTA, 2020
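The quoted modulation efficiency follows directly from the extinction ratio and the voltage swing: 9.4 dB over 3.5 V ≈ 2.69 dB/V, consistent with the 2.68 dB/V figure. A minimal sketch of the arithmetic (generic definitions, not code from the cited work):

```python
import math

def extinction_ratio_db(p_on, p_off):
    """Extinction ratio in dB between on/off optical power levels."""
    return 10 * math.log10(p_on / p_off)

# Reported IWM-LD figures: 9.4 dB extinction over a 3.5 V operating range
er_db = 9.4
v_swing = 3.5
mod_efficiency = er_db / v_swing  # modulation efficiency, dB per volt
print(round(mod_efficiency, 2))
```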
The red-shifting clearly indicates the occurrence of an external bias-induced change in the absorption edge.
Due to a reduced polarization field in semipolar QWs, the significant shifting of absorption edges in the IM region in response to modulation bias is effective in modulating the optical output power of the IWM-LD.
Integrated waveguide modulator-laser diode (IWM-LD)
- APD-limited bandwidth of ~1 GHz.
- A clear open eye at 1.7 Gbit/s using on-off keying (OOK) modulation scheme.
| Data rate (Gbps) | Bit error rate (BER) |
|-----------------|----------------------|
| 0.622 | 0.00 |
| 1 | $1.1 \times 10^{-6}$ |
| 1.5 | $2.1 \times 10^{-5}$ |
| 1.7 | $3.1 \times 10^{-3}$ |
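All BERs in the table sit below the commonly used 7%-overhead hard-decision FEC threshold of $3.8 \times 10^{-3}$. A small sketch of how BER is computed and checked against that limit (table values transcribed; the BER function is a generic definition):

```python
import numpy as np

FEC_LIMIT = 3.8e-3  # common 7%-overhead hard-decision FEC threshold

def ber(tx_bits, rx_bits):
    """Bit error rate: fraction of mismatched bits."""
    tx_bits = np.asarray(tx_bits)
    rx_bits = np.asarray(rx_bits)
    return np.count_nonzero(tx_bits != rx_bits) / tx_bits.size

# Table entries (Gbps -> BER): every measured rate stays under the FEC limit
measured = {0.622: 0.0, 1.0: 1.1e-6, 1.5: 2.1e-5, 1.7: 3.1e-3}
assert all(b < FEC_LIMIT for b in measured.values())
```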
Integrated waveguide photodiode – laser diode (WPD-LD)
- Tx and Rx monolithic integration
- A 3-dB bandwidth of $\sim 230$ MHz is measured, indicating significantly improved modulation performance compared with previously reported GaN p-i-n PDs.
C. Shen et al., APEX, 2017
C. Shen, SPIE Photonics Asia, 2020
Integrated semiconductor optical amplifier – laser diode (SOA-LD)
The first InGaN/GaN SOA-LD
- Monolithic integration of SOA with LD at 405 nm.
- Optical gain and amplification ratio measured.
- The gain increases from 0.41 dB to 5.71 dB as $V_{\text{SOA}}$ increases from 4 V to 6.25 V.
- Enable high-speed modulations for VLC applications.
C. Shen et al., IEDM, 2016
C. Shen et al., Opt. Express, 2017
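For context, the quoted dB gains convert to linear amplification ratios as follows (a generic sketch of the standard conversion, not code from the cited work):

```python
import math

def gain_db(p_out, p_in):
    """Optical gain of an SOA section in dB from output/input powers (same units)."""
    return 10 * math.log10(p_out / p_in)

def amplification_ratio(g_db):
    """Linear power amplification ratio from a dB gain."""
    return 10 ** (g_db / 10)

# Quoted SOA-LD gains: 0.41 dB at V_SOA = 4 V, 5.71 dB at 6.25 V
print(round(amplification_ratio(0.41), 2))  # ~1.10x linear gain
print(round(amplification_ratio(5.71), 2))  # ~3.72x linear gain
```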
Outline
• Intro
• Transmitter Technology
• Photonics Integration
• Receiver Technology
• The end
Receiver – III-nitride micro-photodetectors (μPD)
- First demonstration of an InGaN-based micro-PD optical receiver for fast VLC.
- Similar devices can potentially offer switchable functions of beam tracking, energy harvesting and parallel data transmission.
**UV-violet light detection, High selectivity**
**405-nm link**
**375-nm link over 1-m**
K-T. Ho et al., Optics Express, 26 (3), 2018
Wavelength-selective III-nitride micro-PD
Non-return-to-zero on-off keying (NRZ-OOK): 1.55 Gbit/s
16-QAM-OFDM: 7.4 Gbit/s
[Figure: spectral response of the micro-PD (responsivity in A/W vs. wavelength, 300–600 nm, at −10 V bias); NRZ-OOK eye diagram at 1.55 Gbit/s (200 ps scale) and BER vs. data rate (1.40–1.60 Gbit/s) against the FEC limit of $3.8 \times 10^{-3}$; device cross-section with 80-μm and 20-μm features: Ti/Au and Ni/ITO contacts on p⁺-GaN / p-GaN / InGaN/GaN MQWs / n-GaN, grown on a semipolar $(20\overline{2}1)$ GaN substrate.]
Optics Express, 26(3), 3037-3045 (2018)
Appl. Phys. Express 13, 0141001 (2020)
IEEE Photonics Tech. Lett. 32(13), 767 (2020)
4-QAM-OFDM
3 Gbit/s (Error-free)
8-QAM-OFDM
4.5 Gbit/s (BER: $1.5 \times 10^{-3}$)
16-QAM-OFDM
7.4 Gbit/s (BER: $3.4 \times 10^{-3}$)
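For context on the responsivity axis of the spectral-response measurement: an ideal photodiode's responsivity is bounded by $R = \eta q \lambda / (hc)$, about 0.33 A/W at 405 nm for unity quantum efficiency. A minimal sketch of this standard relation (not from the cited papers):

```python
# Responsivity <-> quantum efficiency for a photodiode:
# R [A/W] = eta * q * lambda / (h * c), roughly eta * lambda_nm / 1240
Q = 1.602176634e-19   # electron charge, C
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def responsivity(eta, wavelength_nm):
    """Ideal photodiode responsivity in A/W for quantum efficiency eta."""
    lam = wavelength_nm * 1e-9
    return eta * Q * lam / (H * C)

# A unity-efficiency detector at 405 nm tops out near 0.33 A/W,
# which bounds what any measured spectral-response curve can reach there.
print(round(responsivity(1.0, 405), 3))
```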
Omnidirectional Optical-Antenna for High-speed UV Communication
- Ultra-large, flexible and omnidirectional detection
- Complement UV-based NLOS communication in UWOC
- Obviates the costly development path for large-bandgap semiconductors
- Flexible and high-bandwidth devices for high-speed UV-based photodetection
250 Mbit/s over 1.5-m water channel
C.H. Kang, B. S. Ooi et al., Optics Express, 2019
C.H. Kang, B. S. Ooi et al., CLEO-Europe 2019
B. S. Ooi et al., USPTO 62/808,585
Receiver - Hybrid Perovskite-Silicon Solar-blind UVC Photoreceiver
**Technology Gap:**
Low responsivity in UV wavelengths!
(<0.1 A/W at solar blind region)
**Our Solution**
- Integrating sphere with CsPbBr$_3$ NCs on UV Quartz
- Focusing and objective lens with 500 nm long pass filter
- Si-based APD
**Enhanced responsivity**
First demonstration of UVC communication using coated-Si-based platform at 34 Mbps
- 4-fold improvement in solar-blind detection
- Obviates costly III-nitride & III-oxide development path for UVC photodetector.
*C.H. Kang, et al., Light: Science & Applications, 2019*
Outline
• Intro
• Transmitter Technology
• Photonics Integration
• Receiver Technology
• The end
## Contemporary challenges and prospective solutions
| Challenges and problems | Possible solutions |
|--------------------------------------------------------------|-------------------------------------------------------------------------------------|
| Bandwidth limitation of light sources | Super high-bandwidth light sources with new materials and new mechanisms |
| Si-based detectors are mainly sensitive to infrared waves | High responsivity detectors based on AlGaAs / single photon detectors |
| Lack of ASIC for VLC baseband processing | AFE including driver chips and TIA chips; digital chips for baseband processing |
| Point-to-point communication based on a single transmitter and detector | MIMO communication based on transmitter and detector arrays |
| Transmitting and receiving antennas require large lens groups | Fresnel lens, beam forming and steering based on nano optical antennas |
1. “Demonstration of a Low-Complexity Memory-Polynomial-aided Neural Network Equalizer for CAP Visible-Light Communication with Superluminescent Diode”, Opto-Electronic Advances 3, 200009 (2020)
2. “Blue laser diode system with an enhanced wavelength tuning range”, IEEE Photonics Journal, 12(2) 1502110 (2020)
3. “Non-line-of-sight methodology for high-speed wireless optical communication in highly turbid water”, Optics Communications, 461, 125264 (2020)
4. “Toward reliable and energy-efficient visible light communication using amorphous silicon thin-film solar cells”, Optics Express, 27(24), 34542-34551 (2019)
5. “Ultraviolet-to-blue color-converting scintillating-fibers photoreceiver for 375-nm laser-based underwater wireless optical communication”, Optics Express, 27(21), 30450-30461 (2019).
6. “On the realization of across wavy water-air-interface diffuse-line-of-sight communication based on an ultraviolet emitter”, Optics Express, 27(14), 19635-19649 (2019).
7. “Analysis of Optical Injection on Red and Blue Laser Diodes for High Bit-rate Visible Light Communication”, Optics Communications, 49, 79-85 (2019).
8. “Group-III-nitride superluminescent diodes for solid-state lighting and high-speed visible light communications”, IEEE Journal of Selected Topics in Quantum Electronics, 25(6), 2000110, Nov.-Dec. 2019 (2019)
9. “A tutorial on laser-based lighting and visible light communications: device and technology”, Chinese Optics Letters, 17(4), 040601 (2019).
1. “High-power blue superluminescent diode for high CRI lighting and high-speed visible light communication”, Optics Express 26(20), 26355-26364 (2018).
2. “Investigation of self-injection locked visible laser diodes for high bit-rate visible light communication”, IEEE Photonics Journal, 10(4) 7905511 (2018)
3. “Light based underwater wireless communications”, Japanese Journal of Applied Physics, 57(8S2), 08PA06 (2018).
4. “375-nm ultraviolet-laser based non-line-of-sight underwater optical communication”, Optics Express, 26(10), 12870-12877 (2018).
5. “3.2 Gigabit-per-second Visible Light Communication Link with InGaN/GaN MQW Micro-Photodetector”, Optics Express, 26(3), 3037-3045 (2018).
6. “Semipolar InGaN quantum-well laser diode with integrated amplifier for visible light communications”, Optics Express, 26(6), A219-A226 (2018).
7. “71-Mbit/s Ultraviolet-B LED Communication Link based on 8-QAM-OFDM Modulation”, Optics Express, 25(19), 23267-23274 (2017).
8. “Gigabit-per-second white light-based visible light communication using near-ultraviolet laser diode and RGB phosphors”, Optics Express, 25(15), 17480-17487 (2017).
1. “Semipolar III-nitride quantum well waveguide photodetector integrated with laser diode for on-chip photonic system” Applied Physics Express, 10, 042201 (2017).
2. “True Yellow Light-emitting Diodes as Phosphor for Tunable Color-Rendering Index Laser-based White Light”. ACS Photonics, 3(11), 2089–2095 (2016).
3. “20-meter underwater wireless optical communication link with 1.5 Gbps data rate”, Optics Express, 24 (22), 25502-25509 (2016).
4. “High-speed 405-nm superluminescent diode (SLD) with 807-MHz modulation bandwidth”, Optics Express, 24 (18), 20281-20286 (2016).
5. “Carbon nanotube-graphene composite film as transparent conductive electrode for GaN-based light-emitting diodes”. Applied Physics Letters, 109, 081902 (2016).
6. “Ultrabroad Linewidth Orange-emitting Nanowires LED for High CRI Laser-based White Lighting and GigaHertz Communications.”, Optics Express, 24 (17), 19228-19236 (2016).
7. “Droop-Free, Reliable, and High-Power InGaN/GaN Nanowire Light-Emitting Diodes for Monolithic Metal-Optoelectronics”, Nano Letters, 16(7), 4616-4623 (2016)
8. “Perovskite Nanocrystals as a Color Converter for Visible Light Communication”, ACS Photonics, 3(7), 1150-1156 (2016)
9. “High brightness semipolar $\{20\overline{2}1\}$ blue InGaN/GaN superluminescent diodes for droop-free solid-state lighting and visible-light communications” Optics Letters, 41(11), 2608-2611 (2016).
1. "High-modulation-efficiency, integrated waveguide modulator-laser diode at 448 nm", ACS Photonics, 3 (2), pp 262–268 (2016).
2. "Facile formation of high-quality InGaN/GaN quantum-disks-in-nanowires on bulk-metal substrates for high-power light emitters", Nano Letters, 16 (2), pp 1056–1063 (2016).
3. "2 Gbits/s data transmission from an unfiltered laser-based phosphor-converted white lighting communication system", Optics Express, 23(23), 29779-29787 (2015).
4. “Achieving uniform carriers distribution in MBE grown compositionally graded InGaN multiple-quantum-well LEDs” IEEE Photonics Journal, 7(3), 2300209 (2015).
5. “Enabling area-selective potential-energy engineering in InGaN/GaN quantum wells by post-growth intermixing”. Optics Express, 23(6), 7991-7998 (2015).
6. “High-speed visible laser light communication: devices, systems and applications”, Proc. SPIE, 1171109, invited paper at SPIE Photonics West (2021).
7. “Blue Superluminescent Diodes with GHz Bandwidth Exciting Perovskite Nanocrystals for High CRI White Lighting and High-Speed VLC”, CLEO: Science and Innovations, SM3N. 4 (2019).
8. “Laser-based visible light communications and underwater wireless optical communications: a device perspective”, Proc. SPIE, 10939-13, SPIE Photonics West (2019).
9. “Study on laser-based white light sources”, Proc. SPIE, 10940-52, presented at SPIE Photonics West (2019).
10. “High power GaN-based blue superluminescent diode exceeding 450 mW”, IEEE International Semiconductor Laser Conference (ISLC 2018).
Advertisement
• The 6G photonics academician-led team is recruiting
• Talent sought in semiconductor chips, optoelectronics, optical communications and millimeter-wave, semiconductor materials, and wide-bandgap devices and systems
• Open positions include associate researcher, researcher, and more
• PhD students (direct-entry PhD, direct-entry graduate, application-assessment admission)
• Postdocs (Boxin Program, Shanghai Super Postdoctoral Program, Shenzhen talent programs)
• Engineers and senior engineers
• Work location: your choice of Beijing, Shanghai, or Shenzhen!
Thank you!
Chao Shen
13671584193
email@example.com
Boyd’s City Express of New York City was one of the first local posts operating in the US, and endured government pressure to close down longer than any other post. As a result, many stamps and covers were issued and serviced, thereby making it easy to develop a good collection of both stamps and covers.
**Brief History of the Post**
John T. Boyd opened his post for business on June 17, 1844 at 45 William Street, next to Wall Street, in downtown Manhattan. He advertised two deliveries daily, at 9 a.m. and 3 p.m., for two cents up to 26th Street. He also advertised deliveries to Brooklyn for three cents\(^2\) and letters to the press for free.\(^3\) In addition, Boyd advertised that he would handle money deliveries only if they were registered at his office. On such covers, the signature of “J. T. Boyd” is seen as the registry agent. On Sept. 30, deliveries increased to four per day, at 9, 12, 2 and 4 o’clock. Postage to Brooklyn was reduced to two cents.
During 1844, Boyd’s maintained business by delivering mail for independent mail companies, at first with Pomeroy’s Letter Express and Pullen & Co.’s Express, later with American Letter Mail Company, Well’s Letter Express, Hale & Co., and on occasion with other companies (Figure 1). Although he advertised the placement of 200 collecting stations (probably mail-boxes) from the beginning, conjunctive covers during the first few months of operation are seen more often than local delivery covers. Deliveries to the US post office for out of town delivery are rarely seen until early 1845.
Although Boyd’s initially delivered mail from out of town for independent mail companies, primarily Pomeroy’s, the Act of Congress effective July 1, 1845 largely eliminated the carriage of mail between cities except by postal workers or contractors. Boyd’s was apparently not permitted to pick up US mail from out of town for local delivery, although exceptions exist. Thus, Boyd’s postal business involved intra-city delivery of mail and delivery of letters to the Post Office (or to a PO collection box) from this time until around 1885.
---
\(^1\) Other sources provide a more detailed history of Boyd’s, notably Donald Patton’s book *The Private Local Posts of the United States* and Henry Abt’s unfinished series of articles in Robson Lowe’s *The Philatelist* in 1950. There have also been several articles in *The Penny Post* detailing Boyd’s stamps, postal history and history of operations.
\(^2\) Since Brooklyn and New York operated US post offices, the rate between the two cities was five cents for letter mail. Here Boyd’s is already competing with the post office, or at least attempting to compete. Letters carried by Boyd’s to Brooklyn in 1844-45 are scarce.
\(^3\) Greig’s 1842 New York City Despatch Post was the first to advertise free carriage of mail and newspapers to the editors of the “Public Press.”
Boyd’s increased its intracity and to-the-post-office business throughout 1845. At the same time, the government’s City Despatch Post was declining, and closed late in 1846 (Mead almost immediately re-opened the post under private management). Other posts sprang up in the 1844-45 time period that offered some competition with Boyd’s, including Cummings’ City Post, Dupuy & Schenck’s City Dispatch Post, Hanford’s Pony Express, Barr’s Manhattan Express Post and the Franklin City Dispatch. Later, Bouton’s City Dispatch, Hall & Mills Free Dispatch Post, New York City Express Post and Stone’s City Post joined in the competition for letter mail business in New York City. In January of 1847, Aaron Swarts opened his post in the old Chatham Square Branch of the New York post office. He later bought John Bouton’s post and became Boyd’s largest competitor.

In January 1849, the government returned to the city delivery business and placed 25 “stations” for the deposit of letters, which made four daily collections and deliveries, while introducing their simply designed “U. S. Mail/One Cent/Pre-Paid” stamp (Scott No. 6LB9). Boyd quickly advertised that he had over 1000 collection boxes, one in nearly every block below 50th Street. In 1849, Boyd’s introduced diecut stamps, reportedly in small boxes at a premium above their usual charge of two cents per stamp. Boyd recognized the convenience of stamp separation for firms who needed larger quantities to prepay their mail, and the number of diecut stamps on cover is testimony to their popularity. The US government did not routinely adopt perforations until 1857.
Frustrated in its attempts to gain the revenue and control of local delivery of letter mail, the US Congress passed an act on March 3, 1851, that designated the streets of New York City to be postal routes. The Franklin and Eagle carrier stamps (Scott Nos. LO1-
LO2) were intended to permit prepayment of local mail left in US boxes or with the carrier. Letters delivered from the mails were due two cents on delivery; the New York post office rarely if ever allowed private posts to pick up mail for local delivery during this period. However, Boyd boldly advertised in August of 1851 that he would continue his local delivery services, and ignored the 1851 Act. Others like Blood’s of Philadelphia did likewise.
The depression of 1857-58 probably set Boyd’s business back, but the economy recovered in 1859. John T. Boyd died on June 8, 1859, and his 17 year-old eldest son, John T. Boyd, Jr., took over the business. Unfortunately for Boyd’s, Joseph Holt was appointed Postmaster General in March, 1859; Holt was determined to eliminate the remaining private posts. He installed locked mail boxes on the streets of New York City in November, 1859, so that citizens could drop mail in them after the drug stores, stationers, and other places that collected mail for Boyd’s had closed for the day. He also recommended to Congress that the drop letter charge be dropped in favor of only a carrier charge to, from or through the post office.
In May, 1860, Boyd Jr. reduced the rate for the first time in its history to one cent for all classes of mail. At about the same time, Kochersperger, the new proprietor of Blood’s Penny Post in Philadelphia, defied the new governmental notice concerning post roads. The government took Kochersperger to court, but lost, because carrying mail on streets not used by mail carriers was deemed legal according to the law.
Nonetheless, the young Boyd closed his post on August 1, 1860, and sold it to William and Mary Blackham late in 1860. The Blackhams announced the re-opening of the post on Dec. 24, 1860 (Figure 2). The Blackhams restored the two cent fee for local delivery of mail, but provided a one cent rate for delivery to the post office and for circulars and magazines. The Blackhams subsequently relocated the office to 39 Fulton Street late in 1862, and began delivering rail and steamer timetables to its customers free of charge, paid for by advertising. The Blackhams made a brief exploration of the philatelic market by issuing gold stamps on colored papers (Scott Nos. 20L20-20L22.)
More importantly, they turned to bulk collection and delivery of circulars, bills, notices and pamphlets, and apparently began maintaining address lists. In 1864 Boyd’s introduced its first stamped envelopes, which are rare today. For reasons that are not clear, Boyd’s was able to continue its business of local mail delivery until US Government officials raided it on May 4, 1883 along with another competitor, George Hussey. Fines were imposed but the posts carried on their business. It appears that Boyd’s local delivery of mail ended around 1885. Hussey’s closed in 1890, while Boyd’s turned to the development and sales of mailing lists and address labels, and as of today they are still in business as an alumni search service.
**Boyd’s Rates Compared with NYC Post Office Rates**
The rate for Boyd’s mail service was two cents until May of 1860, when it was reduced to one cent. On re-opening of the post by the Blackhams, letters to the mails were delivered for one cent and local deliveries were performed for two cents. Probably in 1877, Boyd’s prepared subsequent issues with no value stated, and presumably reduced its rate to one cent. There may have been different rates for different classes of service, but no advertising or other documentation has been found.
It is useful to compare Boyd’s rates with those of the NYC Post Office. Roth has analyzed this subject thoroughly, and the following table abstracts the data relevant to the kinds of mail Boyd’s might handle in competing with the Post Office.\(^4\) We can refer to these categories of mail handling as (1) carrier pickup and delivery of local mail, (2) drop letters “to the Mails,” (3) left at the PO “for the Mails,” (4) drop plus carrier delivery, (5) drop letters for pickup at the PO by the addressee, (6) carrier pickup plus drop, and (7) carrier delivery “from the Mails.”
The New York Post Office charged more when a letter was dropped at its office for carrier delivery than if it were placed in a collection box for local delivery (until 1860). It only cost one cent from 1849-60 to leave an addressed letter in a US collection box for carrier delivery, yet it cost two cents to put it in the box for deposit at the Post Office for the addressee to pick it up as a drop letter. These rate differences certainly suggest that the least expensive way to deliver a letter from within New York City was to place the street address on it and leave it in a collection box!
At two cents, Boyd’s rate for local mail delivery was higher than the Post Office’s one cent rate from 1849-60. Yet Boyd’s maintained a prominent position in providing this service, as the comparatively larger number of surviving Boyd’s postal examples suggests.
\(^4\) Roth, SM. Summary of drop letter and carrier postal rates, New York City (1794-1885). *The Chronicle* 26(4): 210-212, (Nov) 1974.
| | 6/1/1794 to 6/30/1845 | 7/1/1845 to 6/30/1851 | 7/1/1851 to 4/2/1860 | 4/3/1860 to 6/30/1860 | 7/1/1860 to 6/30/1863 | 7/1/1863 to 6/30/1885 |
|----------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| Placed in collection box for carrier pickup and intra-city delivery | 2c | 2c (1c in Feb 1849) | 1c | 1c | 1c | 2c |
| Placed in collection box for out of town delivery | NPR | NPR + 1c | NPR | NPR + 1c | NPR + 1c | NPR |
| Left at PO for out of town delivery | NPR | NPR | NPR | NPR | NPR | NPR |
| Drop at PO for carrier delivery | 3c | 4c | 2c | 1c | 1c | 2c |
| Drop at PO for pickup by addressee | 1c | 2c | 1c | 1c | 1c | 2c |
| Placed in collection box for pickup at PO by addressee | 1c | 2c | 2c | 2c | 1c | 2c |
| From out of town for carrier delivery | NPR + 2c | NPR + 2c (NPR + 1c in Feb 1849) | NPR + 2c | NPR + 1c | NPR + 1c | NPR |
Table 1. NYC postal rates (NPR = normal postal rate). NPR varied by weight until July 1, 1845, when it became 5c for distances under 300 miles and 10c for greater distances; on July 1, 1851, it became 3c for prepaid letters under 3000 miles, 6c for prepaid letters over 3000 miles, 5c for unpaid letters under 3000 miles, and 10c for unpaid letters over 3000 miles; and on April 1, 1855, single letters were prepaid 3c for distances under 3000 miles and 10c for over 3000 miles.
During the 1845-60 period of ownership by John T. Boyd and later his son, the operation must have provided advantages for his customers. These probably included more frequent deliveries and therefore speedier service, and more collection boxes and carriers than the Post Office; the use of delivery times in handstamps; the availability of stamps for prepayment at hotels, drug stores and other box locations; the innovation of diecut stamps for the convenience of large customers; providing street addresses so that customers did not have to rely on directories published annually; secure collection boxes; reliable service; and the goodwill of the citizens who may have viewed the Post Office’s attempts to close private enterprise as bureaucratic and inappropriate.
**Stamps and Postage Stamped Envelopes Used by Boyd’s**
John T. Boyd designed an eagle on globe design that was used from 1844 until around 1867. The first 15 issues were printed in black on green surface-colored paper, except for the “social” gold on white printings (Scott Nos. 20L5 and 20L9), which were supposedly made for wedding announcements, invitations, and the like. For a few months in 1857, printings were made in red and orange on white paper (Scott Nos. 20L12
and 20L13), but apparently the green color was preferred and a new printing was made in green in 1857 (Scott No. 20L14.)
**Figure 3. 20L1 original used.**
**Figure 4. Forgeries of 20L1 by Taylor and Scott.**
Surviving examples of the first three issues, Scott Nos. 20L1-20L3, are not common. They were used in the early 1844-45 period when Boyd’s was just beginning to obtain local delivery business. Off-cover examples are less common than those on cover. (Figure 3). Figure 4 shows Taylor and Scott forgeries of 20L1.
**Figure 5. 20L4, showing early and late (worn) impressions.**
On the other hand, examples of 20L4 (1845-48) off and on cover are much more common (Figure 5 shows early and late impressions from the plate), and show both local delivery and “To the Mails” usages, with the latter being more desirable (Figure 6). A darker dull green printing was made around 1847, although it is not listed. Scott No. 20L7 replaced this stamp in mid-1848 and was in use until 1852. It is also easily obtainable.
Late in 1852, Scott No. 20L8 was prepared (Figure 7), and in mid or late 1854, 20L10 was produced. Both issues are often available from dealers. In December, 1855, 20L11 was used, followed by 20L13 in May 1856, and 20L12 in June 1856 (Figure 8).
These were replaced by 20L14 in early to mid-1857, so that copies of 20L11-20L13 were not in use very long and are difficult to obtain. No. 20L14 is commonly found on cover. However, only a single sheet of 100 of 20L14 is known today. In fact, most of the issues to this point are scarce in unused condition, so apparently few remainders existed.
**Figure 9. 20L15 “To the Mails” with 3c 1857, July 19, 1860.**
John Boyd, Jr., modified the 20L14 two cent plates when he reduced the rate to one cent in 1860 and made 20L15, but it was a sloppy job and every position is identifiable, some with much of the “S” of “CENTS” remaining. Large quantities of remainders in unused condition exist, including perhaps 50 sheets of 100. However, used specimens are scarce, since the period of use was only about two and a half months (Figure 9).
When the Blackhams took over, they introduced the two cent 20L16 and a one cent stamp with the same design, the latter issued in several shades grouped together as 20L17 black on lilac and 20L18 black on blue gray. In preparing the plate for 20L16, the top row was inverted, resulting in ten tete-beche vertical pairs from the sheet of 100 (10x10). Full sheets of 20L16 are rare, but tete-beche pairs are not (Figure 10). This error was corrected when the one cent plate was made. Scott’s catalog continued to list tete-beche pairs for 20L17-20L18 until the 2003 edition. However, in one position on the one cent plate, the “S” of “CENTS” was not erased, creating a scarce variety. In another position the “1” is inverted. These stamps were used from 1861 until around 1865 or 1866 (Figure 11).
Although the literature suggests that Scott Nos. 20L19-20L22 comprised a “philatelic” issue for collectors, 20L19 is much scarcer than the others, and could have been intended to emulate the earlier “social” gold on white stamps. On the other hand, verified used copies may not exist, so perhaps this issue is a trial color plate proof. 20L20-20L22 exist unused in almost all cases, and they apparently were printed from the 20L16 plate with top row inverted, so that tete-beche varieties exist; these are much scarcer than those of 20L16. It is interesting to note that this so-called “philatelic issue”
was printed from the same plate as 20L16, which is known to be used in early 1861. If the original plate was altered to form the one cent 20L17-20L18 plate, when and how were the so-called “philatelic issues” printed? Either they were printed early in 1861 along with 20L16, and then the plate was altered to make the one cent stamps, or the 20L16 plate was a different plate from the one cent plate. The lack of multiples of one cent stamps makes the determination difficult, so research needs to be done comparing the one cent stamps with the two cent stamps, looking for common plate position characteristics.
**Figure 12. 20LU4 used entire.**
It appears that the Blackhams issued Boyd’s first series of postal stationery in 1864, Nos. 20LU1-20LU11A. Unused entires are occasionally found, but used entires along with used cut squares are very rare (Figure 12).
The Blackhams’ next issue was 20L23 in 1866, using the same stones Boyd used to prepare 20L11-20L13. Sometime later, reprints of all of these stamps were made, but the plate for the reprints was different enough that reprints can be identified. The reprint plate is known as Plate C. Determining whether a given stamp is a reprint requires checking it against every position of Plate C.\(^5\) (Figures 13 and 14). Unused examples of 20L11 and 20L13 must always be checked, as most of those on the market are from the reprint Plate C.
At about the same time, the one cent stamps of 1861 were replaced with a new issue in two colors, Scott Nos. 20L24-20L25. These are very scarce in used condition. Reprints were made of both issues. The original 20L24 is on highly glazed paper with an ink that is grayish-black, while reprints are not very glazed and inked in a deeper black. Reprints of 20L25 are assumed to be the ungummed specimens of the original since no other identifying characteristics have been found. Gummed examples of 20L25 are originals or remainders of the originals.
Boyd’s covers from mid-1868 through most of 1877 are decidedly uncommon, and dated covers from this period are even more difficult to locate. During this time, Boyd’s issued a number of postage stamped envelopes, starting in 1867, and began to issue bank notices with their stamp design on them in 1874. These are all scarce in used condition, with the possible exceptions of 20LU13 and 20LU18. The 1880 bank notice, 20LU50, is common in unused condition due to a supply of remainders, but the rest of the notices are rare or unknown in unused condition (Figure 15).
\(^5\) These characteristics were worked out by Donald Patton in his book mentioned earlier, and reproduced by Larry Lyons in his *Identifier*, Volume I.
Figure 14. 20L13 reprint sheet, Plate C.
Figure 15. 20LU45, unused bank notice.
Figure 16. 20L44 on all-over advertising cover.
Scott No. 20L26 was adapted from the envelope design by boring out the address “39 Fulton St.” This was the first major design change from the eagle on globe theme, and has been referred to as the framed eagle design.\(^6\) A single sheet of 20L26 survives today and permits plating of individual copies, although it is doubtful that as many as 100 examples other than the sheet are known. (Hollowbush originally owned this sheet, and Perry was able to plate individual stamps from it.) In 1877, the same design but with “1 Park Place” added as the address was issued as 20L30-20L33 depending on perforation type (imperforate or perforated) and paper type (laid or wove). The first two (20L30-20L31) are imperforate, and the rest perforated in various gauges. The brown on yellow stamp (20L34) was briefly used and is hard to find in any condition. Covers of 20L34 are rare. After a short period, the denomination “2c” was removed, and the stamp was printed in various shades, perforations and papers as 20L35-20L36. Used stamps and covers from 20L26 through 20L43A almost always bear a black “PAID” in circle cancellation on the stamp. All of the issues from 20L26 through 20L43A used on cover are scarce to rare.
With the return of year-dated handstamps in 1877,\(^7\) it becomes easier to date Boyd’s stamps. The framed eagle design was replaced by the Mercury design in 1878. In all probability, 20L43A was issued before 20L43, with covers of the former known used mostly in July and August of 1878, and the latter mostly from August through October, 1878. The red and red-orange Mercurys, as well as a dull red-brown shade first listed in the 2003 catalog, are difficult to find used or unused, particularly in sound condition.
A variety of three designs, various perforations, and the use of wove or laid papers gave rise to the many pink and blue Mercury stamps, 20L44-20L56 (Figure 16). First appearing in 1879, the Mercurys were used until the raid in 1883, and even occasionally thereafter up until about 1885. The corresponding Mercury envelopes (20LU33-20LU44A) seem to have been used from 1879-1881.
The change in the nature of Boyd’s business in order to offset lost mail delivery revenues is readily observed in the printed bank notices for the Importers’ and Traders’ National Bank, currently listed as 20LU46-20LU53. Printed on one side only, they each carry the design of the current stamp in use from 1874 to 1885, and presumably were filled out by the bank for delivery by Boyd’s to the bank’s customers regarding transactions on their accounts. These items are more like postal cards than postal stationery. They are generally scarce and some varieties are not included in the 2003 Scott Specialized Catalogue.\(^8\) The National Park Bank notice, Scott No. 20LU54, is known only in unused condition, and is scarce. A long-unlisted but known Boyd’s
\(^6\) Bowman JD. Boyd’s framed eagles. *The Penny Post*. Vol. 8(5):4-13, October, 2000.
\(^7\) For a complete listing of Boyd’s handstamps, refer to Bowman JD and LeBel L, “A Comprehensive Survey of Boyd’s Postal Markings” in *The Penny Post*, Vol. 7(3):2-12, July, 1997, and Larry Lyons Identifier Vol. III for corrections to the article.
\(^8\) Bowman JD. Proposed Scott revisions to Boyd’s bank notices. *The Penny Post*. Vol. 9(3):4-16, July, 2001.
postcard for Gaff Fleischmann & Co. exists in used condition but is rare and sought after by postcard collectors as well as collectors of locals.
Later Boyd’s covers, with handstamps only or with stamps and handstamps, sometimes have address labels pasted to the cover. It is likely that these address labels were prepared by Boyd’s, as their mail business declined and they turned to preparing custom mailing lists and address labels for commercial mail.
**Common Reprints**
The collector has already been warned about reprints of 20L11-13 and 20L23-20L25 that exist and are often sold as originals. Aside from these, there are only a few other reprints of any significance for Boyd’s stamps.

Scott No. 20L8 has been reprinted in black on bright blue-green surfaced paper that is hard to separate from the originals. However, there are no dividing lines on this reprint nor other reprints, so originals should show the dividing lines between the stamps on one or more of the edges if cut large enough (Figure 17). In addition, the originals are on a true green surfaced paper. Another group of reprints of this stamp was made on a highly-glazed but dull green surfaced paper, with three distinguishing transfer varieties.
Most of the reprints of 20L11 occur on a pale green paper that seems to have faded with age and can be plated to Plate C. Harder to distinguish is another set of reprints from Plate C which are printed in colors very close to the original. These are always lighter in shade than the dull green originals.
The red on white 20L12 was not reprinted in its original color, but the dull orange on white 20L13 was. The reprint color is practically identical to originals, so that unused copies must be plated to determine if they are originals or reprints. Unused originals of 20L11-20L13 are very scarce. Used examples are also scarce, but are almost always authentic.
Fortunately for collectors, a fairly large number of original remainders of 20L23 exist in blocks and other multiples, which is not the case for 20L11-20L13, each rare as a multiple. The work of earlier students has established that three plates were prepared for these stamps. Plates A and B were used for originals, and Plate C was used for reprints. The later 20L23 is most interesting, because it was printed in a work-and-turn fashion from Plates A and B in three different arrangements or settings, giving rise to a number of tete-beche pairs, blocks and larger multiples, all of which are unused remainders. The work-and-turn printing method involved placing Plates A and B as A over B, A over inverted B, and B over A, for the three settings. As each plate consisted of a pane of 25 stamps (5x5), and the two panes were positioned differently on the printing stone in each setting, both vertical and horizontal tete-beche examples exist. When the first half of the sheet had been printed from the two plates, the sheet of paper was turned and the other half printed. As a result of this manual turning of the paper sheet, variations in spacing and offset between panes can be found in cross-gutter examples.
Figure 18. Scott reprint sheet of 10, tete-beche horizontally.
It has been reported that J. W. Scott obtained the envelope die for the first series of postage stamped envelopes, 20LU1-20LU11A, and made reprint envelopes. He also prepared sheets of ten and of four to make the numerous unused cut square examples on papers of several colors and laid lines (Figures 18 and 19). However, an acceptable system for telling the reprint entires from the unused original entires has not been developed, so collectors should be careful of unused entires of this series. That being said, reprint entires are scarce in their own right, unlike the cut squares. The Scott reprint cut squares are common and often offered as unused original cut squares, and collectors should generally regard these as reprints.
Figure 20. Forgery G, prepared in several colors, with values of 1c, 3c, 5c, 7c and 9c.
Aside from these reprints, numerous forgeries exist of every type, including fantasy denominations of 3c to 9c (Figure 20). These are not difficult to distinguish from the original stamps. In some cases, forgeries are much scarcer than the stamps they imitated, which is true for forgeries of many other local posts.
**Collecting Boyd’s**
A collector can form the nucleus of a Boyd’s stamp collection within a short time and without expending a lot of money. With patience, most of the listed stamps, except the trial color proofs, can be obtained.
The collector desiring a postal history collection of Boyd’s can readily find a number of covers, both stamped and stampless, to add to his collection. A variety of examples can be collected; for example, various combinations of handstamps and stamps, diecut stamps on cover, covers taken to the US post office by Boyd’s, conjunctive uses with other independent mail or local companies, Western express company mail from California to NYC delivered by Boyd’s, and even conjunctive uses with US post office carriers.
An advanced collector can attempt to identify and obtain the ten transfer types known for each of the framed eagle stamps. It is a challenge just to obtain unused and used specimens for each of 20L26-20L36, especially 20L28-20L31 which are rare. In addition, there are varieties, such as double transfers, incompletely erased transfers, printed on both sides, lithographic constant flaws, multiples, perforation variations, and so on, that exist but are not commonly recognized.
The collector of forgeries can have a field day with Boyd’s. Larry Lyons’ *Identifier* enumerates over one hundred types including odd denominations, and many of these exist in two or more colors. Here again, it would be quite challenging to build a collection including even 50% of all the types and colors.
Because Boyd’s was one of the most successful local posts, and lasted longer than any other, its stamps and covers are more common than those of many other local posts. The company has persisted until today by changing its business tactics (Figure 21). The collector should always be careful when purchasing unused stamps, as remainders, reprints and forgeries are plentiful for some stamps. Unused remainders of items such as 20L15, 20L23, 20L25, 20L56 and 20LU50 are common, and collectors should not expect to pay much for these.
AN INSTITUTION FOR THE PROMOTION OF CIRCULAR ADVERTISING AND FOR THE COMPILATION AND CLASSIFICATION OF LISTS OF ALL KINDS OF NAMES AND ADDRESSES FOR ALL PARTS OF THE CIVILIZED WORLD
CHANGES IN TITLE AND LOCATION OF THIS CONCERN DURING ITS HUNDRED YEARS OF EXISTENCE IN NEW YORK CITY 1823
Started in Business as BOYD'S CITY EXPRESS POST at 107 Broadway 1846
45 William Street 1867
BOYD'S DISPATCH POST 41 Fulton Street 1878
BOYD'S CITY DISPATCH 1 Park Place 1889
5 Madison Street 1892
16 Beekman Street 1909
19-21 Beekman Street 1911
During the years from 1846 to 1869, before the U.S. Government prohibited private posts, this concern issued its own stamps. We give attached herewith an original of interest to philatelists.
ORIGINAL BOYD STAMP 1846
Boyd's City Dispatch
DEPARTMENTS FOR
COMPILATION OF SPECIAL LISTS, ADDRESSING OF ENVELOPS, ENCLOSING & MAILING
REPRODUCTION OF NEWSPAPER LETTERS, TYPEWRITING
DELIVERY OF CIRCULAR AND SAMPLE MATTER IN NEW YORK
19-21 BEEKMAN STREET
LONG DISTANCE TELEPHONE, BEEKMAN 4540-4541
E. J. WILLIAMS
MANAGER
New York, December 5, 1911.
OUR LISTS ARE FORWARDED CHARGES PAID ON RECEIPT OF AMOUNT SPECIFIED
The Continental Iron Wks.,
Brooklyn, N. Y.
Dear Sir:
It is our desire to notify our customers of changes, Revisions and New Compilations, in which they are interested, as promptly as possible, but the great increase in our list business and number of customers, makes this more and more difficult.
The filling in and returning to us of attached blank will be appreciated by us in this respect.
We enclose advance information consisting of our Condensed Price List and State Tabulation. Our General Price List is now in preparation. Inquiries for detailed information concerning any list that may be desired, or for any other service rendered by this concern, will receive our immediate attention.
Yours very truly,
W.G. (2 encs.)
BOYD'S CITY DISPATCH. |
Bayesian species delimitation in *Pleophylla* chafers (Coleoptera) – the importance of prior choice and morphology
Jonas Eberle\textsuperscript{1}\textsuperscript{†}, Rachel C. M. Warnock\textsuperscript{2,3,4}\textsuperscript{†} and Dirk Ahrens\textsuperscript{1,2}\textsuperscript{*}
**Abstract**
**Background:** Defining species units can be challenging, especially during the earliest stages of speciation, when phylogenetic inference and delimitation methods may be compromised by incomplete lineage sorting (ILS) or secondary gene flow. Integrative approaches to taxonomy, which combine molecular and morphological evidence, have the potential to be valuable in such cases. In this study we investigated the South African scarab beetle genus *Pleophylla* using data collected from 110 individuals of eight putative morphospecies. The dataset included four molecular markers (*cox1*, 16S rRNA (*rrnL*), 28S, ITS1) and morphometric data based on male genital morphology. We applied a suite of molecular and morphological approaches to species delimitation, and implemented a novel Bayesian approach in the software iBPP, which enables continuous morphological trait and molecular data to be combined.
**Results:** Traditional morphology-based species assignments were supported quantitatively by morphometric analyses of the male genitalia (eigenshape analysis, CVA, LDA). While the ITS1-based delineation was also broadly congruent with the morphospecies, the *cox1* data resulted in over-splitting (GMYC modelling, haplotype networks, PTP, ABGD). In the most extreme case, morphospecies shared identical haplotypes, which may be attributable to ILS based on statistical tests performed using the software JML. We found the strongest support for putative morphospecies based on phylogenetic evidence using the combined approach implemented in iBPP. However, support for putative species was sensitive to the use of alternative guide trees and alternative combinations of priors on the population size ($\theta$) and root age ($\tau_0$) parameters, especially when the analysis was based on molecular or morphological data alone.
**Conclusions:** We demonstrate that continuous morphological trait data can be extremely valuable in assessing competing hypotheses of species delimitation. In particular, we show that the inclusion of morphological data in an integrative Bayesian framework can improve the resolution of inferred species units. However, we also demonstrate that this approach is extremely sensitive to guide tree and prior parameter choice. These parameters should be chosen with caution – if possible – based on independent empirical evidence, or careful sensitivity analyses should be performed to assess the robustness of results. Young species provide exemplars for investigating the mechanisms of speciation and for assessing the performance of tools used to delimit species on the basis of molecular and/or morphological evidence.
**Keywords:** Scarabaeidae, Aedeagus, Eigenshape analysis, Speciation, Phylomorphospace, Integrative taxonomy, Bayesian species delimitation
\* Correspondence: email@example.com; firstname.lastname@example.org
\textsuperscript{†}Equal contributors
\textsuperscript{1}Zoologisches Forschungsmuseum Alexander Koenig Bonn, Centre of Taxonomy and Evolutionary Research, Adenauerallee 160, 53113 Bonn, Germany
\textsuperscript{2}Department of Entomology, Natural History Museum, London SW7 5BD, UK
Full list of author information is available at the end of the article
Background
The identification and delimitation of species is one of the most crucial exercises in the assessment of biodiversity and in understanding the Tree of Life, because species occupy a central role in nearly all disciplines of biology. Species delimitation therefore has broad implications, from biological and ecological conservation, to comparative evolutionary analyses [1–4]. Despite the challenge and importance of defining species units, methods for delimiting species using independent sources of data (e.g., DNA and phenetic data) have only recently been proposed (e.g., [5–15]). Nevertheless, at least since Sneath and Sokal [16], there has been an extensive use of quantitative methods to infer similarity based on morphological traits. Broadly defined as “numerical taxonomy”, or phenetics, these methods have traditionally been used (and criticized) for inferring phylogenetic relationships (e.g., [17, 18]). However, integrative approaches to taxonomy shed new light on the utility of these methods, which have the potential to offer an independent, more reproducible way of inferring species limits [19].
In addition to controversy over the application of different species concepts and their impact for delimiting species [20], delimitation is expected to be especially challenging during the earliest stages of divergence, or speciation, when both molecular and morphological characters exhibit low levels of differentiation [21]. At this stage it can be extremely difficult to detect genetic isolation (i.e., the ultimate outcome of speciation) due to gene flow among populations and incomplete lineage sorting between species [22, 23]. Although molecular data can be useful for the rapid identification and delimitation of species, these processes can compromise the interpretation of the results. Incomplete lineage sorting – shared ancestral polymorphisms between species – can lead to perceived genetic similarity among phenotypically divergent species. Consequently, gene flow and incomplete lineage sorting can result in similar patterns among inferred gene trees [24–26]. To further complicate matters, introgressive hybridization – secondary gene flow between species – can also produce similar patterns among inferred gene trees (e.g., [27–29]).
A suite of new methods has been proposed that can incorporate incomplete lineage sorting in a multilocus framework for the estimation of species trees [30–33] and/or species delimitation [20, 33, 34]. Although these methods rely on the *a priori* assignment of individuals to pre-defined units (species or populations; [20]), they can be used to test explicit hypotheses of species delimitation. However, studies of recent radiations, or of other young species groups, will be characterized by uncertain species designations, and are likely to remain challenging.
In contrast to DNA-based taxonomy, common practice for the traditional taxonomic treatment of taxa is an assessment of the organism’s entire morphology. In most groups of insects this includes detailed examination of the copulation organs, which often undergo rapid morphological divergence, driven by sexual selection [35]. However, quantitative data on insect genitalia are rarely obtained for the purposes of integrative taxonomy, and so methods for combining this type of morphological information with molecular data are still underdeveloped [19]. Previously, the only available methods for delimiting species on the basis of morphology were clustering approaches [8, 9, 36, 37]. Unfortunately, these methods quickly lose power when too many species are included, or when dealing with specimens whose closest phylogenetic relatives are unknown [7, 14]. Here we use morphometric and molecular data in an integrative framework to delimit species in the scarab beetle genus *Pleophylla* Erichson, 1847. Following the recommendation of Carstens et al. [6], we implemented a suite of methods, including a recently developed approach that incorporates continuous morphological trait data with the multispecies coalescent [14, 34].
*Pleophylla* is a highly conspicuous genus, found only in isolated parts of the South African escarpment and the East African highlands. The genus belongs to the tribe Sericini (Coleoptera: Scarabaeidae), a highly diverse clade of herbivorous beetles with nearly 4,000 described species. The adults feed polyphagously on a variety of angiosperms, while the larvae feed on humus and plant roots in the upper soil layers. Morphological and molecular evidence has shown that the genus belongs to one of the earliest-branching lineages of the Sericini, together with its presumptive sister group, *Omaloplia*, in the eastern Mediterranean [38, 39]. Members of the genus exhibit extreme homogeneity in external morphology, and identification of species usually relies on examination of the male genitalia – a trait commonly used to distinguish between otherwise homogeneous species of insects [40], including most members of the tribe Sericini [41]. Current taxonomic classification recognises only three valid species ([42]; globalspecies.org/ntaxa/2359831; accessed Dec 13, 2015); however, an extensive survey and taxonomic revision of museum collections has identified 24 distinct morphospecies [43] (Eberle J, Beckett M, Özguel-Siemund A, Frings J, Fabrizi S, Ahrens D. Afromontane forests hide nineteen new species of ancient *Pleophylla* chafer (Coleoptera: Scarabaeidae): phylogeny and taxonomic revision, in preparation). The aim of our study was to provide a primer for the clarification of the taxonomy of this group, and to explore the power and limitations of morphological, molecular and combined approaches to species delimitation in an integrative framework for an apparently “complex” case study.
Methods
Taxon sampling and molecular data collection
A total of 110 individuals of eight putative morphospecies of the genus *Pleophylla* were collected from eight localities in South Africa (Additional file 1: Table S1-S2; Fig. 1). So far, all known species are endemic to South Africa and represent a limited selection of the morphological diversity of *Pleophylla* (Eberle J, Beckett M, Özguel-Siemund A, Frings J, Fabrizi S, Ahrens D. Afromontane forests hide nineteen new species of ancient *Pleophylla* chafer (Coleoptera: Scarabaeidae): phylogeny and taxonomic revision, in preparation). Four of these species have not been described yet, therefore we refer to all putative morphospecies using the same

**Fig. 1** **a** Maximum likelihood (RAxML) tree of *Pleophylla* for the combined molecular dataset. Specimens are colored according to morphospecies (Additional file 1: Table S1). Branch lengths correspond to substitutions per site. Support values for ML and Bayesian posterior probabilities are shown next to branches in grey (RAxML) or indicated below (PhyML/MrBayes). ITS1 GMYC clusters are indicated by an asterisk (*). **b** Map of South African sampling localities (Additional file 1: Table S2). **c** Bayesian species tree obtained using *BEAST*. Clade posterior probabilities are indicated next to branches. Confidence intervals (grey bars) show the upper limits of the 95 % HPDs obtained using a divergence rate for *cox1* of 2 % My$^{-1}$, and the lower limits obtained using a rate of 4 % My$^{-1}$. Mean node ages arbitrarily correspond to the mean estimates obtained using a rate of 2 % My$^{-1}$ (Additional file 1: Table S6). A cloudogram of 10,000 posterior samples shows the uncertainty in the inferred species tree, obtained using the program DensiTree [65]; different colours (blue, red, green) correspond to each consensus topology in the total set of trees
numerical format throughout the text for consistency. *Omaloplia nigromarginata* and *O. ruricola* from the putative sister lineage of *Pleophylla* [38] were included as outgroup taxa. We assessed support for the monophyly of putative morphospecies using standard molecular markers – the nuclear ribosomal 28S rRNA gene, the nuclear internal transcribed spacer 1 (ITS1), and the mitochondrial cytochrome oxidase subunit 1 (*cox1*) and 16S rRNA (*rrnL*) genes. Details of DNA extraction, sequencing, alignment and model selection are provided in the Additional file 1.
**Morphometric analysis**
The partial outline of the male’s left paramere (part of the intromittent genital organs, in dorsal view) (Additional file 1: Figure S1) was digitized from images captured under a microscope. The partial outline was extracted from 68 male specimens in which the paramere was well preserved. The outlines were resampled as a set of 150 semi-landmarks using tpsDig 2.1 [44]. Standard eigenshape analysis [45, 46] was performed in Eigenshape 2.6, as implemented in morpho-tools [47]. Of the 67 eigenshape axes produced, further analysis was performed on the four eigenshape axes that together explained 75% of the variation in the samples. Based on these informative eigenshape axes we performed a canonical variate analysis (CVA), grouping the samples according to the morphospecies assignments.
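The semi-landmark step amounts to resampling each digitized outline to a fixed number of points spaced equally along its arc length (tpsDig was used in this study; the Python sketch below, with the hypothetical function name `resample_outline`, only illustrates the idea):

```python
import math

def resample_outline(points, n):
    """Resample an open outline (list of (x, y) vertices) to n >= 2 points
    spaced equally along the outline's arc length (semi-landmarks)."""
    # cumulative arc length at each vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    out = []
    j = 0  # index of the segment containing the current target length
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (target - cum[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out
```

In this study each outline was reduced to 150 such points, i.e. `resample_outline(outline, 150)`.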
Model-based hierarchical clustering [37, 48] was applied to identify groups of individuals that resemble each other, independent of other evidence or *a priori* assignments, using the R package mclust 4.4 [36, 37]. The function *mclust* was used to evaluate the fit of all available clustering models to the morphometric data that explained 75% (eigenshape axes 1–4) and 95% (eigenshape axes 1–14) of total paramere shape variance. This method uses expectation maximization (EM) to estimate the maximum likelihood of alternative multivariate mixture models that describe shape variation in the morphometric data [49, 50], and estimates the optimal number of clusters based on the Bayesian Information Criterion (BIC) [51]. All models were evaluated for a predefined number of 1 to 20 clusters and the best-fit result was used for further analyses.
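mclust selects among candidate mixture models by BIC, which it reports as $2\log L - k\ln n$ so that higher values are better. The arithmetic of this selection step can be sketched as follows (the function names and the example log-likelihoods are illustrative placeholders, not values from this study):

```python
import math

def bic_mclust(loglik, n_params, n_obs):
    """BIC in mclust's convention: 2*logL - k*ln(n), where higher is
    better (the textbook convention flips the sign)."""
    return 2.0 * loglik - n_params * math.log(n_obs)

def best_model(candidates, n_obs):
    """candidates: iterable of (label, loglik, n_params) for fitted
    mixture models; returns the label of the model with the highest BIC."""
    return max((bic_mclust(ll, k, n_obs), label)
               for label, ll, k in candidates)[1]
```

For example, `best_model([('EII,2', -120.0, 5), ('VVV,3', -115.0, 12)], 68)` weighs a small log-likelihood gain against the heavier parameter penalty for the 68 specimens.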
To assess the fit of the *a priori* morphospecies assignments and the hierarchical clusters found using mclust to the data, we performed a linear discriminant analysis based on the respective specimen groupings and calculated the probability of group membership for each individual. This was done using the R package MASS 7.3.35 [52]. The prior probability that a specimen belonged to a given group was set to be equal for all individuals and groups.
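With equal priors, the probability of group membership reduces to Bayes’ rule over Gaussian class-conditional densities sharing a pooled variance. A one-dimensional sketch of this calculation (the study used the `lda` function of MASS on four eigenshape axes; `lda_posteriors` here is a hypothetical illustration):

```python
import math

def lda_posteriors(x, group_means, pooled_var, priors=None):
    """Posterior probability that observation x belongs to each group,
    under LDA assumptions: Gaussian groups sharing one pooled variance.
    The common normalising constant of the densities cancels out."""
    k = len(group_means)
    if priors is None:
        priors = [1.0 / k] * k  # equal priors for all groups, as in the study
    dens = [math.exp(-(x - m) ** 2 / (2.0 * pooled_var)) for m in group_means]
    weighted = [p * d for p, d in zip(priors, dens)]
    total = sum(weighted)
    return [w / total for w in weighted]
```

An observation at 0.9 between group means at 0.0 and 1.0, for instance, is assigned mostly to the second group.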
Finally, to investigate the impact of phylogeny on the inferred morphospace, the RAxML tree topology (based on the partitioned combined molecular dataset) was projected onto the paramere morphospace (eigenshape axes 1 and 2) using the function *phylomorphospace* in the R package phytools [53]. This function estimates the positions of the ancestral nodes using a maximum likelihood approach [53]. In addition, a three-dimensional version of this plot was produced based on eigenshape axes 1, 2 and 3 using the function *phylomorphospace3d*. The code was modified to make coloration for species group affiliation possible.
**Phylogenetic analysis**
Phylogenetic analyses of individual and combined markers were performed using likelihood and Bayesian methods. Each analysis was run with the substitution model and partitions selected using PartitionFinder [54] (Additional file 1: Table S3). Unpartitioned maximum likelihood analysis was performed using PhyML 3.0 [55], and partitioned maximum likelihood analysis was performed using RAxML 7.3 [56, 57]. Bayesian phylogenetic analysis was performed using MrBayes 3.1.2 [58]. The default prior on branch lengths implemented in MrBayes can sometimes lead to spuriously large estimates of internal branch lengths [59, 60]. Because the GMYC approach to species delineation is sensitive to estimates of branch lengths, we ran four sets of analyses using an exponential prior on the branch lengths with mean = 0.1 (default), 0.05, 0.01 or 0.005 substitutions/site.
**Bayesian species tree estimation**
The multispecies coalescent was implemented in *BEAST* 1.7.5 [31, 61, 62] to co-estimate the species tree, individual (*cox1* and ITS1) gene trees and divergence times. The less informative ribosomal markers were excluded because analysis in *BEAST* that included *rrnL* and 28S failed to converge, despite extensive efforts to improve convergence diagnostics. Putative morphospecies were used to define taxonomic units *a priori* – all 14 female individuals, for which there was ambiguity regarding *a priori* species assignment, were excluded from the analyses (Additional file 1: Table S1). The mean substitution rate of *cox1* was fixed, clock model parameters were unlinked across genes, and the rate of ITS1 was estimated relative to *cox1*. Estimates for the substitution rate of *cox1* among insect species vary substantially across different studies, and are dependent on a large number of variables [63, 64]. We therefore applied a range of mean branch rates, in five independent sets of analyses (2, 2.5, 3, 3.5 or 4% My$^{-1}$). The resulting posterior sample of species trees was additionally visualized with DensiTree [65]. Further details of all phylogenetic
analyses, including prior parameter and chain settings, are provided in the Additional file 1.
**Distinguishing incomplete lineage sorting from hybridization**
To assess whether low genetic variation observed among morphospecies could be attributed to incomplete lineage sorting, we used the posterior predictive checking approach developed by Joly et al. [66] and implemented in the software JML [67]. This approach uses simulated datasets of gene trees and sequence alignments generated under a coalescent model that assumes no migration (or hybridisation) for a given species tree. The proportion of simulated datasets for which the minimum pairwise distance is lower than the observed, can be interpreted as the posterior probability ($P$) that the model is correct. A small $P$ value therefore suggests that a model that assumes no hybridization does not fit the data well (e.g., the observed minimum genetic distances are lower than expected). To account for uncertainty, simulations were performed for individual partitions using 10,000 trees from the posterior distribution of species tree output by *BEAST*, which include estimates of population size and branch lengths. Further details of the simulations are provided in the Additional file 1.
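The posterior predictive $P$ value described here is simply the fraction of simulated datasets whose minimum pairwise distance falls at or below the observed minimum. A minimal sketch (JML itself performs the coalescent simulations; `posterior_predictive_p` is a hypothetical name):

```python
def posterior_predictive_p(observed_min_dist, simulated_min_dists):
    """Proportion of posterior-predictive simulations whose minimum
    pairwise sequence distance is <= the observed minimum. A small P
    means the observed distance is lower than expected under a
    coalescent model without hybridisation."""
    hits = sum(1 for d in simulated_min_dists if d <= observed_min_dist)
    return hits / len(simulated_min_dists)
```

In the study, the simulated minima came from 10,000 posterior species trees; here any list of simulated values will do.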
**DNA-based species delimitation**
For single marker species delimitation (*cox1* and ITS1) we used four widely implemented approaches: statistical parsimony analysis [68], automated barcode gap detection (ABGD) [69], the generalized mixed Yule-coalescent (GMYC) model [12, 70, 71], and the Poisson tree processes (PTP) model [72]. Outgroup species (*Omaloplia*) and specimens with duplicate haplotypes were pruned from the dataset (or tree) prior to analysis, because otherwise some methods have been shown to produce false positives [73].
Haplotype networks for each individual marker were generated using statistical parsimony analysis [68] implemented in TCS 1.2 [42]. Statistical parsimony analysis partitions the data into networks of closely related haplotypes connected by changes that are non-homoplastic with a 95 % probability; when applied to mtDNA, the inferred networks have been found to be largely congruent with Linnaean species [74]. The GMYC model [12, 70, 71] was used to estimate species boundaries with the trees obtained from MrBayes and RAxML using the R package *splits* [70], with single and multiple threshold options. This method is based on the phylogenetic species concept and identifies species clusters by recognising the apparent increase in the branching rate at the transition from interspecific diversification to population-level coalescence, defining the threshold on an ultrametric tree. Trees were converted to ultrametric using PATHd8 [75] and the penalized likelihood method implemented in r8s 1.7 [76], with the optimal smoothing parameter selected using the cross-validation procedure. The ingroup was assigned an arbitrary age of 1, and the resultant trees were fully resolved using TreeEdit 1.0 [77], with an arbitrary branch length of $4 \times 10^{-6}$. Finally, we estimated uncertainty in the number of GMYC species clusters based on the Akaike Information Criterion (AIC), using the method outlined in [78]. This approach uses a modified AIC score, corrected for sample size (AIC$_c$), to assess the relative support for alternative (single and multiple threshold) models, versus the maximum likelihood model, and the null model (no change in the branching rate). Akaike weights (the relative support for each model) are assigned to each model based on the AIC$_c$ scores. Model-averaged estimates of the number of GMYC species are obtained from the models within $\Delta$AIC$_c = 2$.
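The model-averaging step can be sketched as follows: each model's AIC$_c$ is converted into an Akaike weight, and the models within two AIC$_c$ units of the best score are retained. The helper functions below are illustrative, not the code of [78]:

```python
import math

def aicc(loglik, k, n):
    """AIC corrected for sample size n, with k free parameters."""
    return -2.0 * loglik + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

def akaike_weights(scores):
    """Relative support for each model, given its AICc score."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

def within_delta(scores, delta=2.0):
    """Indices of the models within delta AICc units of the best model."""
    best = min(scores)
    return [i for i, s in enumerate(scores) if s - best <= delta]
```

Model-averaged species counts are then weighted means over the retained models, using the corresponding Akaike weights.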
The phylogenetic species concept also underlies the Poisson tree processes (PTP) model for species delimitation [72]. However, in contrast to the GMYC approach, the PTP infers speciation events based on a shift in the number of substitutions at internal nodes. We employed the maximum likelihood variant of PTP using the RAxML trees. For the ABGD approach we used the online version (last modified on 10/29/2015 and accessed on 01/23/2016, http://wwwabi.snv.jussieu.fr/public/abgd/abgdweb.html, [69]). This method is based on the assumption that divergence among organisms belonging to the same species will be less than the divergence observed among organisms of different species. The first significant gap in the distribution of sequence distances beyond intraspecific sequence divergence can thus be used to infer operational taxonomic units (OTU) that may be related to species (e.g., [79]). ABGD analyses were performed on matrices of pairwise sequence divergence, calculated for each marker using MEGA (v6.06, [80]). Distances were corrected using the best fitting substitution models. Prior maximum divergence of intraspecific diversity was set to 0.01, which has previously been demonstrated to recover species accurately [69].
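The barcode-gap idea behind ABGD can be caricatured as scanning the sorted pairwise distances for the first large jump beyond the assumed maximum intraspecific divergence (0.01 here, matching the prior used above). This toy sketch deliberately ignores ABGD's recursive partitioning and slope-based gap detection:

```python
def first_gap(distances, prior_max_intra=0.01, min_gap=0.005):
    """Locate a crude 'barcode gap': the first jump larger than min_gap
    between consecutive sorted pairwise distances, beyond the assumed
    maximum intraspecific divergence. Returns the midpoint of the gap
    as a clustering threshold, or None if no such gap exists."""
    d = sorted(distances)
    for a, b in zip(d, d[1:]):
        if a >= prior_max_intra and (b - a) > min_gap:
            return (a + b) / 2.0
    return None
```

Pairs of sequences closer than the returned threshold would be grouped into the same operational taxonomic unit.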
Finally, the results of competing approaches to species delimitation were compared using the “entities counts” (i.e., inferred species counts) and the match ratio = $2 \times N_{\text{match}} / (N_i + N_{\text{morph}})$, where $N_{\text{match}}$ is the number of species with exact matches (i.e., all specimens of a given morphospecies – and only those – belong to a single GMYC entity) and $N_i$ and $N_{\text{morph}}$ are the number of inferred molecular operational taxonomic units (MOTUs) and morphospecies, respectively [73]. If there is complete congruence between the MOTU entities and the morphospecies the match ratio = 1, otherwise the ratio will be < 1.
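The match ratio defined above can be computed directly from the two partitions of specimens. A small sketch (hypothetical function name; group labels are arbitrary):

```python
def match_ratio(motus, morphospecies):
    """2 * N_match / (N_motu + N_morph), where N_match counts the
    morphospecies whose specimen set coincides exactly with one MOTU.
    Both arguments map specimen id -> group label."""
    motu_sets, morph_sets = {}, {}
    for spec, g in motus.items():
        motu_sets.setdefault(g, set()).add(spec)
    for spec, g in morphospecies.items():
        morph_sets.setdefault(g, set()).add(spec)
    motu_list = list(motu_sets.values())
    n_match = sum(1 for s in morph_sets.values() if s in motu_list)
    return 2.0 * n_match / (len(motu_sets) + len(morph_sets))
```

Complete congruence between the MOTUs and the morphospecies gives 1; any lumping or splitting reduces the ratio.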
**Total-evidence species delimitation**
We assessed support for the *a priori* morphospecies assignments using a total-evidence-based Bayesian approach, implemented in the programs iBPP 2.1.2 [14] and BPP 3.0 [20, 34]. Briefly, this method uses a multispecies coalescent model to assess competing hypotheses of species delimitations, allowing for conflict between gene and species trees. The results are conditioned on a user-specified guide tree and depend on estimates of the species divergence times ($\tau$) and population sizes ($\theta$). Individuals are assigned to independent populations and alternative delimitation hypotheses are proposed by collapsing one or more internal nodes in the guide tree. In the original implementation, the likelihood calculation is based on molecular data [34], while iBPP includes an extension of the model that allows continuous trait data to be included in the likelihood calculation [14]. This latter approach therefore enables both molecular and morphological data to be combined in the assessment of *a priori* species assignments.
It has been demonstrated that the results of this method can be sensitive to both prior parameter and guide tree choice [81]. For example, for high values of $\theta$ the model tends to (over-)split species, and for low values of $\theta$ the model tends to lump species together. To assess the robustness of our results, we compared the results obtained under variable combinations of the specified priors on the root age ($\tau_0$) and the population mutation rate ($\theta$) (Table 1). To assess the influence of the guide tree, we compared the results obtained using three alternative input trees: (a) the topology estimated using *BEAST*, (b) the topology estimated from the concatenated DNA matrix using RAxML/MrBayes, and (c) a modified version of the *BEAST* topology based on morphological similarity among species (Additional file 1: Figure S2). All combinations of prior parameter (Table 1) and guide tree choices were run in iBPP (a) without data, to evaluate the impact of the priors, and with the following three datasets: (b) molecular data only, (c) morphometric data only, and (d) combined molecular and morphometric data. The analysis sometimes got stuck in a single-species model, resulting in poor overall convergence; all analyses were therefore repeated 10 times with different random seeds to ensure stability of the results.
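The scale of this robustness protocol follows from the full factorial design over priors, guide trees, datasets and replicate seeds, which can be enumerated as follows. The prior and tree labels are placeholders; the actual prior values are those given in Table 1.

```python
import itertools

# Placeholder labels for the factors varied in the robustness protocol.
tau0_priors = ["tau_A", "tau_B"]                      # root-age priors (hypothetical count)
theta_priors = ["theta_1", "theta_2", "theta_3"]      # population-size priors
guide_trees = ["beast", "concat", "morph-modified"]   # guide trees (a), (b), (c)
datasets = ["none", "molecular", "morphometric", "combined"]
n_seeds = 10  # repeated runs with different random seeds

runs = [
    {"tau0": t, "theta": th, "tree": g, "data": d, "seed": s}
    for t, th, g, d, s in itertools.product(
        tau0_priors, theta_priors, guide_trees, datasets, range(n_seeds)
    )
]
# 2 * 3 * 3 * 4 * 10 = 720 individual analyses under these assumed counts
```

Each element of `runs` would correspond to one iBPP control file; a driver script can then launch the program once per combination.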
In an additional set of analyses, we implemented unguided species delimitation using the program BPP [20]. This method accounts for uncertainty in the guide tree, by proposing changes to the species tree topology using nearest-neighbour interchange (NNI), as well as proposing changes to species assignments. Morphometric data cannot be analysed in BPP, so this analysis was performed for the molecular dataset only. The analyses were performed using the above combinations of priors and initial guide tree choices.
To explore the impact of distinct single-marker genotypes within the same morphospecies, in combination with the morphological trait data, we also analysed an additional guide tree with guided and unguided BPP, in which *sp10* was specified as two species entities (this split received strong support in several single-marker delimitations; see Results).
**Results**
**Phylogenetic analysis and the monophyly of morphospecies**
Phylogenetic analysis of independent and combined datasets using different approaches and parameter choices (PhyML, RAxML, and MrBayes) produced overall similar topologies (Fig. 1, Additional file 1: Figures S3-S5). Changing the branch length prior implemented in MrBayes had no impact on the inferred topology but had a large impact on tree length (the sum of branch lengths) (Additional file 1: Table S4). Analysis of different datasets (mitochondrial, nuclear or combined) mainly differed in their degree of tree resolution, and the level of support for the monophyly of individual morphospecies and/or interspecific relationships. Remarkably low interspecific molecular variation was observed across the entire genus. The trees produced using the ribosomal markers (*rrnL* and 28S) were poorly resolved. The *cox1* data provided better resolution and supported the monophyly of two out of eight putative morphospecies. ITS1 provided the best resolution and supported the monophyly of all but two morphospecies (*sp01* and *sp02*) (Additional file 1: Figure S3-S5).
The topology obtained using the combined dataset that included all four markers was identical to the ITS1 gene tree (Fig. 1), but support values for most nodes were greater than those obtained using individual genes. In the combined analyses of all four markers, the monophyly of all putative morphospecies was strongly supported with the aforementioned exception. Morphospecies *sp01* and *sp02* were never recovered as monophyletic, although these groups occupied distinct areas of the morphospace in the
morphometric analysis of the genitalia (Fig. 2, Additional file 1: Figure S1, Additional file 2).
The Bayesian species tree estimated using *BEAST* for the combined *cox1* and ITS1 dataset resulted in strong support for the interspecific relationships estimated using the *cox1* data, rather than the ITS1 data. Although the species tree topology differed to that obtained using alternative phylogenetic methods (PhyML, RAxML and MrBayes), the individual gene trees (*cox1* and ITS1) obtained using *BEAST* were not different. The age of the most recent common ancestor of the sampled members of the genus was estimated to be 2.64 – 35.97 and 3.69 – 17.88 Mya, based on the 95 % highest posterior density intervals for the slowest and fastest *cox1* substitution rates (2 and 4 % My$^{-1}$), respectively (Additional file 1: Table S5). The use of a higher *cox1* substitution rate produced younger and, unexpectedly, more precise posterior age estimates. The ages for the two youngest divergence events (*sp01* + *sp02* and *sp06* + *sp10*) were estimated to be no older than 0.17 Mya and 0.65 Mya, respectively (Additional file 1: Table S6).
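The direction of the rate effect above (faster rate, younger ages) follows from the standard conversion of pairwise sequence divergence to time under a strict clock. A minimal point-estimate sketch, using a hypothetical 10 % divergence rather than any value from our dataset:

```python
def divergence_age(pairwise_divergence, rate_per_my):
    """Point estimate of a divergence time (Mya) from uncorrected pairwise
    divergence under a strict clock, where the rate is expressed as pairwise
    divergence accumulated per million years (e.g. 0.02 for 2 % My^-1)."""
    return pairwise_divergence / rate_per_my

# Hypothetical 10 % cox1 divergence between two morphospecies:
slow = divergence_age(0.10, 0.02)  # ~5.0 Mya under the slow (2 % My^-1) rate
fast = divergence_age(0.10, 0.04)  # ~2.5 Mya under the fast (4 % My^-1) rate
```

Doubling the assumed rate halves the inferred age, which is why the 4 % My$^{-1}$ calibration yields systematically younger estimates than the 2 % My$^{-1}$ calibration (the Bayesian intervals reported above additionally integrate over topological and coalescent uncertainty).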
Evidence of hybridization was assessed using the posterior predictive checking approach as implemented in the software JML [67], based on the minimum pairwise sequence distances among morphospecies for each marker partition (*cox1* [P1 vs. P2 vs. P3], and ITS1), and the resulting posterior probability ($P$) of observing these distances under the multispecies coalescent model assuming no hybridization (Additional file 1: Table S7). In all cases the observed pairwise distances between individuals of all morphospecies were not lower than expected at the 5 % level ($P > 0.05$), given the null model (the coalescent with no migration or hybridization) across all partitions (*cox1* P1 and P2, $P > 0.1$; P3, $P > 0.05$; ITS1, $P > 0.2$). The distances observed between individuals of the two species pairs that could not be resolved using *cox1* (*sp06* + *sp10*) or both *cox1* and ITS1 (*sp01* + *sp02*) were not lower than expected for either marker (i.e., *sp06* + *sp10*, $P > 0.2$; *sp01* + *sp02*, $P > 0.6$). The tests performed suggest that incomplete lineage sorting is sufficient to explain the observed genetic variation (although mitochondrial partition P3 produced anomalous results for *sp09* and *sp11*, see Additional file 1: Table S7).
**Molecular tree- and character-based species delimitation**
We investigated DNA-based species delimitation and associated uncertainty using (i) statistical parsimony, (ii) the GMYC model, (iii) the PTP model, and (iv) the ABGD approach. The analyses using the *rrnL* and 28S data did not provide support for any of the putative morphospecies (results not shown). Of the 13 resulting *cox1* networks, three matched exclusively a single putative morphospecies. ITS1 networks provided a closer correspondence to the morphospecies. Of the 9 ITS1 networks, four matched exclusively a single putative morphospecies: *sp09*, *sp11*, *sp12* and *spX2*. Individuals of morphospecies *sp06* were split between two networks, as were individuals of morphospecies *sp10*. Individuals of morphospecies *sp01* and *sp02* shared a single network. Together these results suggest that there is a higher degree of incomplete lineage sorting in *cox1* than in ITS1, and that species *sp01* and *sp02* cannot be distinguished on the basis of the molecular markers used here.
The GMYC results obtained using *cox1* were very sensitive to the input tree, but there were no obvious differences in the GMYC output attributable to the trees generated using MrBayes versus RAxML, or PATHd8 versus r8s (Additional file 1: Table S8). Bayesian trees with longer branch lengths tended to result in more GMYC entities (species clusters + singletons), though not universally. Consequently, the *cox1* trees produced very variable results. In most cases several (up to 8) models contributed to a majority of the Akaike weight (> 0.5), suggesting that no single model best represented the data. Accounting for uncertainty in model selection resulted in the number of entities ranging between 3.00 ($\sigma^2 = 0$) and 16.54 ($\sigma^2 = 0.89$), depending on the input tree; these GMYC units were widely incongruent with the *a priori* morphospecies assignments (further details not shown). There was less variation in the GMYC results obtained using the ITS1 trees: the single-threshold models were always preferred to the multiple-threshold models. In the majority of cases only one single-threshold model was found within \(\delta AIC_c = 2\), suggesting that the preferred model provided an appreciably better fit to the data than the alternatives. The ITS1 data resulted in a minimum of 8 \((\sigma^2 = 0)\) and a maximum of 10.99 \((\sigma^2 = 4.05)\) entities, depending on the input tree. In 8 out of 10 cases, the preferred model resulted in eight entities, corresponding to morphospecies \(sp01 + sp02\), \(sp06\), \(sp09\), \(sp11\), \(sp12\), \(spX2\), and two clusters of morphospecies \(sp10\).

**Table 2** DNA-based species delimitation results

| | cox1: PTP | cox1: GMYC | cox1: TCS | cox1: ABGD | ITS1: PTP | ITS1: GMYC | ITS1: TCS | ITS1: ABGD |
|---|---|---|---|---|---|---|---|---|
| Entities | 7 | 13 | 13^a | 11 | 8 | 8 | 9 | 8 |
| Match ratio | 0.27 | 0.29 | 0.30 | 0.42 | 0.63 | 0.63 | 0.47 | 0.63 |

The number of delimited entities and the match ratio \((2 \times N_{\text{match}}/(N_{\text{MOTU}} + N_{\text{morph}}))\) [73], after removing undetermined specimens, is given.

^a Contained one MOTU composed only of female specimens; this unit was not considered for match ratio estimation.
In general, congruence between the inferred MOTUs and the morphospecies was more dependent on marker choice than on species delimitation method (Table 2, Additional file 2: Table S9). For \(cox1\) the number of MOTUs ranged from 7 (PTP) to 13 (GMYC), while the analyses based on ITS1 resulted in 8 (GMYC, PTP, ABGD), 9 (TCS) and 10 (GMYC) entities. The PTP and ABGD analyses largely confirmed the results of the GMYC model for the ITS1 data; five of the eight MOTUs were fully congruent with the morphospecies \((sp11, spX2, sp09, sp06, sp12)\). Finally, the match ratios obtained for \(cox1\) were consistently lower (0.27–0.42) than those obtained using ITS1 (0.47–0.63) (Table 2).
**Morphometric evidence for species delimitation**
We first assessed quantitative support for the eight putative morphospecies assignments among *Pleophylla* based on an open shape outline of the left paramere of the male genitalia, using (i) standard eigenshape analysis, (ii) canonical variate analysis (CVA), (iii) hierarchical clustering, and (iv) linear discriminant analysis. The first four eigenshape axes represented 75 % of the cumulative variation of the outline shape (Additional file 1: Table S10, Figure S6). Eigenshape axes 1, 2, 3 and 4 represented 51.5 %, 15.6 %, 6.8 % and 6.0 % of the variation, respectively. The first 14 eigenshape axes accounted for 95 % of the cumulative variation. The plots of the 2D and 3D phylomorphospace (Fig. 2, Additional file 5) showed clear separation between all but one of the morphospecies, with no intermediate states between the morphospecies. The only exception was \(sp12\), which overlapped in morphospace with \(sp02\). CVA on eigenshape axes 1–4 (Additional file 1: Figure S1) revealed a clear distinction between five of the eight morphospecies \((sp01, sp02, sp06, sp10\) and \(sp11\)), with the exception of those for which only one or two specimens were available for analysis \((sp09, sp12\) and \(spX2\)). This was in contrast to the DNA-based tree topology and species delimitation, where specimens of two species pairs \((sp01 + sp02\), and \(sp06 + sp10\)) could not be distinguished based on the analysis of \(cox1\) and/or ITS1.
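The cumulative-variation figures quoted above follow from a running sum over the per-axis proportions of shape variance. A small sketch, using the four reported leading proportions plus an invented uniform tail chosen so that 14 axes reach the 95 % mark (the tail values are not from our analysis):

```python
def axes_for_threshold(proportions, threshold):
    """Number of leading eigenshape axes needed to reach a given
    cumulative proportion of total shape variation."""
    running = 0.0
    for i, p in enumerate(proportions, start=1):
        running += p
        if running >= threshold:
            return i
    return len(proportions)

# Leading per-axis proportions as reported in the text, followed by a
# hypothetical uniform tail for the remaining axes (assumption).
props = [0.515, 0.156, 0.068, 0.060] + [0.0155] * 13
n95 = axes_for_threshold(props, 0.95)  # 14 leading axes under these values
```

In practice the eigenvalues from the eigenshape decomposition supply the proportions, and the same threshold rule determines how many axes to carry into downstream CVA or LDA.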
Hierarchical model-based cluster analysis [37] can identify unique morphological clusters of individuals without requiring *a priori* species assignments (e.g., [8]). The results of this analysis were extremely sensitive to the model choice (Fig. 3). Different mixture models favoured strikingly different numbers of clusters (e.g., 9, 7, 5, and 3 clusters were found for eigenshape axes 1–4 under different models) (BIC, Fig. 3a). The best model obtained for eigenshape axes 1–4 (the ellipsoidal, equal shape model; VEV) resulted in 3 clusters, but only morphospecies \(sp11\) and \(sp10\) (with the exception of one individual) were recovered as independent unique clusters. The best-fit model obtained for eigenshape axes 1–14 (the diagonal, varying volume, equal shape model; VEI) resulted in 12 clusters (Fig. 3b), with all morphospecies recovered in more than one group, with the exception of the singletons and \(sp06\); the latter was recovered together with individuals of \(sp09\) and \(spX2\).
Linear discriminant analysis (LDA) with respect to the *a priori* defined morphospecies fully recovered one of the eight species (\(sp11\); 100 % of individuals) based on eigenshape axes 1–4 (Fig. 3, Additional file 1: Table S11). Two of the eight morphospecies were fully recovered with the LDA based on eigenshape axes 1–14 (\(sp10\) and \(sp11\); 100 % of individuals); the remaining morphospecies were recovered for 50–92 % of individuals. LDA with respect to groups identified by the model-based cluster analysis recovered all three clusters correctly based on eigenshape axes 1–4 (Fig. 3, Additional file 1: Table S12). Finally, LDA on clusters from the second analysis, based on eigenshape axes 1–14, recovered all but two of the groups for 100 % of individuals.
**Bayesian species delimitation**
The total-evidence approach to Bayesian species delimitation [14, 34] provided strong support for the *a priori* defined morphospecies, however, for independent data types (molecular versus morphometric), the results were sensitive to the priors on the root age \((\tau_0)\) and population size \((\theta)\) parameters (Fig. 4, Additional file 3: Table S13). Broadly, posterior probabilities (i.e., support for species delimitations) increased in the integrative analyses that combined molecular and morphological trait data (Fig. 4). While results were sensitive to both the choice of \(\tau_0\) and \(\theta\), the choice of \(\theta\) seemed to be more influential. The most consistent pattern that emerged is
that low values of $\theta$ sometimes lead to low support for species delimitations. Species remained relatively well supported with high prior values of $\tau_0$. When the model was run under the prior (i.e., without data), it did not yield strong support ($P > 0.95$) for any of the *a priori* species assignments, with the exception of the deepest divergence ($sp09$). This indicates that although the results were sensitive to the priors, the data contained informative signal.
Based on morphometric data alone, the divergence between $sp01 + sp02$ and $sp12$ in tree A was strongly supported ($P > 0.95$); however, the combination with low $\theta$ values reduced support at these nodes (Fig. 4). The analysis based on molecular data alone provided overall support for the *a priori* morphospecies assignments, with exceptions occurring at all nodes given low values of $\theta$, across all data sets. For example, $sp01$ and $sp02$ were strongly supported in analyses with higher values of $\theta$ ($P > 0.95$), while there was low support for this divergence in analyses with the lowest value of $\theta$ ($P < 0.32$). The delimitation between $sp02$ and $sp12$ (tree C) was the only split that consistently received low support under all $\theta$ prior values and with all data sets.
As expected, the results were also sensitive to the guide tree choice. For example, when $sp02$ and $sp12$ were specified as belonging to separate groups of species, they were always strongly supported with high posterior probabilities (tree A, B). However, when the guide topology was modified to accommodate the observed high morphological similarity between $sp02$ and $sp12$ (guide tree C), they were almost never
recovered as independent species (Fig. 4). Interestingly, none of the *a priori* defined species gained high support for all prior combinations across all guide trees, even using the integrative total evidence approach (Fig. 4d).
The unguided analyses (molecular data only) that applied nearest-neighbour interchanges (NNI) to the initial guide tree topologies largely confirmed the results of the guided (iBPP) analyses. While the initial guide tree and the choice of the $\tau_0$ prior did not alter the results, the choice of the $\theta$ prior had a strong influence on the posterior probabilities of the speciation splits. All *a priori* defined morphospecies were well supported under $\theta_1$ and $\theta_2$ (Table 1); however, under the narrow, small $\theta_3$ prior several morphospecies (in particular $sp01$ and $sp02$, but also $sp06$, $sp10$, and $sp12$) were lumped into one species (Additional file 4: Table S14 and Additional file 5: Table S15).
In the final set of analyses, in which $sp10$ was specified as two separate entities, corresponding to two distinct genotypes (Additional file 1: Figure S7), this split was not supported based on the analysis of the morphometric data alone, as expected. However, this split received strong support based on the analysis of both the molecular only and combined datasets (Additional file 1: Figure S7).
**Discussion**
**Congruence between single DNA markers and morphometric evidence**
Using a wide range of morphometric and phylogenetic tools, we tested for congruence between morphological, molecular, and integrative approaches (i.e., iterative *sensu* [19]) to species delimitation in the chafer beetle genus *Pleophylla*. Morphometric analysis (eigenshape analysis) of the left paramere of the male genitalia, as well as subsequent CVA and LDA, provided quantitative support for the majority of species assignments based on morphology. In contrast, model-based hierarchical clustering showed much less congruence with the morphospecies (Fig. 3e, f), indicating that this approach may not be suitable for delimitation at the species level.
Molecular-based species delimitation resulted in a wide range of support for morphospecies, based on the analysis of standard markers used among beetles (e.g., [82–85]), from zero (28S and *rrnL*) to moderate or high (*cox1* and ITS1). The ribosomal markers were insufficiently informative to support any of the putative morphospecies (Additional file 1: Figure S3-S5; Table S8), due to the remarkably low interspecific molecular variation observed across the entire genus. This is less surprising for the slowly evolving 28S rRNA marker, but *rrnL* has previously provided reasonable resolution at the species level among scarabs (e.g., [82]). The mitochondrial gene *cox1* and the nuclear region ITS1 were more informative, with the latter providing the best resolution. There was overall congruence between the morphospecies and the ITS1 MOTUs (GMYC, ABGD, PTP), despite the fact that ITS1 had fewer haplotypes than *cox1* (23 versus 53) and a lower relative substitution rate (Additional file 1: Table S5). A wide range of tree-building methods, parameters and tree linearization approaches did not improve the results of the GMYC model using *cox1*. In particular, there were three pairs of putative morphospecies that were difficult to distinguish on the basis of molecular data alone ($sp01$ vs. $sp02$; $sp06$ vs. $sp10$; $sp02$ vs. $sp12$). At one extreme, individuals belonging to a single morphospecies ($sp10$) were assigned to two MOTUs on the basis of two distinct ITS1 genotypes. The genotypes had a total of 31 segregating sites, including one 2-base deletion, two 4-base deletions, and one 2-base insertion, indicating that a single mutation is unlikely to be the cause of the molecular variation, although this pattern was not recovered by any other marker. 
At the other extreme, individuals belonging to two distinct morphospecies were assigned to a single MOTU and shared identical *coxI* and ITS1 haplotypes ($sp01$, $sp02$), which may be attributed to introgressive hybridisation or incomplete lineage sorting.
Distinguishing between secondary gene flow and incomplete lineage sorting is difficult because both processes produce similar phylogenetic patterns [66]. JML analyses [67] indicated that incomplete lineage sorting may be sufficient to explain the observed level of genetic variation across independent data partitions and species, with the exception of the fast-evolving *cox1* third codon position (Additional file 1: Table S5; $sp09$ and $sp11$); the monophyly of these species was otherwise well supported. The basic substitution model implemented in JML may not be sufficient to account for hidden substitutions at this position and may underestimate the genetic distance for this partition (Additional file 1: Table S5, S7). Overall, the JML results provide support for an incomplete lineage sorting scenario; however, this test cannot be treated as definitive evidence against secondary gene flow. The method implemented in JML can only detect hybridization events for sequences that have a coalescence time younger than the speciation event [66], and this approach can result in false negatives [86].
**Bayesian species delimitation using an integrative taxonomy framework**
In concordance with our results from the Bayesian species delimitation, Solís-Lemus et al. [14] have shown that the integration of morphological evidence together with molecular data may greatly enhance the discriminative power of species delimitation models. However, it has also been shown that errors and uncertainties in upstream analyses (e.g., guide tree inference, individual-species assignment) and prior parameter choice may impact the accuracy of results [81, 87, 88]. Here, we assessed the impact of a wide range of parameter combinations, including prior parameter and guide tree choice.
Leaché and Fujita [81] previously demonstrated the significant impact of using randomly generated guide tree topologies. Rannala [89] questioned the practicality of exhaustive guide tree manipulation, with respect to the increased computation time associated with popular
phylogenetic inference methods. In addition, a random set of guide trees will include some unreasonable or unlikely topologies, which can lead to inaccurate delimitations (e.g., over-splitting; [88]). Here, we limited our guide tree choice to three options, justified on the basis of independent molecular and morphometric evidence, in order to further evaluate incongruence between the two data sources (Additional file 1: Figure S2). The use of alternative guide trees had a large impact on the results. For example, the use of guide tree C (based on morphological similarity) allowed us to identify support for a putative species pair (*sp02*/*sp12*), which was otherwise not identified using alternative molecular-based approaches, including the unguided (NNI) approach in BPP (Additional file 4: Table S14 and Additional file 5: Table S15). In an additional set of experiments, we used a fourth guide tree topology based on the support for a putative case of cryptic diversity obtained using alternative single-marker delimitation approaches (*sp10*, Additional file 2: Table S9; Additional file 1: Figure S2, S7). This experiment, however, cannot provide definitive support for these species entities, because the units were inferred on the basis of non-independent evidence. Manual inspection of the alignments for *sp10* revealed two ITS1 genotypes with 43 segregating sites, represented by *sp10a* and *sp10b*. This is a very strong signal compared to a total of 44 segregating sites in both mitochondrial markers, which did not exhibit any diverging signal between *sp10a* and *sp10b*. Only a single site was polymorphic for *sp10b*, in 2 of the 4 *sp10b* specimens. 
However, these analyses serve to demonstrate that the results obtained using this model can be extremely sensitive to the signal present in single molecular markers, even in presence of data that provide strong evidence for morphological similarity (Additional file 1: Figure S7).
The use of alternative prior combinations for the population size ($\theta$) and root age ($\tau_0$) priors each had a large impact on the results. These analyses indicate that these parameters must either be chosen with extreme caution (using independent empirical evidence), or that multiple analyses should be performed to assess the robustness of species delimitations to these parameters, as done here. We found that phylogenetically younger species (*sp01*, *sp02*, *sp06*, *sp10*, *sp12*) and analyses that employed less data (e.g., single versus combined traits) were typically more sensitive to these choices. It has also been demonstrated that strong variation in mutation rate and population size among populations or species can decrease the accuracy of alternative coalescent-based delimitation models [90].
The inclusion of more individuals (and/or data) can lead to more accurate and precise parameter estimates [91], but increased taxon sampling is sometimes not possible due to the natural rarity of some species [73]. The development of better approaches to account for this uncertainty may be important, because in reality many biodiversity studies will be subject to limited taxon sampling. Further research using empirical and simulated data is required to fully assess the impact of guide tree, prior parameter choice, model violation and taxon sampling. Here, we demonstrate that the inclusion of morphological data can lead to more robust estimates of species delimitations. The results obtained using the combined dataset are less sensitive to prior parameter choice than the analyses based on the molecular or morphological datasets alone (Fig. 4; Additional file 3: Table S13, Additional file 4: Table S14 and Additional file 5: Table S15). Overall, nearly all morphospecies received strong support based on the analysis of the combined dataset (Fig. 4d). All sequence-based inference methods, including tree inference using concatenated data or coalescent-based approaches such as *BEAST* and BPP, may be impacted by incomplete lineage sorting or introgression. An integrative approach to taxonomy enables all available evidence to be utilized and may be particularly useful for delimiting very young species, which will always be difficult to distinguish on the basis of molecular or morphological data alone.
**Conclusions**
The earliest stages of speciation will be the point at which it will be the hardest to establish a boundary between population and species level divergence. However, such cases (and their solution) are the “holy grail” of taxonomy and provide an exemplar for investigating the intermediate stages of the “Darwinian continuum” from varieties to species [92] and inevitably create problems for the definition of species. Integrative or multiple strategies may be necessary in such cases where conflicts are most likely to exist [6, 19]. Together with previous studies [7, 14] we have confirmed that morphology can be a highly informative trait within an integrative approach, such as iBPP, to species delimitation.
Complex cases of species delimitation, such as those among \textit{Pleophylla} species, demonstrate the sensitivity of delimitation approaches to prior parameter choices and are thus useful for investigating the performance of new methodologies. We have highlighted the importance of examining the effect of prior choice on species delimitation results in BPP and iBPP, especially if highly informative prior distributions ($\alpha > 1$) are used. Previously, specifying a high $\theta$ and a low $\tau_0$ value was intended to constitute a conservative prior combination that should not lead to over-splitting [81]. However, we found that this combination actually led to higher support for more
splits, which was attributable to the strong influence of the $\theta$ parameter. For a conservative estimate of species delimitations, we recommend using a low value of $\theta$ to avoid species over-splitting.
The incongruence between trait- or gene-based species delimitations (Fig. 5, Additional file 1: Figure S8) may have multiple independent causes. First, sampling issues and the ability to capture statistically significant entities may be problematic, particularly for trait-based inference [7] (see also above). For example, trait-based clustering algorithms quickly lose power when too many or too poorly sampled species are included, or when variation is distributed over too many dimensions, resulting in more noise [14, 93]. These problems may also pose a challenge for combined approaches to species delimitation; however, their impacts have not been fully explored. Second, the incongruence among independent methods, employed for the analysis of different data types (molecules versus discrete or continuous morphological traits), may be attributed to the use of competing species concepts [94, 95]. Model-based clustering applied to morphological traits is simply based on the morphological species concept; tree-based species inference methods (e.g., GMYC, PTP) are based on the phylogenetic species concept [88, 96], which relies on the assumption of reciprocal monophyly across gene trees. The assumption of monophyly among independent markers may be problematic because it is known to be violated for closely related species. de Queiroz [21, 97, 98] redefined the criteria inherent to most species concepts, arguing that species represent independent metapopulation lineages through time. In the generalized lineage concept (GLC), the criteria used to demarcate species (e.g., morphological differences, monophyly or reproductive isolation) are instead treated as attributes that accumulate during the process of lineage diversification [98]. This concept has been broadly adopted by coalescent-based approaches to species delimitation [6, 7, 10, 20, 34, 99–103], which model the lineage diversification process using multiple markers to delimit species (e.g., [104]). 
Several studies have delimited species successfully using these approaches [5, 81, 94, 95, 105, 106].
BPP (and iBPP) treat species as hypotheses in a probabilistic framework, using objective tests to delineate independent evolutionary lineages (i.e., species), thereby satisfying numerous species concepts [95]. Caution should always be taken when interpreting the results of a single dataset [6, 7]; however, an integrative model-based approach to detecting species is likely to have more utility and could result in more robust species delimitations, especially when divergence varies across different phenotypic, genetic or ecological parameters [7].
Finally, based on the outcome of the integrative BPP analysis (Figs. 4 and 5), which was broadly congruent with the single-trait evidence, we conclude that in our *Pleophylla* data set *sp01*, *sp06*, *sp09*, *sp10*, *sp11*, and *spX2* are valid species, while *sp02* and *sp12* very likely belong to the same taxon. The results of alternative molecular delimitation methods provided support for potential cryptic species (*sp10*, Additional file 1: Figure S8). However, this signal comes from only one of the four markers (which we demonstrated can overwhelm the signal of other data in the BPP/iBPP analyses, Additional file 1: Figure S7) and is not corroborated by morphological or geographical evidence (the two MOTUs occur in the same location). Therefore, at this stage we do not consider these to be two separate species. (These conclusions will be further developed by formal taxonomic treatment, type material and taxonomic revision that will be presented in a separate upcoming study; Eberle J, Beckett M, Özguel-Siemund A, Frings J, Fabrizi
S, Ahrens D. Afromontane forests hide nineteen new species of ancient *Pleophylla* chafer (Coleoptera: Scarabaeidae): phylogeny and taxonomic revision, in preparation). Additional information about the structure of a population or species complex, based on much broader individual, geographical and DNA sequence sampling, would very likely have improved our case study. However, natural rarity (combined with the time constraints of most biodiversity studies) will always limit the number of available samples and may strongly bias the results [107].
Simulations have suggested that the number of loci required for robust Bayesian species delimitation may be large [10]. Here, we demonstrate that the signal from a single marker can influence the outcome of a fully integrative analysis, even when morphology is included. These results further underline the need to upgrade the globally successful barcoding initiatives to include a broader range of universal markers (e.g., [108]). Despite numerous disadvantages [109, 110], this approach would help to overcome some of the major challenges to accurate species delimitation [111]. Future directions in integrative taxonomy will need to further address these issues, including integrative study design and the interpretation of frequently incongruent results. In addition, the development of new tools for integrating disparate types of specimen-based data in taxonomic studies offers an exciting opportunity to free taxonomy from subjectivity.
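Assessing prior sensitivity, as we did with the perl script listed under Availability of supporting data, reduces to re-running (i)BPP over a grid of prior combinations. The hypothetical Python sketch below shows only the control-file templating step; the keywords follow BPP's control-file syntax, but the file names, the prior grid, and all parameter values are illustrative assumptions rather than the settings used in this study:

```python
# Hypothetical sketch, in the spirit of the BPPmulti perl script:
# generate one BPP control file per prior combination. Grid values,
# file names and other settings are illustrative assumptions only.
from itertools import product
from pathlib import Path

TEMPLATE = """seed = -1
seqfile = pleophylla.phy
Imapfile = pleophylla.imap
outfile = out_theta{ta}-{tb}_tau{ua}-{ub}.txt
speciesdelimitation = 1 1 2 1
thetaprior = {ta} {tb}
tauprior = {ua} {ub}
"""

theta_priors = [(1, 10), (2, 2000)]   # assumed (a, b) prior combinations
tau_priors = [(1, 10), (2, 2000)]

ctl_files = []
for (ta, tb), (ua, ub) in product(theta_priors, tau_priors):
    ctl = Path(f"bpp_theta{ta}-{tb}_tau{ua}-{ub}.ctl")
    ctl.write_text(TEMPLATE.format(ta=ta, tb=tb, ua=ua, ub=ub))
    ctl_files.append(ctl)
    # each file would then be passed to the bpp binary
    # (invocation syntax varies by BPP version), e.g.:
    # subprocess.run(["bpp", str(ctl)])

print(f"wrote {len(ctl_files)} control files")
```

Comparing the delimitations and posterior probabilities across the resulting runs then reveals which conclusions are robust to the choice of priors on the population size and divergence time parameters.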
**Availability of supporting data**
Voucher specimens have been deposited in the Zoological Research Museum A. Koenig (Bonn). All molecular sequences generated for this study were deposited in GenBank (Additional file 1: Table S1). Sequence alignments, program input files and phylogenetic trees were deposited on Zenodo (doi:10.1186/s12862-016-0659-3) [112]. The perl script used for running (i)BPP with multiple prior combinations, along with all input files, is available at https://github.com/eberlejonas/BPPmulti.git.
**Additional files**
- **Additional file 1**: Supplementary text, figures S1–S8, tables S1–S8, S10–S12. (PDF 1909 kb)
- **Additional file 2**: 3d morphospace. (GIF 6172 kb)
- **Additional file 3**: Supplementary table S9. (XLSX 64 kb)
- **Additional file 4**: Supplementary table S13. (XLS 185 kb)
- **Additional file 5**: Supplementary table S14. (XLS 40 kb)
- **Additional file 6**: Supplementary table S15. (XLS 38 kb)
**Competing interests**
The authors declare that they have no competing interests.
**Authors’ contributions**
RCMW: molecular lab work, sequence assembly and alignments, phylogenetic inference, DNA-based species delimitation; JE: clustering analyses, DNA-based species delimitation, integrative species delimitation; DA: conducted fieldwork collections, conceived the study. RCMW, JE, DA: morphometric analyses and drafted the manuscript. All authors read and approved the final manuscript.
**Acknowledgements**
We would like to thank Simon Joly for advice using the JML software, Norman MacLeod for advice with the eigenshape analysis, the referees for their helpful comments on an earlier draft of the manuscript, Silvia Fabrizi for assisting with the lab work, and Pia Addison and Cate Bazeler for help with the collection permit for the Cape Province. This project was supported by a studentship from the Natural Environment Research Council to R.C.M.W (NE/E522891/1 and NE/I528250/1), and grants from the German Science Foundation (DFG) to D.A. (DFG/AH175/1 and AH175/3). For providing D.A. with research and collection permits, we thank the various governmental institutions and departments in Eastern Cape (Permit No.: WRO 122/07WR and WRO123/07WR), Gauteng (Permit No. CPF6 1281), Limpopo (Permit No.: CPM-006-00001), Mpumalanga (Permit No.: MPM-2009-11-20-1232), Cape Province (Permit No.: AAA0007-00097-0056), and KwaZulu-Natal (Permit Nos.: OP3752/2009, 1272/2007, 3620/2006). This work was partially supported by the computational facilities of the Advanced Computing Research Centre, University of Bristol and of the Zoological Research Museum A. Koenig, Bonn.
**Author details**
1Zoologisches Forschungsmuseum Alexander Koenig Bonn, Centre of Taxonomy and Evolutionary Research, Adenauerallee 160, 53113 Bonn, Germany. 2Department of Entomology, Natural History Museum, London SW7 5BD, UK. 3Department of Life Sciences, Silwood Park Campus, Imperial College London, Ascot SL7 5PY, UK. 4School of Earth Sciences, University of Bristol, Bristol BS8 1RJ, UK.
**Received:** 18 December 2015
**Accepted:** 18 April 2016
**Published online:** 05 May 2016
**References**
1. Agapow PM, Bininda-Emonds OR, Crandall KA, Gittleman JL, Mace GM, Marshall JC, Purvis A. The impact of species concept on biodiversity studies. Q Rev Biol. 2004;79(2):161–79.
2. Daugherty CH, Cree A, Hay JM, Thompson MB. Neglected Taxonomy and Continuing Extinctions of Tuatara (*Sphenodon*). Nature. 1990;347(6289):177–9.
3. Isaac NJB, Mallet J, Mace GM. Taxonomic inflation: its influence on macroecology and conservation. Trends Ecol Evol. 2004;19(9):464–9.
4. Padial JM, De la Riva I. Taxonomic inflation and the stability of species lists: The perils of ostrich’s behavior. Syst Biol. 2006;55(5):859–67.
5. Carstens BC, Dewey TA. Species Delimitation Using a Combined Coalescent and Information-Theoretic Approach: An Example from North American Myotis Bats. Syst Biol. 2010;59(4):400–14.
6. Carstens BC, Pelletier TA, Reid NM, Satler JD. How to fail at species delimitation. Mol Ecol. 2013;22(17):4369–83.
7. Edwards DL, Knowles LL. Species detection and individual assignment in species delimitation: can integrative data increase efficacy? P Roy Soc B-Biol Sci. 2014;281(1777):20132765.
8. Ezard THG, Pearson PN, Purvis A. Algorithmic approaches to aid species’ delimitation in multidimensional morphospace. BMC Evol Biol. 2010;10.
9. Guillot G, Renaud S, Ledevin R, Michaux J, Claude J. A Unifying Model for the Analysis of Phenotypic, Genetic, and Geographic Data. Syst Biol. 2012;61(6):897–911.
10. Knowles LL, Carstens BC. Delimiting species without monophyletic gene trees. Syst Biol. 2007;56(6):887–95.
11. Leaché AD, Koo MS, Spencer CL, Papenfuss TJ, Fisher RN, McGuire JA. Quantifying ecological, morphological, and genetic variation to delimit species in the coast horned lizard species complex (*Phrynosoma*). Proc Natl Acad Sci U S A. 2009;106(30):12418–23.
12. Pons J, Barraclough TG, Gomez-Zurita J, Cardoso A, Duran DP, Hazell S, Kamoun S, Sumlin WD, Vogler AP. Sequence-based species delimitation for the DNA taxonomy of undescribed insects. Syst Biol. 2006;55(4):595–609.
13. Puorto G, Salomao MD, Theakston RDG, Thorpe RS, Warrell DA, Wuster W. Combining mitochondrial DNA sequences and morphological data to infer species boundaries: phylogeography of lanceheaded pitvipers in the Brazilian Atlantic forest, and the status of Bothrops pradoi (Squamata: Serpentes: Viperidae). J Evolution Biol. 2001;14(4):527–38.
14. Solis-Lemus C, Knowles LL, Ane C. Bayesian species delimitation combining multiple genes and traits in a unified framework. Evolution. 2015;69(2):492–507.
15. Wiens JJ, Penkrot TA. Delimiting species using DNA and morphological variation and discordant species limits in spiny lizards (Sceloporus). Syst Biol. 2002;51(1):69–91.
16. Sneath PHA, Sokal RR. Numerical Taxonomy. Nature. 1962;193(4818):855.
17. Blackwelder RE. A Critique of Numerical Taxonomy. Syst Zool. 1967;16(1):64.
18. Sterner B. Well-Structured Biology – Numerical Taxonomy’s Epistemic Vision for Systematics. In: Hamilton A, editor. Patterns of Nature. California: University of California Press; 2014. p. 213–44.
19. Yeates DK, Seago A, Nelson L, Cameron SL, Joseph L, Trueman JWH. Integrative taxonomy, or iterative taxonomy? Syst Entomol. 2011;36(2):209–17.
20. Yang ZH, Rannala B. Unguided Species Delimitation Using DNA Sequence Data from Multiple Loci. Mol Biol Evol. 2014;31(12):3125–35.
21. De Queiroz K. Species concepts and species delimitation. Syst Biol. 2007;56(6):879–86.
22. Degnan JH, Rosenberg NA. Gene tree discordance, phylogenetic inference and the multispecies coalescent. Trends Ecol Evol. 2009;24(6):332–40.
23. Hudson RR, Coyne JA. Mathematical consequences of the genealogical species concept. Evolution. 2002;56(8):1557–65.
24. Maddison WP. Gene trees in species trees. Syst Biol. 1997;46(3):523–36.
25. Shaffer HB, Thomson RC. Delimiting species in recent radiations. Syst Biol. 2007;56(6):896–906.
26. Slowinski JB, Knight A, Rooney AP. Inferring species trees from gene trees: A phylogenetic analysis of the Elapidae (Serpentes) based on the amino acid sequences of venom proteins. Mol Phylogenet Evol. 1997;8(3):349–62.
27. Bossu CM, Near TJ. Gene Trees Reveal Repeated Instances of Mitochondrial DNA Introgression in Orangethroat Darters (Percidae: Etheostoma). Syst Biol. 2009;58(1):114–29.
28. Keck BP, Near TJ. A young clade revealing an old pattern: diversity in Nothonotus darters (Teleostei: Percidae) endemic to the Cumberland River. Mol Ecol. 2010;19(22):5030–42.
29. Wu CA, Campbell DR. Cytoplasmic and nuclear markers reveal contrasting patterns of spatial genetic structure in a natural Ipomopsis hybrid zone. Mol Ecol. 2005;14(3):781–92.
30. Edwards SV, Liu L, Pearl DK. High-resolution species trees without concatenation. Proc Natl Acad Sci U S A. 2007;104(14):5936–41.
31. Heled J, Drummond AJ. Bayesian Inference of Species Trees from Multilocus Data. Mol Biol Evol. 2010;27(3):570–80.
32. Kubatko LS, Carstens BC, Knowles LL. STEM: species tree estimation using maximum likelihood for gene trees under coalescence. Bioinformatics. 2009;25(7):971–3.
33. O’Meara BC. New Heuristic Methods for Joint Species Delimitation and Species Tree Inference. Syst Biol. 2010;59(1):59–73.
34. Yang ZH, Rannala B. Bayesian species delimitation using multilocus sequence data. Proc Natl Acad Sci U S A. 2010;107(20):9264–9.
35. Simmons LW. Sexual selection and genital evolution. Austral Entomol. 2014;53(1):1–17.
36. Fraley C, Raftery AE, Murphy TB, Scrucca L. mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation. Technical Report No. 597. Seattle: Department of Statistics, University of Washington; 2012.
37. Fraley C, Raftery AE. Model-based clustering, discriminant analysis, and density estimation. J Am Stat Assoc. 2002;97(458):611–31.
38. Ahrens D. The phylogeny of Sericini and their position within the Scarabaeidae based on morphological characters (Coleoptera : Scarabaeidae). Syst Entomol. 2006;31(1):113–44.
39. Eberle J, Fabrizi S, Lago P, Ahrens D. A historical biogeography of megadiverse Sericini—another story “out of Africa”? Cladistics. 2016. doi:10.1111/cla.12162.
40. Eberhard WG. Sexual Selection and Animal Genitalia. Cambridge: Harvard University Press; 1985.
41. Ahrens D, Lago PK. Directional asymmetry reversal of male copulatory organs in chafer beetles (Coleoptera : Scarabaeidae); implications on left-right polarity determination in insect terminalia. J Zool Syst Evol Res. 2008;46(2):110–7.
42. Dalla Torre KW. Scarabaeidae: Melolonthinae I. Coleopterorum Catalogus 45. 1912.
43. Beckett M. The distribution patterns in Pleophylla species (Coleoptera: Scarabaeidae) – indicators of ancient forest distributions. Bonn: Rheinische Friedrich-Wilhelms-Universität Bonn; 2012.
44. Rohlf FJ. TPSDig 2.1. http://life.bio.sunysb.edu/morph/. Accessed Mar 2009.
45. MacLeod N. Generalizing and extending the eigenshape method of shape space visualization and analysis. Paleobiology. 1999;25(1):107–38.
46. MacLeod N, Rose KD. Inferring Locomotor Behavior in Paleogene Mammals Via Eigenshape Analysis. Am J Sci. 1993;293a:300–55.
47. Krieger JD. Measure LMs 4.0. Morpho-tools http://www.morpho-tools.net. Accessed Mar 2009.
48. Fraley C, Raftery AE. Bayesian regularization for normal mixture estimation and model-based clustering. J Classif. 2007;24(2):155–81.
49. Celeux G, Govaert G. Gaussian Parsimonious Clustering Models. Pattern Recogn. 1995;28(5):781–93.
50. McLachlan GJ, Basford KE. Mixture Models: Inference and Applications to Clustering. New York: Marcel Dekker; 1988.
51. Schwarz G. Estimating the Dimension of a Model. Ann Stat. 1978;6(2):461–4.
52. Venables WN, Ripley BD. Modern Applied Statistics with S. 4th ed. New York: Springer; 2002.
53. Revell LJ. phytools: an R package for phylogenetic comparative biology (and other things). Methods Ecol Evol. 2012;3(2):217–23.
54. Lanfear R, Calcott B, Ho SYW, Guindon S. PartitionFinder: Combined Selection of Partitioning Schemes and Substitution Models for Phylogenetic Analyses. Mol Biol Evol. 2012;29(6):1695–701.
55. Guindon S, Dufayard JF, Lefort V, Anisimova M, Hordijk W, Gascuel O. New Algorithms and Methods to Estimate Maximum-Likelihood Phylogenies: Assessing the Performance of PhyML 3.0. Syst Biol. 2010;59(3):307–21.
56. Stamatakis A. RAxML-VI-HPC: Maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006;22(21):2688–90.
57. Stamatakis A, Hoover P, Rougemont J. A Rapid Bootstrap Algorithm for the RAxML Web Servers. Syst Biol. 2008;57(5):758–71.
58. Huelsenbeck JP, Ronquist F. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics. 2001;17(8):754–5.
59. Marshall DC. Cryptic Failure of Partitioned Bayesian Phylogenetic Analyses: Lost in the Land of Long Trees. Syst Biol. 2010;59(1):108–17.
60. Rannala B, Zhu T, Yang ZH. Tail Paradox, Partial Identifiability, and Influential Priors in Bayesian Branch Length Inference. Mol Biol Evol. 2012;29(1):325–35.
61. Drummond AJ, Ho SYW, Phillips MJ, Rambaut A. Relaxed phylogenetics and dating with confidence. Plos Biol. 2006;4(5):e88.
62. Drummond AJ, Suchard MA, Xie D, Rambaut A. Bayesian Phylogenetics with BEAUti and the BEAST 1.7. Mol Biol Evol. 2012;29(8):1969–73.
63. Papadopoulou A, Anastasiou I, Vogler AP. Revisiting the Insect Mitochondrial Molecular Clock: The Mid-Aegean Trench Calibration. Mol Biol Evol. 2010;27(7):1659–72.
64. Papadopoulou A, Jones AG, Hammond PM, Vogler AP. DNA taxonomy and phylogeography of beetles of the Falkland Islands (Islas Malvinas). Mol Phylogenet Evol. 2009;53(3):935–47.
65. Bouckaert R, Heled J. DensiTree 2: Seeing Trees Through the Forest. bioRxiv. 2014. doi:10.1101/012401.
66. Joly S, McLenaghan PA, Lockhart PJ. A Statistical Approach for Distinguishing Hybridization and Incomplete Lineage Sorting. Am Nat. 2009;174(4):E54–70.
67. Joly S. JML: testing hybridization from species trees. Mol Ecol Resour. 2012;12(1):179–84.
68. Templeton AR, Crandall KA, Sing CF. A Cladistic-Analysis of Phenotypic Associations with Haplotypes Inferred from Restriction Endonuclease Mapping and DNA-Sequence Data.3. Cladogram Estimation. Genetics. 1992;132(2):619–33.
69. Puillandre N, Lambert A, Brouillet S, Achaz G. ABGD, Automatic Barcode Gap Discovery for primary species delimitation. Mol Ecol. 2012;21(8):1864–77.
70. Ezard THG, Fujisawa T, Barraclough TG. SPLITS: Species’ Limits by Threshold Statistics R package. 2009. http://barraclab.bio.ica.ac.uk. Accessed May 2012.
71. Fontaneto D, Herniou EA, Boschetti C, Caprioli M, Melone G, Ricci C, Barraclough TG. Independently evolving species in asexual bdelloid rotifers. Plos Biol. 2007;5(4):e87.
72. Zhang JJ, Kapli P, Pavlidis P, Stamatakis A. A general species delimitation method with applications to phylogenetic placements. Bioinformatics. 2013;29(22):2869–76.
73. Ahrens D, Fujisawa T, Krammer HJ, Eberle J, Fabrizi S, Vogler AP. Rarity and Incomplete Sampling in DNA-based Species Delimitation. Syst Biol. 2016. [Epub ahead of print].
74. Hart MW, Sunday J. Things fall apart: biological species form unconnected parsimony networks. Biol Letters. 2007;3(5):509–12.
75. Britton T, Anderson CL, Jacquet D, Lundqvist S, Bremer K. Estimating divergence times in large phylogenetic trees. Syst Biol. 2007;56(5):741–52.
76. Sanderson MJ. r8s: inferring absolute rates of molecular evolution and divergence times in the absence of a molecular clock. Bioinformatics. 2003;19(2):301–2.
77. Rambaut A, Charleston M. TreeEdit 1.0. http://tree.bio.ed.ac.uk/software/treeedit/. Accessed May 2012.
78. Powell JR. Accounting for uncertainty in species delineation during the analysis of environmental DNA sequence data. Methods Ecol Evol. 2012;3(1):1–11.
79. Vogler AP, Monaghan MT. Recent advances in DNA taxonomy. J Zool Syst Evol Res. 2007;45(1):1–10.
80. Tamura K, Stecher G, Peterson D, Filipski A, Kumar S. MEGA6: Molecular Evolutionary Genetics Analysis version 6.0. Mol Biol Evol. 2013;30(12):2725–9.
81. Leaché AD, Fujita MK. Bayesian species delimitation in West African forest geckos (Hemidactylus fasciatus). P Roy Soc B-Biol Sci. 2010;277(1697):3071–7.
82. Ahrens D, Monaghan MT, Vogler AP. DNA-based taxonomy for associating adults and larvae in multi-species assemblages of chafer (Coleoptera : Scarabaeidae). Mol Phylogenet Evol. 2007;44(1):436–49.
83. Ahrens D, Vogler AP. Towards the phylogeny of chafer (Sericini): Analysis of alignment-variable sequences and the evolution of segment numbers in the antennal club. Mol Phylogenet Evol. 2008;47(2):783–98.
84. Bocak L, Barton C, Crampton-Platt A, Chesters D, Ahrens D, Vogler AP. Building the Coleoptera tree-of-life for > 8000 species: composition of public DNA data and fit with Linnaean classification. Syst Entomol. 2014;39(1):97–110.
85. Hunt T, Bergsten J, Levkanicova Z, Papadopoulou A, John OS, Wild R, Hammond PM, Ahrens D, Balke M, Caterino MS et al. A comprehensive phylogeny of beetles reveals the evolutionary origins of a superradiation. Science. 2007;318(5858):1913–6.
86. Heled J, Bryant D, Drummond AJ. Simulating gene trees under the multispecies coalescent and time-dependent migration. BMC Evol Biol. 2013;13.
87. Olave M, Sola E, Knowles LL. Upstream Analyses Create Problems with DNA-Based Species Delimitation. Syst Biol. 2014;63(2):263–71.
88. Zhang C, Rannala B, Yang ZH. Bayesian Species Delimitation Can Be Robust to Guide-Tree Inference Errors. Syst Biol. 2014;63(6):993–1004.
89. Rannala B. Are molecular taxonomists lost upstream? In: Phylogeny etc. Meditations on Phylogenetic Inference. http://phylogenyetc.tumblr.com/post/78791524128/are-molecular-taxonomists-lost-upstream. Accessed Dec 2015.
90. Fujisawa T, Barraclough TG. Delimiting Species Using Single-Locus Data and the Generalized Mixed Yule Coalescent Approach: A Revised Method and Evaluation on Simulated Data Sets. Syst Biol. 2013;62(5):707–24.
91. Yang ZH. The BPP program for species tree estimation and species delimitation. Curr Zool. 2015;61(5):854–65.
92. Mallet J. Mayr’s view of Darwin: was Darwin wrong about speciation? Biol J Linn Soc. 2008;95(1):3–16.
93. Hausdorf B, Hennig C. Species Delimitation Using Dominant and Codominant Multilocus Markers. Syst Biol. 2010;59(5):491–503.
94. Carstens BC, Satler JD. The carnivorous plant described as Sarracenia alata contains two cryptic species. Biol J Linn Soc. 2013;109(4):737–46.
95. Fujita MK, Leaché AD, Burbrink FT, McGuire JA, Moritz C. Coalescent-based species delimitation in an integrative taxonomy. Trends Ecol Evol. 2012;27(9):480–8.
96. Sites JW, Marshall JC. Operational criteria for delimiting species. Annu Rev Ecol Evol S. 2004;35:199–227.
97. de Queiroz K. The general lineage concept of species, species criteria, and the process of speciation. In: Howard SJ BS, editor. Endless Forms: Species and Speciation. New York: Oxford University Press; 1998.
98. de Queiroz K. Ernst Mayr and the modern concept of species. Proc Natl Acad Sci U S A. 2005;102:6600–7.
99. Camargo A, Morando M, Avila LJ, Sites Jr JW. Species delimitation with ABC and other coalescent-based methods: a test of accuracy with simulations and an empirical example with lizards of the Liolaemus darwinii complex (Squamata: Liolaemidae). Evolution. 2012;66(9):2834–49.
100. Ence DD, Carstens BC. SpedeSTEM: a rapid and accurate method for species delimitation. Mol Ecol Resour. 2011;11(3):473–80.
101. Jones G. Species delimitation and phylogeny estimation under the multispecies coalescent. bioRxiv 2015. doi:10.1101/010199.
102. Jones G, Aydin Z, Oxelman B. DISSECT: an assignment-free Bayesian discovery method for species delimitation under the multispecies coalescent. Bioinformatics. 2015;31(7):991–8.
103. O’Meara BC, Ane C, Sanderson MJ, Wainwright PC. Testing for different rates of continuous trait evolution using likelihood. Evolution. 2006;60(5):922–33.
104. Edwards SV. Is a New and General Theory of Molecular Systematics Emerging? Evolution. 2009;63(1):1–19.
105. Kubatko LS, Gibbs HL, Bloomquist EW. Inferring Species-Level Phylogenies and Taxonomic Distinctiveness Using Multilocus Data in Sistrurus Rattlesnakes. Syst Biol. 2011;60(4):393–409.
106. Niemiller ML, Near TJ, Fitzpatrick BM. Delimiting Species Using Multilocus Data: Diagnosing Cryptic Diversity in the Southern Cavefish, Typhlichthys Subterraneus (Teleostei: Amblyopsidae). Evolution. 2012;66(3):846–66.
107. Lim GS, Balke M, Meier R. Determining Species Boundaries in a World Full of Rarity: Singletons, Species Delimitation Methods. Syst Biol. 2012;61(1):165–9.
108. Janzen DH, Hajibabaei M, Burns JM, Hallwachs W, Remigio E, Hebert PDN. Wedding biodiversity inventory of a large and complex Lepidoptera fauna with DNA barcoding. Philos T Roy Soc B. 2005;360(1462):1835–45.
109. Collins RA, Cruickshank RH. Known knowns, known unknowns, unknown unknowns and unknown knowns in DNA barcoding: a comment on Dowton et al. Syst Biol. 2014;63(6):1009–11.
110. Dowton M, Meiklejohn K, Cameron SL, Wallman J. A Preliminary Framework for DNA Barcoding, Incorporating the Multispecies Coalescent. Syst Biol. 2014;63(4):639–44.
111. Dupuis JR, Roe AD, Sperling FA. Multi-locus species delimitation in closely related animals and fungi: one marker is not enough. Mol Ecol. 2012;21(18):4422–36.
112. Eberle J, Warnock RCM, Ahrens D. Data from: Bayesian species delimitation in Pleophylla chafers (Coleoptera) – the importance of prior choice and morphology. Zenodo. 2016. doi:10.1186/s12862-016-0659-3.
Love is patient; love is kind; love is not envious or boastful or arrogant or rude. It does not insist on its own way; it is not irritable or resentful; it does not rejoice in wrongdoing, but rejoices in the truth. It bears all things, believes all things, hopes all things, endures all things... And now faith, hope, and love abide, these three; and the greatest of these is love.
1 Corinthians 13:4-7,13
The thirteenth chapter of First Corinthians is one of the most familiar passages in all of scripture. Even those who are not especially religious, and whose entire experience of church is that they happened to get married in one, have probably come across these words before. "If I speak in the tongues of mortals and of angels ... when I was a child, I spoke like a child ... for now we see in a mirror, dimly, but then we shall see face to face ... so faith, hope, and love abide, these three; and the greatest of these is love." For many of us, these verses have become like old, trusted friends. No matter how long it's been since we last visited, they express truths so timeless and profound that just sharing their company has a way of instructing and inspiring us.
In a sense, this chapter represents Paul's attempt to describe the governing rule for Christian life: love. And according to Jesus, the supreme requirement is that we love God with all our hearts, souls, and minds, and our neighbors as ourselves. Love is the one thing, above all else, that our Lord requires of us as disciples, and thus it should be the primary characteristic of our faith. If someone were to inquire, "What does it mean to be a Christian?" what better answer can we give than to say, "It means that we love one another." And if by chance there were the follow-up question, "What does it mean to love one another?" we can find no better example than the life of Jesus Christ. He did more than merely tell us to love our neighbors; he also taught us how. His intent, it seems, was to move us to the point where we finally love everybody we meet -- including those we just met and even those we wish we never had.
The type of love Paul speaks about is agape love. Agape does not want. It gives. Agape does not need. It serves. Agape is not an emptiness desperately trying to be filled. It is already overflowing. Agape is patient and accepting and willing to endure all things. Agape love never ends, while other types of love cease the moment the object of our attraction becomes unattractive. Agape love is love by choice, and not simply a feeling.
The Apostle Paul says that, of all the gifts God has given us, love is the greatest. It is the most powerful force that exists. But to some extent, it is also the most powerless, because it can do nothing except by our consent. We need to decide to love. It's not a feeling; it's a choice.
See you in church!
Pastor Robin
Sunday Evening Worship and Praise Services continue on Sundays at 5:00 p.m. Please help us spread the word! This is a 45-50 minute, “come as you are” service with singing, a short message, and communion.
**Women’s Activities**
*Women’s Dinner* - Our next women’s dinner will be at 6:00 p.m. on Monday, February 25th, at the Monte Carlo Italian Kitchen, 345 W. Olentangy St., Powell, OH 43065 (614) 389-3711. If you need transportation, please let Pastor Robin know.
*Women’s Bible Study* - The next Women’s Bible Study is on Monday, February 4th at 7:00 p.m. Susan Wolfe will facilitate the discussion of the book *More Than Enough: How Jesus Meets Our Deepest Needs* by Jeff Iorg. Please read Chapter 4 for the class.
**Meals on Wheels Schedule for February and March**
Friday, February 15 Biemel Friday, March 15 Sumner
Saturday, February 16 Engen Saturday, March 16 Robinson
If you would like a home visit by Pastor Robin, please let Pastor Robin or Tracy know so we can get one scheduled. It does not need to be a pastoral emergency… just an opportunity to get to know you better.
**Thrivent Members** – For those of you with choice dollars don’t forget that Fellowship can be a recipient of those dollars. Thank you to all who have donated this way in the past!
**Did You Know That Fellowship Has an Active Care Team?** Our care team meets monthly and sends out cards to those in our congregation, and their relatives and friends with concerns. We now have other members trained in Congregational Care and Hospital Visitation who will be assisting Pastor Robin with the pastoral care needs of our congregation, visiting our homebound members and offering communion.
**Gluten Free Communion Wafers are available.** If you have trouble with the regular communion wafers or with the bread served, please let Pastor Robin know. We do have gluten-free wafers available. Large print bulletins are also available for worship. Please ask the usher for assistance.
**Do You Order From Amazon?** If so, then try Amazon Smile to support Fellowship Lutheran Church! First, go to Amazon Smile and choose Fellowship Lutheran Church, Columbus, OH as your charity. Then, shop as usual and Fellowship will receive a percentage of your purchase. It’s that easy!
**Kroger Plus Fundraising Information:** If you would like to register your Kroger card for the Community Rewards Program, please contact Pastor Robin or the church office. Thank you so much for participating in this fundraiser! We received over $1,000 last year!
| Date | Attendance |
|--------|------------|
| 12/2 | 59/8 |
| 12/9 | 62/27 |
| 12/16 | 107/7 |
| 12/23 | 53/6 |
| 12/24 | 117 |
| 12/30 | 36 |
| 1/6 | 61/6 |
| 1/13 | 51/6 |
| 1/20 | Cancelled |
| 1/27 | 50/9 |
“For just as the body is one and has many members, and all the members of the body, though many, are one body, so it is with Christ. For in the one Spirit we were all baptized into one body—Jews or Greeks, slaves or free—and we were all made to drink of one Spirit.”
1 Corinthians 12:12-13
This passage goes on to talk about how the members of the church in Corinth all contribute to the well-being of the church. At Fellowship, we have many members with many different skills and backgrounds. That gives us a lot of assets to use in our work for God.
The Leadership team spent Saturday, Jan. 26 planning and visioning for the new term. Much of our focus is on making better use of our members’ talents and time. That is one of the key objectives of the organizational structure outlined in the constitutional update. We hope to empower our members to contribute to the congregation and the community in ways that fit their skills and passions. This approach comes from the book Power Surge, written by an ELCA Lutheran pastor in Burnsville, MN. Picking people for specific roles can easily pigeon-hole someone into a role that doesn’t really fit them, which can lead to burn-out and a lack of energy in that role. One of the comments that came from several people in our focus groups was a seeming lack of energy. This is one way we hope to address that challenge and turn it into a strength.
We also identified three key principles that we wish to guide us in this term:
• Reinvigorate our congregation
• Be willing to innovate and take some risks
• Empower and be empowered
These principles will help us narrow down our goals and objectives and keep us focused on what is most important for our mission. And we hope they generate excitement and commitment to help us meet our purpose of sharing God’s love and grace through our words and actions.
We are committed and optimistic that we can make some important changes and improvements that will lead to long-term growth in our membership, mission, and ministries. We encourage you to think about your talents and skills, what things you would like to participate in, and maybe even new things that no one else has thought of yet. You and I are all empowered to do things that move us forward in our ministry and mission.
**Congregational Meeting Extended to February 10**
We held our January congregational meeting on Sunday, January 27. Unfortunately, we did not have enough voting members in attendance to reach a quorum, so we were unable to vote on the proposed update to our budget. We adjourned the meeting and will reconvene on February 10 following our service to review and approve the budget. Please try to attend, as it is important for us to get this done. In the meantime, we will be operating under the updated budget that was presented on Jan. 27 and, hopefully, will be approved on February 10.
As always, comments and questions are welcome.
Your servant in Christ,
Todd Engen
1 Thessalonians 5:11 “Therefore encourage one another and build up one another, just as you also are doing.”
This is my first newsletter article and I am very excited to be the Director of Congregational Life. I am awestruck by the energy at our PPC Planning workshop, especially by Carol Jaeger, and look forward to helping strengthen our congregation.
As I am stepping into this new territory at Fellowship, I would like to create a couple of teams to help with two upcoming activities: **Mardi Gras Potluck** on March 2nd and **Easter Breakfast** on April 21st. If you are interested in helping out, please don’t hesitate to email me at email@example.com or chat with me at church.
A small but very impactful recurring event of fellowship is having a host provide a small snack after church. I will be refreshing our host list and would greatly encourage families to sign up. This is a great and easy way for each of us to help bring our Fellowship family together!
**Dates to Remember**
3/2/19 Mardi Gras Potluck at 6 PM - Bring your favorite “Fat Tuesday” recipe
4/21/19 Easter Breakfast
Speak Up
“Speak up for those who cannot speak for themselves, for the rights of all who are destitute. Speak up and judge fairly; defend the rights of the poor and needy.” Proverbs 31:8-9
Sunday School News
In the adult class we will continue our study of the New Testament, from Matthew to Revelation. We will discover how historical research can illuminate the New Testament through a series of twenty lectures by Bart D. Ehrman, Professor of Religious Studies at the University of North Carolina at Chapel Hill. Dr. Ehrman has crafted these lectures as a historical introduction to the 27 books of the New Testament, allowing us to come to understand their content, meaning, and historical accuracy. Each lecture lasts about twenty-five minutes, after which the class will discuss it. Dr. Ehrman is outstanding! Hope to see you there!
Food Pantry
LSS is still operating the food pantry at the Champion-Frebis location on Monday, Wednesday, and Thursday from 12-2 p.m. The Westside pantry has moved to the Glenwood Community Center at 1888 Fairmont Ave. It will operate on Tuesdays and Fridays from 12:00 - 2:00 p.m. This is a temporary move for the Westgate center which will be remodeled this winter.
They still need volunteers at both locations from 11:00 a.m. until 2:00 p.m., Monday through Friday. I changed my volunteer day to Wednesday because of the donation center. I’m willing to coordinate working at Frebis on Wednesdays. You can call Josiah, the LSS volunteer coordinator at 740-364-8298 or me at 614-764-1084. You may also call the main number 877-LSS-MEAL from 10:00 a.m. to 2:00 p.m.
Men’s Breakfast
Our next men’s breakfast will be on Saturday, February 9th at 8:30 a.m. All men are invited. Please come if you can. We’ll eat at 8:30 a.m. and you’ll be home by 9:45 a.m. Come and enjoy breakfast and fellowship.
Amazing Grace Day Camp
It is not too early to begin planning for the Amazing Grace Day Camp here. It will run from July 14-19. Last year we had twenty-four volunteers helping out with tasks from cooking to crafts. Please see Todd, Pastor Robin, or me for details.
In God’s service,
Bill
Outreach Team
Carol Jaeger
You are the light of the world. A city set on a hill cannot be hidden. Nor do men light a lamp, and put it under the peck-measure, but on the lampstand; and it gives light to all who are in the house. Let your light shine before men in such a way that they may see your good works, and glorify your Father who is in heaven.
Matthew 5:14-16
Fellowship’s organizational change establishes teams to guide our congregational ministry. The Outreach Team, with guidance from the pastor, is intended to develop and maintain a structure that will enable members to become aware of, and volunteer to serve in, programs that fulfill the service function of Christians within the congregation and within the community. Sharing and spreading the Good News of God’s love and grace in Christ through our words and actions is the responsibility of each member of the congregation.
Reorganizing our infrastructure is an opportunity to evaluate how we “shine our light” upon others, to share good works, and glorify our Father who is in heaven. It is a value-added exercise to re-confirm our volunteer energy, activity need, and continuing commitment. And, evaluating all that we are doing, and who is doing it, may help us avoid volunteer fatigue. With your continued assistance, most of the activities of our outreach ministry will continue, such as Souper Bowl, Smokey Row Food Drive, Meals on Wheels, Burgers and Basketball, and the Little Library. The time and talent that you contribute to these activities and the many additional events not mentioned, is to be celebrated!
FLC’s commitment to providing and serving meals every other month at First English Lutheran Church has been suspended pending further evaluation. David Bear, who has graciously prepared and transported excellent, nutritious meals to First English for many years, is no longer able to continue. He provided the service by combining his expertise and talent with the equipment available at his workplace. Also, under David’s direction, many FLC volunteers served the meals and baked desserts. I want to extend to David, and to all of our volunteer servers and bakers, a heartfelt “thank you” for their generous and caring commitment to this outreach project.
I look forward to working with PPC, and with all of you, to continue the development of a strong and sustaining outreach program.
In His Service,
Carol Jaeger
What's happening update from Stewardship & Support
I hope all is well for you and yours as we begin our journey into this year (2019); wow – time flies when you're having fun.
On the stewardship front, our goal is to create some car wash fundraisers and also have a few summertime BBQ cookouts after Sunday church. Mike has suggested the possibility of food truck Sundays, so we are going to think outside the box. We need to enjoy a few hours of fellowship and also help expose our church to potential new members. Along with some great ribs and BBQ sandwiches, we’ll of course have burgers and hot dogs. The dates for the BBQ Summer Fellowship Fun Fest will be determined later in the year.
Car wash fundraisers will be held the second Saturday of every month from April through August. They will be held at Nasty’s to start with, as we have a great venue there, and it will also give us some exposure to potential new members. Nasty’s will provide free meals for all youth and family participants who help (as it takes team support to have successful events). Our goal is to raise $2,000 to help support our church where it is needed.
One area we all have to think about is our famous Pumpkin Patch. Jim and his team did a great job last year, producing incredible results in the amount of funds raised for stewardship projects. Unfortunately, it was a small team, and if we go forward with this we need more participation from all members to help sell our pumpkins. We have until May to decide whether, as a congregation and family, we can execute this event. Unloading the truck is easier on those of us who are aging gracefully and can no longer lift those pumpkins (and I say this politely), because the Dublin Scioto and Worthington Kilbourne youth helped unload to fulfill some time toward their community service graduation requirements.
The team also discussed the possibility of a wine tasting (or make your own wine night out). We also talked about the possibility of a family & friends monthly bowling fellowship excursion for some things to do and continue our goal of building a new family base for our church as we need new members along with our current families.
One all-important aspect of the stewardship side: we have recently gone through a change, with some members going in a different direction, and that has meant change for those of us who remain. We have a great core of families that attend our church (Fellowship Lutheran). TEAM (Together Everyone Accomplishes More): let’s continue to rebuild and grow our church again. Invite one family a month to be our guest for worship and fellowship.
If you have any other thoughts and or ideas let any of the team members know.
Have Fun and Feel God’s Energy & Fellowship, at Fellowship Church, Our Church.
Thanks!
John
“Sing to the Lord a new song; sing to the Lord, all the earth. Sing to the Lord, praise his name; proclaim his salvation day after day. Declare his glory among the nations, his marvelous deeds among all peoples.” Psalm 96: 1-3
We are excited about entering a new year of Worship at Fellowship! I look forward to renewed engagement and energy in our worship at Fellowship.
**Sunday morning worship opportunities**
If you are not already assisting during worship on Sunday mornings, please consider volunteering to serve in roles such as Usher, Communion Assistant, Lay Reader or Assisting Minister. Just contact the church office (firstname.lastname@example.org) to let us know where you would like to serve.
**Preparing for Lent and Easter**
March brings Lent, leading up to Easter in mid-April. We welcome volunteers to contribute to our Lenten observances.
**Midweek Lenten soup suppers and services**
During the Lenten season, we will meet each Wednesday evening at 6:15 pm for a soup supper followed by a service at 7:00 pm. Please consider contributing to a Wednesday evening Lenten supper. These light suppers (soup and sandwiches or salad) are served just before our service. Two or three families can provide each meal. The Board of Worship will provide paper and plastic ware, as well as peanut butter, jelly and bread. Supper will start at 6:15, and the church will be open at 5:00 so you can begin preparations. Plan to serve approximately 20 - 25 people. Everyone is invited to attend these meals to share in what Lutherans like best… fellowship and food.
Please let us know of your plans to host a meal by signing up on the sign-up sheet in the narthex, starting mid-February.
**Maundy Thursday agape meal and service**
Our traditional Maundy Thursday Agape meal and service of Holy Communion will take place on Thursday, April 18. Supper begins at 6:15 pm and the service begins at 7:00 pm.
**Mark your calendars**
| Event | Date |
|------------------------------|------------|
| Healing service | March 3 |
| Ash Wednesday | March 6 |
| Midweek Lenten services and suppers | March 13, 20, 27 |
| | April 3, 10 |
| Palm Sunday | April 14 |
| Agape meal and service | April 18 |
| Good Friday | April 19 |
| Easter | April 21 |
Amazing Grace Day Camp
"Go therefore and make disciples of all nations, baptizing them in the name of the Father and of the Son and of the Holy Spirit, and teaching them to obey everything that I have commanded you. And remember, I am with you always, to the end of the age." Matthew 28: 19,20 (NSRV)
Fellowship has signed up to host Lutheran Outdoor Ministries of Ohio's (LOMO) Amazing Grace Day Camp again this summer, July 15-19. We have hosted this day camp for the last 2 years. It started when the Southern Ohio Synod (SOS) asked us if we would be interested when another church bowed out.
Amazing Grace Day Camp is a unique blend of outdoor ministry and congregational evangelism and outreach. Amazing Grace was developed by LOMO and is a partnership of the ELCA, the Southern Ohio Synod, and LOMO. The camp is staffed with LOMO counselors.
The Day Camp was started with a couple of ideas in mind. First, fewer people are interested in overnight camps, especially church-oriented ones. Second, SOS and all Lutheran organizations are looking for ways to broaden their appeal to non-churched people and people outside of our traditional demographics. SOS thought that the area around Fellowship was a good match for that strategy. Our area has many apartments nearby and growing numbers of people who have lower incomes and are often minorities, and summer daycare is a challenge for many of them.
The goals and principles of the camps are:
- To provide an opportunity to invite new people to experience the Gospel
- To value relationships
- To provide a camping experience for youth who may not be able to attend a resident camp
- To serve as a tool for outreach to the unchurched
We have had good success hosting this camp the last 2 years, largely due to the tremendous efforts of our congregation's volunteers. We had more than 15 of our members involved last year in making the camp a success. We are still looking for volunteers for this year. When you volunteer, you get to be a part of the fun, exciting week with children and the wonderful counselors that LOMO provides. Please contact me, Pastor Robin, or Bill Lude if you are interested or able to help with meals, crafts, registration, or hosting one of the counselors. I promise you will enjoy it as much as the kids do.
If you know someone with elementary aged kids, this is a great thing to invite them to.
For more information, see https://www.lomocamps.org/outreachdaycamps
Todd Engen – Mission Interpreter
Our Prayers and Support for . . .
FOR OUR MEMBERS:
Maxine Brunner
Sam Brunner
Major N Crispin
Bob Drake
Barb Fisher
Curtis Fleisher
George Haggard
Leena LeMay
Sonja Meighen
Joshua Mott
Sue Perry
Don Poland
Martha Sampson
Sonya Thelin
Shawn Turnbull
FOR OUR FAMILY AND FRIENDS:
Joan Albert, Ruth Albert’s mother...
Krista Alexander, Pat Frey’s aunt...
Jeff Hershberger, Sonya Thelin’s nephew...
Lisa and Dave Hershberger, Sonya Thelin’s sister and brother-in-law...
Steve Bear, Fritzi Bear’s son, Dave’s brother...
Toni Carroll, Pastor Robin’s friend’s wife...
Rosemary Craig, Cindy Frey’s mother...
George Lude, Bill Lude’s brother...
Cindy Morgan, Margery Gress’ daughter’s mother-in-law
Rebecca Schultz, Andrea Sumner’s mother...
Orun Shockley, Karen Harris’ friend’s son...
Kim Short, Bill Lude’s niece...
George Sumner, Mike Sumner’s father...
Shirley Tipton, Michelle Cleveland’s grandma...
Baby Hayden Valesky, Votino’s great nephew...
Catherine Vonderhae, Becky Schaughency’s friend...
FOR OUR SHUT-INS:
Lynn Brown, Debbie Bear’s mother
DEATHS:
Ron Ortman, Becky Schaughency’s friend...
Cindy Smith, Darcy Dom’s friend from Cambridge...
February Birthdays
Haley Defibaugh 2/08
Zach Jaeger 2/09
Eliana Anderson 2/10
Erling Lee 2/11
Caitlyn Votino 2/11
Clayton Hobbins 2/13
Bob Drake 2/15
Kathleen Wolkan 2/18
Brittany Hairston 2/21
Lillian Cardimen 2/22
James Votino 2/22
Paul Wargowsky 2/22
Leena LeMay 2/28
Happy Birthday to those celebrating birthdays this month! This list reflects the information we have in our computer at this time. We welcome your corrections and additions!
February Anniversaries
## Church Staff & Leadership
### Church Staff
- **Pastor**: Rev. Robin Wargowsky
- **Treasurer**: Heather & Chris Tonn
- **Financial Secretary**: Tracy Farel
- **Office Secretary**: Tracy Farel
- **Organist**: Andrea Sumner
- **Sub. Organist**: David Fleisher
- **Sunday evening pianist**: Amy Wargowsky
- **Choir Director**: George Biemel
- **Custodian**: Kenneth Hunt
### Executive Committee
- **President**: Todd Engen
- **Vice-President**: Mike Rankin
- **Secretary**: Ruth Albert, Pastor Robin
### Congregational Life
- **Director**: Chad Laucher
### Education & Youth
- **Director**: Bill Lude
### Outreach
- **Director**: Carol Jaeger
### Stewardship & Support
- **Director**: John Votino
### Worship
- **Director**: Sonya Thelin
---
**Thank you for your leadership!!!**
| Date | 3-Feb | 10-Feb |
|--------|-------|--------|
| Time | 10:00 AM | 10:00 AM |
| Assist. Minister | Hayden Laucher | Chad Laucher |
| Acolyte | McKenzie Laucher | Gwen Cardimen |
| Comm. Assist. | Sonya Thelin | Carolyn Crispin |
| Lay Reader | Kathy Farel | Jeanine Biemel |
| Ushers | Bill Lude & Major Crispin | Abigail Cardimen & Jillian Mott |
| Altar Guild | Carol Scantland & Dianna Roll | Carol Scantland & Dianna Roll |
| Hymn Sel. | Cindy Frey | Cindy Frey |
| Counters | Jim Jaeger & Pete Cardimen | Bill Farel & Mike Sumner |
| Greeters | Karen Harris | Crispin |
| Host | | |
| Date | 17-Feb | 24-Feb |
|--------|--------|--------|
| Time | 10:00 AM | 10:00 AM |
| Assist. Minister | Todd Engen | Ruth Albert |
| Acolyte | Hayden Laucher | Aleigha Crispin |
| Comm. Assist. | Andy De'Vantier | Sam Sano |
| Lay Reader | Donna Williamson | Sonya Thelin |
| Ushers | Chad Laucher & Tom Mott | Bill Farel & Jim Jaeger |
| Altar Guild | Carol Scantland & Dianna Roll | Carol Scantland & Dianna Roll |
| Hymn Sel. | Cindy Frey | Cindy Frey |
| Counters | Andy De'Vantier & Lori Hobbins | Jim Jaeger & Pete Cardimen |
| Greeters | Laucher | Votino |
| Host | | |
February 2019 Calendar
Weekly Recurring Events
Sunday
- 8:45 a.m. - Sunday School
- 10:00 a.m. - Worship Service
- 5:00 p.m. - Worship and Praise
Monday
- 8:00 p.m. - Narcotics Anonymous
Tuesday
Wednesday
- 7:00 p.m. - Choir (except February 27)
Thursday
Friday
- 8:00 p.m. - Narcotics Anonymous
Other Events & Meetings
February 3, 2019 (Sunday)
- 10:00 a.m. - Souper Bowl Sunday
- 11:15 a.m. - Executive Committee
February 4, 2019 (Monday)
- 7:00 p.m. - Women’s Book Study
February 9, 2019 (Saturday)
- 8:30 a.m. - Men’s Breakfast
February 10, 2019 (Sunday)
- 11:15 a.m. - Congregation Meeting
February 12, 2019 (Tuesday)
- 7:00 p.m. - Ministry Teams
February 15, 2019 (Friday)
- 6:30 p.m. - BAPS
February 19, 2019 (Tuesday)
- 7:00 p.m. - Council (PPC)
February 25, 2019 (Monday)
- 6:00 p.m. - Women’s Dinner
February 28, 2019 (Thursday)
- 10:30 a.m. - Care Team & Lunch
| Sun | Mon | Tue | Wed | Thu | Fri | Sat |
|-----|-----|-----|-----|-----|-----|-----|
| **Jan 27** 8:45 am Sun. School; 10 am Worship; 5 pm Worship | **28** 8 pm Narc. Anon. | **29** | **30** 7 pm Choir | **31** | **Feb 1** 8 pm Narc. Anon. | **2** |
| **3** 8:45 am Sun. School; 10 am Worship (Souper Bowl Sunday); 11:15 am Executive Committee; 5 pm Worship | **4** 7 pm Women’s Book Study; 8 pm Narc. Anon. | **5** | **6** 7 pm Choir | **7** | **8** 8 pm Narc. Anon. | **9** 8:30 am Men’s Breakfast |
| **10** 8:45 am Sun. School; 10 am Worship; 11:15 am Congregation Meeting; 5 pm Worship | **11** 8 pm Narc. Anon. | **12** 7 pm Ministry Teams | **13** 7 pm Choir | **14** Valentine’s Day | **15** 10:45 am Meals on Wheels; 6:30 pm BAPS; 8 pm Narc. Anon. | **16** |
| **17** 8:45 am Sun. School; 10 am Worship; 5 pm Worship | **18** 8 pm Narc. Anon. | **19** 7 pm Council (PPC) | **20** 7 pm Choir | **21** | **22** 8 pm Narc. Anon. | **23** |
| **24** 8:45 am Sun. School; 10 am Worship; 5 pm Worship | **25** 6 pm Women’s Dinner; 8 pm Narc. Anon. | **26** | **27** NO Choir | **28** 10:30 am Care Team & Lunch | **Mar 1** 8 pm Narc. Anon. | **2** 6 pm Mardi Gras Potluck |
Our Purpose: To share God’s love and grace through our words and actions.
Our Vision: To be a “called and gathered” family of faith whose members so fully experience the undeserved love of God that we are moved to express that same love of God, in our relationships with one another and our whole community . . .
February 2019 Newsletter
Improving the Assessment of Noise Exposure and Warning Signal Audibility on Construction Sites
Nikolina Samardzic
Aslihan Karatas
Behzad Esmaeili
Christian Hammond
Lawrence Technological University
February 2023
©2023, CPWR-The Center for Construction Research and Training. All rights reserved. CPWR is the research and training arm of NABTU. Production of this document was supported by cooperative agreement OH 009762 from the National Institute for Occupational Safety and Health (NIOSH). The contents are solely the responsibility of the authors and do not necessarily represent the official views of NIOSH.
Abstract
Fifty-one percent of construction workers are exposed to hazardous noise that can cause permanent noise-induced hearing loss (NIHL), and 52% do not wear a hearing protection device (HPD). Hearing loss could be reduced more effectively with more accurate measurements of noise exposure. The common occupational noise exposure measurement devices are single-channel noise dosimeters worn on a worker’s shoulder, which are not advanced enough to capture and analyze complex sounds that pose threats to workers’ hearing. This research proposes a more accurate noise exposure and audibility assessment using binaural (two-ear) measurements, paving the way for more effective noise characterization and hearing loss prevention in loud workplaces, as well as helping workers identify the source of warning signals. Differences between monaural and binaural measures can significantly affect interpretations of hearing loss risk (dose) calculations. This project’s results showed clinically significant differences (>3 dB) between the auditory risk assessment using current state-of-the-art measurements and the proposed binaural measurements. A preliminary acoustic perception study was conducted in a laboratory setting using human participants and binaural measurements of the warning signal and noise of the construction equipment. It was found that participants could not localize the audible warning signals. The findings from this project are crucial for highlighting the importance of binaural measurements and, more generally, of accurate assessment of noise exposure and auditory warning signal perception of workers on a construction site.
Key Findings
- By testing binaural (two-ear) measurements, the study exposed drawbacks of the standard monaural method of noise exposure measurement and analysis based on single-channel dosimeters and sound level meters.
- The study quantified sound impulsiveness/hazardousness with a binaural loudness metric that better reflects human hearing, a measurement that is not possible with traditional sound pressure level (SPL) metrics. (Results, Part I)
- There were significant differences in traditional monaural and binaural assessment of noise exposure on a construction site, with the binaural measurements always higher than those from the single-channel dosimeter measurements. (Results, Part II A)
- Binaural assessment, which better reflects noise exposure on a construction site, allowed for identification of a more noise-exposed ear and quantification of asymmetry of noise exposure (i.e., as a percentage of the total number of measurements with higher SPL and/or loudness, from each ear, or a number of impulsive events and their acoustic characteristics for each ear). (Results, Part II B)
- A preliminary acoustic perception study, conducted using recorded warning signals from operating construction equipment (played over headphones in an acoustic perception lab with human participants), found that participants could not localize the audible warning signals. (Results, Part III)
- The study created a database of approximately 1,500 noise events commonly encountered on a construction site from the daily noise exposure recordings, which is available upon request from the PIs for future noise exposure metric development and construction noise assessment research.
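The asymmetry statistic mentioned above, the share of paired binaural measurements in which each ear recorded the higher SPL, is straightforward to compute once per-ear levels are available. The sketch below is illustrative only: the function name and sample values are hypothetical and not taken from the project’s dataset.

```python
# Illustrative sketch: quantify left/right noise-exposure asymmetry as the
# percentage of paired binaural measurements in which each ear had the
# higher SPL. Values are hypothetical, not from the study's database.

def asymmetry_percentages(left_spl, right_spl):
    """Return (% of pairs where the left ear is louder,
               % where the right ear is louder,
               % tied), given paired per-ear SPL samples in dB."""
    pairs = list(zip(left_spl, right_spl))
    n = len(pairs)
    left_higher = sum(1 for l, r in pairs if l > r)
    right_higher = sum(1 for l, r in pairs if r > l)
    tied = n - left_higher - right_higher
    return (100 * left_higher / n, 100 * right_higher / n, 100 * tied / n)

# Hypothetical 1-second SPL samples (dB) for each ear during a noisy task.
left = [92.1, 95.4, 88.0, 101.2, 97.5]
right = [90.0, 96.1, 86.5, 99.8, 97.5]

left_pct, right_pct, tied_pct = asymmetry_percentages(left, right)
print(f"left ear louder: {left_pct:.0f}%, right ear louder: {right_pct:.0f}%")
```

A persistent imbalance in these percentages over a work shift would flag the more noise-exposed ear for follow-up.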
# Table of Contents
- Abstract
- Key Findings
- Introduction
- Importance of Binaural Measurements and Accurate Acoustic Assessment
  - Time-varying characteristics
  - Asymmetric noise exposure
  - Sound (warning signal) localization
- Objectives
- Methods
  - Measurement Methods
  - Analysis Methods
  - Procedure for Analysis: SPL and Loudness Calculations (Parts I and II)
  - Procedure for Localization Evaluation (Part III)
- Accomplishments, Including Relevance and Practical Application
- Results
- Changes/Problems that Resulted in Deviation from the Methods
- Future Funding Plans
- List of Presentations/Publications, Completed and/or Planned
- Dissemination Plan
- References
- APPENDIX
Introduction
NIOSH and the National Hearing Conservation Association identified construction workers as an “underserved” population, with 51% of construction workers exposed to hazardous noise (Kerns et al., 2018), and 25% of noise-exposed construction workers developing a hearing loss that negatively impacts their day-to-day activities and communication (Masterson et al., 2015). Hearing loss is correlated with higher risk of depression, social isolation, cognitive decline, and poor educational outcomes (Lawrence et al., 2020). Accordingly, it is crucial to accurately measure and analyze the noise exposure at construction sites.
Several previous studies have revealed common types of noise monitoring on construction sites (Seixas et al., 2012; Fernández et al., 2009; Suter, 2002; Seixas et al., 2001; Mahdi et al., 2020; Fedorko et al., 2019; Hong et al., 2015). Another set of studies analyzed construction workers’ onsite noise exposure using personal noise dosimeters and sound level meters (traditional noise exposure measurement devices) through different methodologies:
(i) Task-Based Measurement (TBM), a method in which construction tasks are measured (e.g. excavation, demolition, construction framing, earthwork) (Reeb-Whitaker et al., 2004);
(ii) Job-Based Measurement (JBM), a method in which complicated work patterns are split into task categories (Legris and Poulin, 1998; Li et al., 2016; Fernández et al., 2009); and
(iii) Full-shift Measurement (FSM), a method in which continuous sound pressure level is measured over a working day (Arezes et al., 2012; Dabirian et al., 2020).
These studies documented the daily construction worker noise exposure, or a particular hazardously noisy task as is recommended by OSHA (29 CFR 1910.95).
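For context on the OSHA reference above, the daily noise dose in 29 CFR 1910.95 combines the time spent at each sound level with the time permitted at that level (8 hours at 90 dBA, halved for every 5 dB increase). A minimal sketch of that calculation, using hypothetical exposure values:

```python
# Daily noise dose per OSHA 29 CFR 1910.95: 90 dBA criterion level,
# 5 dB exchange rate, 8-hour reference duration.

def permissible_hours(level_dba):
    """Allowed exposure time T (hours) at a constant A-weighted level."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def daily_dose(exposures):
    """exposures: list of (hours_at_level, level_dba) pairs.
    Returns the dose in percent; 100% is the permissible exposure limit."""
    return 100.0 * sum(hours / permissible_hours(level)
                       for hours, level in exposures)

# Example: 4 h at 90 dBA plus 2 h at 95 dBA.
# T(90) = 8 h and T(95) = 4 h, so the dose is 100 * (4/8 + 2/4) = 100%.
print(f"dose = {daily_dose([(4, 90), (2, 95)]):.0f}%")
```

OSHA’s hearing conservation action level (an 85 dBA 8-hour TWA) corresponds to a 50% dose under this formula. Note that this SPL-based dose is exactly the quantity the binaural analysis in this report argues is incomplete.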
Hearing loss can be reduced with accurate assessment of noise exposure. The use of the traditional devices (dosimeters and sound level meters) in the aforementioned methodologies relies on single-channel measurements. Such measurements cannot be used to assess the realistic human binaural perception of sounds, especially for asymmetrical sound exposure and localization of sounds, simply because they do not accommodate situations where sound signals differ between the two ears. Although traditional noise assessment methods make a tremendous contribution in controlling and mitigating the occupational noise risks, they fail to provide comprehensive assessment of the soundscape on a construction site. This shortcoming may potentially increase the risks of excessive noise exposure and insufficient audibility of warning signals at sites. Accurate characterization of the hazardousness of noise as well as localization of important sounds are directly linked to the binaural (two-ear) nature of human hearing. To address the aforementioned research gap, this study used binaural (two-ear) measurements with microphones at the ears’ locations for accurate evaluation of the acoustic signal perception by workers on a construction site. No published studies are available on the binaural assessment of the acoustic environment on a construction site.
The most recent studies of in-ear noise dosimetry use miniature microphones, placed deep inside the ear canal, to monitor daily noise exposure levels (Rabinowitz et al., 2013; Bonnet et al., 2020; Hugues et al., 2018). While these measurements are interesting, they are used primarily in experimental research rather than everyday worker monitoring. Additionally, there is currently no standard method to combine data from the two ears (obtained using this measurement method) that
would form a single metric to indicate the worker’s risk of noise-induced hearing loss (NIHL). Therefore, this project conducted data analysis of the binaural measurements based on the most recently developed loudness calculation algorithm (ISO 532-2, 2017), which can accommodate situations where the signals differ at the two ears, as is typically the case in real listening situations such as a construction site. Ultimately, and conveniently, the ISO 532-2 loudness algorithm can provide an estimate of binaural loudness as a single value. As such, the binaural loudness metric is a potential alternative for improving the accuracy of noise exposure assessment, as proposed in this study. Loudness-based assessment of noise exposure from binaural measurements can potentially reveal significant differences in the hazardousness of sounds. An impulsive sound that is perceived as louder, and is known to be more hazardous, and a steady-state sound that is perceived as less loud may both have the same measured sound pressure level (SPL) and the same calculated SPL-based noise dose. The impulsiveness of noise signals, and potentially the auditory risk due to impulsiveness, can be quantified by the binaural loudness calculation. This was investigated in the analysis of the binaural noise data collected from the construction sites. Therefore, the proposed noise exposure quantification method was based on advanced (binaural) measurement of complex physical acoustic stimuli and a standardized binaural psychoacoustic (loudness) metric, in contrast to the traditional method based solely on SPL, which may not capture differences in hearing loss risk.
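The same-SPL ambiguity described in this paragraph can be shown with a toy numerical example. The sketch below is not an ISO 532-2 loudness calculation (that requires a full psychoacoustic model); it only demonstrates that two synthetic signals with identical RMS pressure, and therefore identical SPL, can differ enormously in impulsiveness as captured by a simple crest-factor measure.

```python
import math

# Two synthetic pressure waveforms with identical RMS (hence identical SPL)
# but very different impulsiveness. Illustrative toy example only; ISO 532-2
# binaural loudness requires a full psychoacoustic model.

P_REF = 20e-6  # reference pressure, 20 micropascals

def spl_db(samples):
    """RMS-based sound pressure level in dB re 20 uPa."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / P_REF)

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a simple impulsiveness indicator."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(abs(s) for s in samples) / rms)

n = 1000
steady = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]  # steady, RMS = 1 Pa
impulse = [0.0] * n
impulse[0] = math.sqrt(n)                                 # one spike, RMS = 1 Pa

print(f"steady : SPL {spl_db(steady):.1f} dB, crest {crest_factor_db(steady):.1f} dB")
print(f"impulse: SPL {spl_db(impulse):.1f} dB, crest {crest_factor_db(impulse):.1f} dB")
```

Both signals report the same SPL (about 94 dB), so an SPL-based dose treats them identically, while the impulsive signal has a crest factor roughly 30 dB higher.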
The results of the study directly support NIOSH’s National Occupational Research Agenda (NORA) Construction Strategic Goal #6—to reduce occupational hearing loss in construction through a multifaceted research and outreach effort. Furthermore, this research aligns with CPWR’s special emphasis area to reach high-risk sectors such as small employers, vulnerable workers, and residential and light commercial construction. The study participants were from this industry sector and the results provide awareness of the importance of hearing conservation, signal audibility, and safety in the high-risk sector workplaces.
The proposed findings of this research have several practical applications for the workers and their employers. First, the proposed technique can provide a much more reliable approach for measuring the sound exposure, which can lead to a more efficient safety strategy for mitigating risk by reducing exposure time or using more effective personal protective equipment (PPE). Second, using the proposed approach, a database of sound exposure for different construction activities can be developed to enable pairwise comparison and reassigning at-risk workers. Finally, companies that are developing construction equipment and tools can benefit from the results of the study by obtaining a more accurate estimate of sound generated from their products.
**Importance of Binaural Measurements and Accurate Acoustic Assessment**
Accurate acoustic measures are important for two reasons: hearing loss and safety. This section will highlight the importance of binaural measurements and analysis, specifically from the context of accurate assessment of noise exposure and auditory warning signal perception of the workers on a construction site.
Existing acoustic evaluation methods used by OSHA (i.e., dosimeters and personal sound level meters) do not accommodate situations where sound signals differ across the two ears. Speech communication in a noisy environment depends strongly on binaural processing (Genuit, 2004).
This processing supports directional hearing and binaural release from masking (Bronkhorst and Plomp, 1988; Moore, 2012). In addition, the outer ear acts as a directional filter that can change the sound pressure level at the eardrum by +15 to −30 dB, depending on the frequency and direction of sound incidence (Blauert, 1997), and this can introduce strong SPL differences between the ears.
A sound field is distorted by the presence of a listener through sound reflection, absorption, and diffraction. The perceived sound is also a function of the shape of the human body, particularly the upper body (torso, shoulders, and head). Dosimeters and sound level meters do not account for the effects of a human listener on the sound field: the microphone of a dosimeter is typically located on the shoulder, and sound level meters are typically placed at a convenient location on the construction site, as close as possible to the sound receptors (the construction workers). Binaural measurements, in contrast, are essential for accurately characterizing the audibility and localization of warning signals and speech intelligibility on a site. This matters because warnings of construction-site hazards depend on audible, localizable signals and intelligible speech. The audibility of sounds, especially alarm/auditory warning signals and voice/speech signals containing critical safety, emergency, or work-related information, is a critical factor in the decision to wear hearing protection devices (HPDs) on site. Therefore, this study focused on binaural measurements to accurately evaluate worker noise exposure and warning signal audibility on a construction site. This was performed with a binaural headset and a portable data acquisition system, allowing for mobile measurements that accurately capture sound as perceived by the construction workers throughout the day.
Improved measurement accuracy of binaural recordings offers a playback capability with a unique impression of standing in the original sound field. As such, binaural recordings are also suitable for sound simulations in a laboratory setting, hearing protection modelling and assessment, and explorative studies on the noise exposure assessment on a construction site, with listening tests and human test subjects.
The following three major characteristics of acoustic soundscape on the construction site were captured by this project’s binaural measurements:
**Time-varying characteristics**
Time-varying characteristics of sound affect the perception of impulsive and broadband noise, both commonly encountered on a construction site, and influence the aforementioned parameters (audibility, localization, and hearing loss). For example, impulsive and broadband noise have different loudness characteristics (expressed in sones) even when characterized by the same SPL (expressed in dB). Traditional dosimeters and sound level meters are unable to accurately capture the effects of impulsive noise on a construction site. To fully evaluate the effect of impulsive noise on the auditory system, specific parameters should be considered, such as peak pressure, durations, rise time, energy, spectral content, number and mixture of impulses, and time between impulses (Kardous et al, 2005). Current dosimeters provide none of these parameters other than peak pressure, and even the range and accuracy of their peak pressure measurements are inadequate because of the dosimeter's low sampling frequency. In contrast, adequately sampled binaural measurements capture all of this information.
**Asymmetric noise exposure**
Asymmetric noise exposure is also common in construction jobs and is most often attributed to the "head shadowing" effect (Berg et al, 2014), which occurs when one ear is shielded from the noise. The result may be asymmetric hearing loss. This is a common benefit claim in noise-induced hearing loss cases and is often attributed to occupational noise exposure, even when the phenomenon remains unexplained (Dobie, 2014). Binaural measurement and analysis would contribute to the prevention of asymmetric hearing loss by providing a more accurate acoustic assessment of the construction work environment.
**Sound (warning signal) localization**
Sound localization—in particular, of warning signals—on a construction site is another important acoustic phenomenon that requires binaural assessment. Suter (2002) indicated that both hearing protection devices (HPDs) and hearing loss degrade the ability to localize warning signals and to understand speech on a construction site.
Objectives
The objective of this research is to develop a more accurate acoustic measurement and assessment of construction sites, with emphasis on comparing traditional methods (i.e., monaural and SPL-based assessment) to a novel method (i.e., binaural and loudness-based assessment), using the measurements of noise exposure and warning signal audibility. Specifically, the study:
- Tested the viability of using sound recording similar to human hearing on a construction site for noise exposure assessment: Binaural (two-ear) measurements
- Analyzed perception of construction noise similar to human hearing: Binaural psychoacoustic/loudness-based metric
- Exposed significant differences in traditional monaural and binaural assessment of noise exposure on a construction site
- Assessed noise exposure differences across ears on a construction site: asymmetric noise exposure
- Performed preliminary acoustic perception assessment of warning signal localization.
Methods
Measurement Methods
The Full-shift Measurement (FSM) methodology, applied over several days of measurements on a construction site, allows for a comprehensive acoustic assessment and was therefore selected for this study. The FSM relies on piecewise analysis of the recorded sound signals, where the analyzed "pieces", or selected time intervals of the recorded signals, are associated with particular tasks or jobs, following the Task-Based Measurement (TBM) and Job-Based Measurement (JBM) methodologies. The analysis can also be performed over a selected time interval (every hour, or every few hours or minutes). To the best of our knowledge, such measurements and the associated analyses are not available in the published literature on construction noise assessment.
The data collection process was performed at a construction site in Detroit (Figures 1 and 2). Three workers/volunteers participated on each of the three days, using both traditional (dosimeter worn on the shoulder) and binaural (two-ear headset) sound measuring devices. Three dosimeters and three SQobold binaural data acquisition (DAQ) systems were rented from HBK and HEAD Acoustics Inc., respectively. The sound level meter (SLM) measurements were obtained at the exterior of the building, as close as possible to the building and to the workers' location, while remaining far enough away to prevent any interference with the construction activities. Approximately 960 gigabytes of data from all the equipment (dosimeters, SQobold binaural headsets, and the SLM) were recorded over the course of three days.
The data were then organized with a detailed file naming convention and backed up on an external hard drive, a Lawrence Technological University (LTU) network shared drive, and a research laptop dedicated to the project. Worker activities were observed, and comments were obtained from the workers at the end of each day regarding their perception of the levels and loudness of noise experienced throughout the day. Figure 1 illustrates the placement of the wearable devices, including the SQobold DAQ system carried around the waist in a pouch ("fanny pack") during the workday; the most convenient pouch location was chosen by each worker/volunteer. The wire connecting the headset to the DAQ system was covered by the worker's clothing and not exposed to the work environment. Depending on the handedness of the worker, the dosimeter was placed on the shoulder of the working-hand side, presumably closer to the noise sources throughout the day.
Figure 1: Noise exposure measurement setup, with dosimeter (A: red), SQobold binaural headset and DAQ system (B: yellow), and sound level meter (C: blue)
Figure 2: Audibility of warning signals, measurement setup with siren sound emission adjustment (E: bottom/green), and SQobold binaural measurement around the equipment (D: top/yellow)
Prior to the measurements, work analysis with job and task information planned throughout the day was obtained from the volunteers. On the first two measurement days, the volunteers were two carpenters and a plumber. On the third day, the volunteers were three carpenters. Next, time frames were obtained from the binaural recordings based on the types of activities throughout the day. Those time frames were later analyzed using the sound pressure level (dB and dBA) and loudness (sone) metrics, as specified in the project proposal. The warning signal measurements (Figure 2) were also obtained at three locations surrounding the equipment. The warning signal and operating equipment noise signals could not be separately measured, due to the nature of the equipment design (the warning signals could not be activated without the equipment operating and generating noise).
Figure 3 illustrates a preliminary acoustic perception evaluation that was conducted in the LTU Phono Lab with seven human participants, with the warning signal mixed with noise to evaluate warning signal audibility and localization. The participants were normal-hearing individuals, at least 19 years of age, recruited from the general LTU population. The measurements of the warning signal and running equipment noise (measured as presented in Figure 2) were played over headphones (Figure 3). The participants were asked to indicate verbally to the research personnel: 1) whether or not they were able to hear particular sounds (warning signals and speech), and 2) whether or not they were able to determine the direction of the sound (localization).

**Figure 3:** The acoustic perception evaluation of warning signal audibility and localization, conducted in the LTU Phono Lab, using the recorded noise and warning signals shown in Figure 2.
**Analysis Methods**
There were three parts to the analysis.
**Part I** – This part evaluated whether traditional SPL-based assessment and loudness-based assessment reveal differences in the impulsiveness of sounds. An impulsive sound and a steady-state sound may have the same SPL, and hence the same SPL-based noise dosage, even though the impulsive sound is perceived as louder and is known to be more dangerous.
Part II – This part tested a working hypothesis that binaural measurements and analysis can reveal:
A) clinically significant differences (>3 dB) between binaural and traditional SPL-based single-channel dosimeter noise exposure assessments
B) asymmetric noise exposure (≥1 dB between the left and right ears, 1 dB being an approximately perceptible difference in level).
Part III determined whether the localization of the recorded warning signal in noise is possible using acoustic perception evaluation.
Procedure for Analysis: SPL and Loudness Calculations (Parts I and II)
An example setup for both SPL and loudness analysis, performed for each time frame, using BK Connect software, is shown in Figure 4 (left and right, respectively). The SPL calculation overview is shown in Figure 5. The SPL calculation utilizes the root mean square (RMS) value of the sound pressure measurements ($p$), over the selected time frame, associated with a job, task, shift, or specific time duration of a noise event of interest. The reference pressure, $P_{\text{ref}}$, is 0.00002 Pa. The exposure level calculations, based on SPL assessment (Figure 5), utilized the methods described in ISO 9612. The binaural loudness calculations overview, based on the ISO 532-2 algorithm, is illustrated in Figure 6.
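As a concrete illustration of the SPL calculation above, the following minimal sketch computes the RMS-based level for a selected time frame (the function name and interface are ours, not part of BK Connect; pressure samples are assumed to be in pascals):

```python
import numpy as np

P_REF = 2e-5  # reference pressure, 0.00002 Pa (20 uPa)

def spl_db(pressure_pa):
    """Sound pressure level (dB re 20 uPa) from a pressure time series,
    using the RMS of the samples over the selected time frame."""
    rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(rms / P_REF)
```

For example, a signal whose RMS pressure is 0.02 Pa yields 20·log10(0.02/0.00002) = 60 dB.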
Figure 4: BK Connect software analysis interface showing example SPL (left) and binaural loudness (right) analysis, data processing parameters and the results, as explained in Figure 5 and Figure 6, respectively.
Figure 5: SPL analysis overview for each selected measurement time frame (as a function of time, and overall). See Figure 4 (left) for BK Connect software setup for this calculation.
ISO 532-2 Binaural Loudness Calculation Algorithm
- Third-octave levels [dB(A)] vs. frequency [Hz]
- Fixed filter for the transfer of the sound source to the eardrum
- Fixed filter for the transfer through the middle ear
- Transform the spectrum to an excitation pattern
- Transform the excitation to specific loudness
- For each ear (left and right): calculate the inhibition function, then the inhibited specific loudness
- Sum the inhibited loudness values and calculate the area under the specific loudness pattern
- Output: binaural loudness [sone]
Figure 6: The ISO 532-2 algorithm for binaural loudness (BL) calculations. See Figure 4 (right) for BK Connect software setup for this calculation.
Procedure for Localization Evaluation (Part III)
The participants of the acoustic perception evaluation (Figure 3) were asked to indicate verbally to the research personnel whether they were able to hear particular sounds (warning signals and speech), and whether they were able to determine the direction of sound (localization).
It should be noted that, due to the nature of heavy construction equipment, the warning signal measurements used for the acoustic perception evaluation could not be obtained separately from the background noise while the equipment was operating, as originally planned. Therefore, a comprehensive warning signal audibility and localization assessment could not be conducted in this project. Such an assessment would require advanced noise filtering algorithms to separate the warning signal and noise measurements and to then adjust the signal-to-noise ratio (SNR) to determine warning signal audibility threshold levels. Methodical data collection with a wider variety of listener/receiver directionality configurations for signal localization studies is recommended for future research, along with the research and development of appropriate noise filtering algorithms.
**Accomplishments, Including Relevance and Practical Application**
Significant deficiencies were discovered in the results supplied by the dosimeter measurements, currently the most commonly used occupational noise exposure assessment method. For example, the format of the dosimeter recordings does not allow for the extraction of time-varying acoustic parameters or raw data; it is impossible to extract and analyze specific loud events, such as impulses perceived by the workers at various times throughout the day. The dose (%) was provided only as a single calculated value for the entire sampling period, i.e., the duration of the entire work day.
Additionally, the placement of the dosimeter relative to the noise sources was inherently inconsistent, depending on the handedness of the worker and the type of work. In terms of the analysis, the hazardousness of different types of sounds could not be identified using dosimeter measurements and dosage calculations based on physical measurements of the SPL alone.
The sound level meter (SLM) results were not comparable to the dosimeter and binaural measurements due to the fact that, for practical and safety reasons, they could not be placed at the exact work locations where noise was perceived by the workers. For greatest accuracy, noise exposure assessment needs to be obtained by a wearable device, ideally using a binaural data acquisition and analysis method.
The binaural measurement method was validated on two construction sites with minimal risk to the subjects; these risks were limited to, for example, embarrassment or annoyance over wearing the device or mild discomfort from wearing it. Using the measurements, cases of asymmetric noise exposure were identified, and impulsive (more hazardous) noise was quantified using a binaural loudness metric (ISO 532-2; Figure 6).
The study exposed the need to quantify situational awareness on construction sites through detailed and methodical measurement of signal and noise. The inability to localize warning signals was revealed in a laboratory setting using human participants, with measured locations, directions, and equipment. For a future study, an advanced filtering algorithm needs to be developed to extract the warning signal from the concurrently measured noise, so that the signal-to-noise ratio can be adjusted in laboratory acoustic perception evaluations for studies of sound localization in noise and for hearing protection device (HPD) development and optimization.
Results
Part I – This analysis evaluated whether traditional SPL-based assessment and loudness-based assessment reveal differences in the impulsiveness of sounds: an impulsive sound and a steady-state sound may have the same SPL, and hence the same SPL-based noise dosage, even though the impulsive sound is perceived as louder and is known to be more dangerous. Figure 7 exemplifies this phenomenon using the construction site noise.

**Figure 7:** Binaural loudness (from binaural measurements) vs dosimeter
The traditional single-channel dosimeter measurement of an impulsive sound at a particular SPL (in dBA) is not well correlated with the binaural loudness (in sones) of the same sound. In other words, sound exposures characterized by the same SPL, and hence the same SPL-based noise exposure assessment, can have different loudness levels and potentially different hearing damage risks, given that impulsive (more dangerous) sounds are the louder ones.
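The disconnect between SPL and loudness can be illustrated with Stevens' classic rule of thumb for steady sounds, under which loudness in sones doubles for every 10-phon increase above 40 phons. Impulsive sounds deviate from this simple mapping, which is exactly why a dedicated model such as ISO 532-2 is needed; the helper below is our illustration, not part of the standard:

```python
def sones_from_phons(phons):
    """Stevens' approximation for steady sounds: 40 phons = 1 sone,
    and loudness doubles with every 10-phon increase."""
    return 2.0 ** ((phons - 40.0) / 10.0)

# Two steady sounds 10 phons apart differ by a factor of two in loudness,
# even though their dB values differ by only 10.
```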
Part II – Testing a working hypothesis that binaural measurements and analysis can reveal:
A) clinically significant differences (>3 dB) between binaural and traditional SPL-based single-channel dosimeter noise exposure assessments
We found that SPL values from the binaural measurements were always higher than those from the single-channel dosimeter measurements. The SPL difference across ears, an indication of asymmetrical noise exposure, particularly for impulsive events throughout the day, was between 5 to 10 dB. The difference between the dosimeter measurement and the average SPL between ears (captured by the binaural DAQ), especially for impulsive events, was between 5 and 20 dB. These SPL differences are clinically significant (>3 dB): The daily, permissible noise dose calculation is based on the noise exposure level and the duration of the exposure. For each noise level increase of 3 dB (NIOSH, 1998) or 5 dB (OSHA, 1983), the noise dose doubles. The SPL differences found using binaural measurement, compared to the dosimeter measurements, in the preliminary data collection would inevitably result in critical differences in interpreting the hearing loss risk (dose) calculation and would yield underestimation of true noise exposure.
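The dose arithmetic referenced above can be sketched as follows. This is a minimal illustration of the exchange-rate rule, with defaults following NIOSH (1998); the OSHA (1983) values appear in the usage comment. The function names are ours:

```python
def allowed_hours(level_dba, criterion=85.0, exchange_rate=3.0, t_criterion=8.0):
    """Permissible exposure duration in hours at a given A-weighted level.
    Each `exchange_rate` dB above the criterion level halves the allowed time."""
    return t_criterion / 2.0 ** ((level_dba - criterion) / exchange_rate)

def dose_percent(level_dba, hours, **kwargs):
    """Noise dose (%) for `hours` of exposure at `level_dba`."""
    return 100.0 * hours / allowed_hours(level_dba, **kwargs)

# NIOSH (85 dBA criterion, 3 dB exchange rate): 88 dBA for 8 h -> 200 % dose.
# OSHA (90 dBA criterion, 5 dB exchange rate):
#   dose_percent(95.0, 8.0, criterion=90.0, exchange_rate=5.0) -> 200 % dose.
```

This is why a measurement error of a few dB is clinically significant: a 3 dB (NIOSH) or 5 dB (OSHA) underestimate of the exposure level halves the computed dose.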
A two-sample t-test was used to compare the means of the two data sets (binaural and dosimeter/single-channel). The null hypothesis was that the mean SPL difference between the data sets equals 3 dB; for this first test, the alternative hypothesis was that the mean difference is greater than 3 dB. This determines whether the data sets are significantly different. All of the collected data sets are approximately normally distributed, a requirement for using a t-test. The statistical analysis, performed using Minitab software, yielded the results in Table 1.
**Table 1: Statistical analysis of binaural and single channel noise exposure measurements (Part II, A)**
| Sample | N | Mean | StDev | SE Mean | Difference | 98% Lower Bound for Difference |
|-----------------|-----|-------|-------|---------|------------|--------------------------------|
| Louder Ear A-Weight | 167 | 95.37 | 6.22 | 0.48 | 7.975 | 6.456 |
| Dosimeter A-Weight | 167 | 87.39 | 7.2 | 0.56 | | |
* \( \mu_1 \): population mean of Louder Ear A-Weight
\( \mu_2 \): population mean of Dosimeter A-Weight
Difference: \( \mu_1 - \mu_2 \)
| Null hypothesis* | \( H_0: \mu_1 - \mu_2 = 3 \) |
|------------------|-------------------------------|
| Alternative hypothesis* | \( H_1: \mu_1 - \mu_2 > 3 \) |
| T-Value | DF | P-Value |
|---------|----|---------|
| 6.76 | 325| 0 |
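A test of this form can be reproduced in outline with SciPy. The data below are simulated to match Table 1's reported summary statistics, not the project's actual measurements; shifting one sample by the hypothesised 3 dB reduces the test of H0: μ1 − μ2 = 3 to a standard two-sample comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-event A-weighted SPLs matching Table 1's means/SDs (n = 167 each).
louder_ear = rng.normal(95.37, 6.22, 167)
dosimeter = rng.normal(87.39, 7.20, 167)

# H0: mu1 - mu2 = 3 dB vs H1: mu1 - mu2 > 3 dB.
# Subtracting 3 dB from the first sample turns H0 into "means are equal",
# tested with Welch's (unequal-variance) one-sided two-sample t-test.
t_stat, p_value = stats.ttest_ind(louder_ear - 3.0, dosimeter,
                                  equal_var=False, alternative="greater")
```

With Welch's correction the degrees of freedom are fractional; Minitab reports DF = 325 for the project's data.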
B) asymmetric noise exposure (≥1 dB between the left and right ears, 1 dB being an approximately perceptible difference in level).
The statistical analysis performed using Minitab software yielded the results in Table 2. The P-value is 0.002, which is less than the significance level of 0.02, so there is a statistically significant difference between the data sets: we reject the null hypothesis that the mean difference is 3 dB and accept the alternative hypothesis that it differs from 3 dB. The 98% confidence interval for the mean difference (Sample 1 − Sample 2) is −0.695 to 2.512 dB, and the observed difference between the two data sets' means is 0.908 dB. This difference between the ears (approximately 1 dB) is perceptible and therefore significant.
**Table 2: Statistical analysis of asymmetry of noise exposure using left and right ear measurements (Part II, B)**
| Sample | N | Mean | StDev | SE Mean | Difference | 98% CI for Difference |
|-----------------|-----|-------|-------|---------|------------|------------------------|
| Left Ear A-Weight | 167 | 94.74 | 6.22 | 0.48 | 0.908 | (-0.695, 2.512) |
| Right Ear A-Weight | 167 | 93.83 | 6.32 | 0.49 | | |
* \( \mu_1 \): population mean of Left Ear A-Weight
\( \mu_2 \): population mean of Right Ear A-Weight
Difference: \( \mu_1 - \mu_2 \)
| Null hypothesis* | \( H_0: \mu_1 - \mu_2 = 3 \) |
|------------------|-------------------------------|
| Alternative hypothesis* | \( H_1: \mu_1 - \mu_2 \neq 3 \) |
| T-Value | DF | P-Value |
|---------|----|---------|
| -3.05 | 331| 0.002 |
**Part III** – For the three configurations tested, all of the participants were able to hear the warning signals; however, none of the participants were able to identify the direction of the warning signal emitted from the equipment. Further, there are no known acoustical metrics for the objective evaluation of warning signal localization in noise. This research paves the way for this important area of future construction safety research.
For more sample measurement analysis examples, see Appendices A, B, and C. Appendix A shows a significant (greater than 3 dB) difference between the monaural/dosimeter and binaural/SQobold sound pressure levels (SPL) for an example event (Part I of the analysis), analyzed over the same time period and duration on both devices. Appendix B shows a difference of approximately 8 dB between the dosimeter and the SQobold DAQ system (averaged across ears) SPL values, as well as a 3 dB difference across ears, for another example event (Part II of the analysis). Appendix C illustrates an example of traditional single-channel dosimeter measurement results, using the Work Noise Partner software.
**Changes/problems that Resulted in Deviation from the Methods**
The warning signal audibility and speech intelligibility evaluation could not be performed using the adaptive procedure. In an adaptive procedure, the presentation level of the warning signal or speech in noise is increased or decreased by a fixed amount, depending on the listener's ability to hear the recorded signal, in a laboratory environment (the Phono Lab), as judged by the research personnel. Because warning signals and noise on the construction site could not be measured separately (warning signals had to be emitted while the equipment was operating), the challenge was separating the warning signal from the background noise. An acoustic perception evaluation was still performed (Figure 3) by evaluating warning signal audibility and localization. In all cases, based on the auditory cues from the headphones (i.e., the recorded signals), the study participants could not correctly determine the direction of the warning signal.
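For reference, the adaptive procedure described above can be sketched as a simple 1-up/1-down staircase. This is a generic illustration of the technique, not the project's implementation; `heard` stands in for the listener's response:

```python
def adaptive_staircase(heard, start_snr=0.0, step=2.0, reversals_needed=6):
    """1-up/1-down staircase: lower the SNR by `step` dB after the signal is
    heard, raise it by `step` dB after a miss, and estimate the threshold as
    the mean SNR at the reversal points. `heard(snr)` returns True when the
    listener detects the signal at that SNR."""
    snr, last_direction, reversals = start_snr, None, []
    while len(reversals) < reversals_needed:
        direction = -1 if heard(snr) else +1  # harder after a hit, easier after a miss
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)  # the run changed direction: record a reversal
        last_direction = direction
        snr += direction * step
    return sum(reversals) / len(reversals)

# A listener who reliably hears the signal at SNR >= -6 dB makes the staircase
# oscillate between -6 and -8 dB, so the estimated threshold lands near -7 dB.
```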
**Future Funding Plans**
The goal of this study was to improve the accuracy of the assessment of noise and warning signal perception on construction sites. The effectiveness of hearing protection devices was not assessed. However, improved acoustic assessment methods and an accurate evaluation of the sound field on a construction site can potentially improve the development and optimization of hearing protection devices and, in the case of warning signals, of sound enhancement algorithms for improved audibility. PI Samardzic and Co-Is Karatas and Esmaeili are planning to resubmit an R21/NIH grant application in March 2023 (originally submitted in June 2022) in order to expand the scope of this research.
**List of Presentations/publications, Completed and/or Planned**
Two journal papers are currently under development, to be submitted to journals such as Professional Safety, Safety Science, the Journal of Safety Research, or the Noise Control Engineering Journal.
Dissemination Plan
The results of this study will be disseminated locally, regionally, and nationally through multiple concurrent efforts.
- Research outcomes and tools will be available on the websites of CPWR, Lawrence Technological University, University of Illinois – Chicago, and Purdue University.
- Social networking accounts of PIs (e.g., LinkedIn, Research Gate) will be used to disseminate the outcomes of research.
- To encourage students to pursue a career in construction safety, a workshop will be provided for the AGC student chapters at the Lawrence Technological University, University of Illinois – Chicago, and Purdue University. In addition, we will train graduate and undergraduate students on the importance of construction noise exposure and improve noise management on construction sites as part of the Construction Management curriculum in our universities.
References
Arezes, P. M., Bernardo, C. A., & Mateus, O. A. (2012). Measurement strategies for occupational noise exposure assessment: A comparison study in different industrial environments. *International Journal of Industrial Ergonomics, 42*(1), 172-177.
https://www.academia.edu/13656107/Measurement_strategies_for_occupational_noise_exposure_assessment_A_comparison_study_in_different_industrial_environments
Berg R.L., Pickett, W., Linneman, J.G., Wood D.J., Marlenga B. (2014). Asymmetry in noise-induced hearing loss: evaluation of two competing theories. *Noise Health, 16* (69), 102-107.
https://www.noiseandhealth.org/article.asp?issn=1463-1741;year=2014;volume=16;issue=69;spage=102;epage=107;aulast=Berg
Blauert, J. (1997) *Spatial Hearing: The Psychophysics of Human Sound Localization*, MIT Press, Cambridge, Mass, pp. 427.
Bonnet, F., Nelisse, H., Nogarolli, M. and Voix, J. (2020). Individual in situ calibration of in-ear noise dosimeters. *Applied Acoustics, 157*(1).
https://critias.etsmtl.ca/wp-content/plugins/zotpress/lib/request/request.dl.php?api_user_id=464010&dlkey=YBYEGTP2&content_type=application/pdf
Bronkhorst, A.W. & Plomp, R. (1988). The effect of head-induced interaural time and level differences on speech intelligibility in noise. *J. Acoust. Soc. Am., 83*, 1508-1516.
https://pubmed.ncbi.nlm.nih.gov/3372866/
Dabirian, S., Han, S. H., & Lee, J. (2020). Stochastic-based noise exposure assessment in modular and off-site construction. *Journal of Cleaner Production, 244*, 118758.
https://agris.fao.org/agris-search/search.do;jsessionid=0381FEA3F4811BFFF24BC90639D67FBD?request_locale=ru&recordID=US202000065824&query=&sourceQuery=&sortField=&sortOrder=&countryResource=&agrovocString=&advQuery=&centerString=&enableField=
Dobie R.A. (2014). Does occupational noise cause asymmetric hearing loss?. *Ear Hear, 35*(5), 577-579.
https://pubmed.ncbi.nlm.nih.gov/24879031/
Fedorko, G., Heinz, D., Molnár, V., & Brenner, T. (2019). Modelling Noise Exposure by Means of Infrared Technology. *Advances in Science and Technology. Research Journal, 13*(4).
http://www.astrj.com/Modelling-Noise-Exposure-by-Means-of-Infrared-Technology,112683,0,2.html
Fernández, M. D., Quintana, S., Chavarría, N., & Ballesteros, J. A. (2009). Noise exposure of workers of the construction sector. *Applied Acoustics, 70*(5), 753-760.
https://www.infona.pl/resource/bwmeta1.element.elsevier-27a025bb-1a70-3d8b-88bb-69e06bea3291
Genuit, K. (2004). The sound quality of vehicle interior noise: a challenge for the NVH-engineers. *Int. J. Vehicle Noise. Vib.*, 1, 158-168.
https://www.inderscienceonline.com/doi/abs/10.1504/IJVNV.2004.004079
Hong, T., Ji, C., Park, J., Leigh, S. B., & Seo, D. Y. (2015). Prediction of environmental costs of construction noise and vibration at the preconstruction phase. *Journal of Management in Engineering*, 31(5), 04014079.
https://ascelibrary.org/doi/10.1061/%28ASCE%29ME.1943-5479.0000313
Hugues, N., Bonnet, F., Voix, J. (2018). In-ear noise dosimetry: challenges and benefits. *Euronoise 2018*, Crete, Greece.
https://www.euronoise2018.eu/docs/papers/101_Euronoise2018.pdf
ISO 532-2 (2017). *Acoustics - Methods for calculating loudness - Part 2: Moore-Glasberg method* (International Organization for Standardization, Geneva).
https://www.iso.org/standard/63078.html
ISO 9612 (2009). *Acoustics — Determination of occupational noise exposure — Engineering method* (International Organization for Standardization, Geneva).
https://www.iso.org/standard/41718.html
Kardous C.A., Willson R.D., Murphy W.J. (2005). Noise dosimeter for monitoring exposure to impulse noise. *Applied Acoustics*, 66(8), 974–985.
https://www.researchgate.net/publication/222232975_Noise_dosimeter_for_monitoring_exposure_to_impulse_noise
Kerns E., Masterson E.A., Themann C.L., Calvert G.M. (2018). Cardiovascular conditions, hearing difficulty and occupational noise exposure within U.S. industries and occupations. *American Journal of Industrial Medicine*, 61, 477-491.
https://pubmed.ncbi.nlm.nih.gov/29537072/
Lawrence, B. J., Jayakody, D. M., Bennett, R. J., Eikelboom, R. H., Gasson, N., & Friedland, P. L. (2020). Hearing loss and depression in older adults: a systematic review and meta-analysis. *The Gerontologist*, 60(3), e137-e154.
https://pubmed.ncbi.nlm.nih.gov/30835787/
Legris, M. & Poulin, P. (1998). Noise exposure profile among heavy equipment operators, associated laborers, and crane operators. *American Industrial Hygiene Association Journal*, 59(11), 774-778.
https://www.tandfonline.com/doi/abs/10.1080/15428119891010947
Li, X., Song, Z., Wang, T., Zheng, Y., & Ning, X. (2016). Health impacts of construction noise on workers: A quantitative assessment model based on exposure measurement. *Journal of Cleaner Production*, 135, 721-731.
https://app.dimensions.ai/details/publication/pub.1001113308
Mahdi, J. F., Akhter, N., Khan, A. W., & Irfan, S. (2020). Health Hazards Associated with Noise Due to Civil Construction Activity. *Journal of Southwest Jiaotong University, 55*(2). [https://www.jsju.org/index.php/journal/article/view/562](https://www.jsju.org/index.php/journal/article/view/562)
Masterson E.A., Deddens J.A., Themann C.L., Bertke S. & Calvert GM. (2015). Trends in worker hearing loss by industry sector, 1981-2010. *American Journal of Industrial Medicine, 58*, 392-401. [https://pubmed.ncbi.nlm.nih.gov/25690583/](https://pubmed.ncbi.nlm.nih.gov/25690583/)
Moore, B.C.J. (2012). *An Introduction to the Psychology of Hearing, 6th Ed.*, Brill, Leiden, The Netherlands, 1-441.
NIOSH (1998). *Criteria for a Recommended Standard, Occupational Noise Exposure*. DHHS (NIOSH) Publication No. 98–126. [https://www.cdc.gov/niosh/docs/98-126/default.html](https://www.cdc.gov/niosh/docs/98-126/default.html)
OSHA (1983). Occupational Noise Exposure; Hearing Conservation Amendment; 29 CFR 1910.95, Final Rule. [https://www.osha.gov/laws-regs/regulations/standardnumber/1910/1910.95](https://www.osha.gov/laws-regs/regulations/standardnumber/1910/1910.95)
Rabinowitz, P.M., Galusha, D., Dixon-Ernst, C., Clougherty, J.E., Neitzel, R.L. (2013). The dose response relationship between in-ear occupational noise exposure and hearing loss. *Occup Environ Med, 70*(10), 716–721. [https://pubmed.ncbi.nlm.nih.gov/23825197/](https://pubmed.ncbi.nlm.nih.gov/23825197/)
Reeb-Whitaker, C. K., Seixas, N. S., Sheppard, L., & Neitzel, R. (2004). Accuracy of task recall for epidemiological exposure assessment to construction noise. *Occupational and environmental medicine, 61*(2), 135-142. [https://www.jstor.org/stable/27732178](https://www.jstor.org/stable/27732178)
Seixas, N. S., Ren, K., Neitzel, R., Camp, J., & Yost, M. (2001). Noise exposure among construction electricians. *AIHAJ-American Industrial Hygiene Association, 62*(5), 615-621. [https://pubmed.ncbi.nlm.nih.gov/11669388/](https://pubmed.ncbi.nlm.nih.gov/11669388/)
Seixas, N. S., Neitzel, R., Stover, B., Sheppard, L., Feeney, P., Mills, D., & Kujawa, S. (2012). 10-Year prospective study of noise exposure and hearing damage among construction workers. *Occupational and environmental medicine, 69*(9), 643-650. [https://pubmed.ncbi.nlm.nih.gov/22693267/](https://pubmed.ncbi.nlm.nih.gov/22693267/)
Suter, A. H. (2002). Construction noise: exposure, effects, and the potential for remediation; a review and analysis. *AIHA J, 63*(6), 768-789. [https://pubmed.ncbi.nlm.nih.gov/12570087/](https://pubmed.ncbi.nlm.nih.gov/12570087/)
APPENDIX
A1 – Dosimeter Measurement Example: Work Noise Partner software indicating a sound pressure level of 101.0 dBA for a work event
A2 – SQobold DAQ system Binaural Measurement Example: Artemis software presentation (sound pressure vs time) of the same data section/event captured by the dosimeter in A1
A3 – Artemis software sound pressure level calculation of the data section captured by the dosimeter (A1), resulting in 105.3 dBA (left ear) and 104.7 dBA (right ear)
B1 – Dosimeter Measurement Example: Work Noise Partner software indicating a sound pressure level of 94.8 dBA for a work event
B2 – SQobold DAQ system Binaural Measurement Example: Artemis software presentation (sound pressure vs time) of the same data section/event captured by the dosimeter in B1
B3 – Artemis software sound pressure level calculation of the data section captured by the dosimeter (B1), resulting in 114 dBA (left ear) and 111 dBA (right ear)
C1 – Dosimeter Measurement Example Result, Work Noise Partner software
SILENCE IS GOLDEN
review
Silent Yacht 55
WORDS BY KEVIN GREEN
PHOTOGRAPHY BY KEVIN GREEN AND SUPPLIED
Solar cell development is moving at pace, creating ever more uses for the technology – something Silent Yachts company founder Michael Köhler is acutely aware of. Many years of voyaging on conventional power boats led him and his wife Heike to build their first electric prototype, which they sailed for five years and 15,000nm before founding the company and constructing their first production Solarwave 46 in 2009. She was the first renewable-powered blue-water catamaran.
I first met the Köhlers in 2017 when they arrived at La Grande Motte boat show with the Silent 64, which I took out for sea trials. I recall gliding along on the Mediterranean at 10 knots while the twin electric motors hummed quietly, consuming 64 kilowatts, before we throttled back to a cruising speed of 6.6 knots for a more sustainable consumption of 31 kilowatts. I steered with a Raymarine autopilot dial and the twin throttles (but a conventional wheel clips into place for traditionalists).
The latest boat, the Silent 55, has advanced substantially from the 64, especially with the developments in much more powerful lithium batteries. Along with the Silent 55 and the Silent 64, the other models in the range are the Silent 55 VIP Ferry and the upcoming Silent 79.
The solar-powered Silent 55 catamaran is successfully pioneering renewable power in a quality and seaworthy design.
THE ELECTRIC PROPOSITION
The Silent Yachts proposition to buyers is that their systems require hardly any maintenance and produce no fumes or noise so the operational costs are substantially lower compared to power yachts using more traditional propulsion systems. This Silent 55 cruises in the remote Mergui Archipelago in Myanmar during winters so has to be self-sufficient.
The company offers several varieties of its vessels, including sailing versions, kite-powered versions, electric ones and a hybrid model. Our review boat was the electric version with upgraded 2 x 135kW motors that achieve 14 knots and a large optional diesel generator.
At first glance the Silent 55 looks like many other power catamarans – a tall flybridge above a large squared-off saloon and spacious lounge decks fore and aft. Looking closer revealed the reason for the large topside structures – these areas host the solar panels.
The overall shape is fairly sleek and low-slung to reduce windage, but maximising the extensive interior space requires upright topsides and squared bulkheads in the saloon, which somewhat compromises the aesthetics. So it doesn't have the smooth curves of a Tesla electric car but, like those vehicles, what counts is under the hood.
Solar power comes from 30 panels rated for approximately 10 kilowatt-peak output, controlled by a smart solar regulator – a maximum power point tracking (MPPT) unit that manages the energy going into the lithium batteries, storing power for night-time cruising – while a 15kVA inverter converts DC to AC for all household appliances.
“What this represents to the yachtsman, among other features, is the ability to cruise for many hours at normal speed and throughout the entire day and evening at reduced speed,” says Köhler.
APARTMENT-STYLE SALOON
Cruising catamarans attract buyers for their comfort and stability. The two hulls allow them to carry heavy loads and, of course, contain a lot of living space. Looking around the Silent 55, it doesn't disappoint in any of these areas.
Walking inside from the sheltered aft deck reveals a spacious saloon, with the galley at the doorway, a dinette on the forward port quarter and the steering console opposite. Ahead is a deck-level owner's suite, an unusual feature; up to six cabins can be optioned on the Silent 55.
The U-shaped galley on the port side comprises a deep sink, electric cooktop and dishwasher, with worktop space nearby on the starboard side as well, making it an effective cooking space with lots of cupboards – ideal for blue-water voyages. Refrigeration consumes the most energy on yachts but it is essential, so there's a drawer fridge; in addition, our review boat had a large upright household fridge.
The other big consumer is the air-conditioning – a 50,000 BTU reverse-cycle unit on the Silent 55 that heats as well as cools. Most usefully, there's a water-maker powered by the solar-electric system, which produces 100 litres per hour – enough to supply a boat load of passengers. Other good galley features include a sliding window that opens aft to the outside dinette area.
OPPOSITE & ABOVE As with a conventional cat, the appeal is in the expansive spaces.
LEFT The neat electric motor installation and its supporting power supply.
2019 UPGRADE TO SILENT YACHT 55
The Cannes 2019 Boat Show saw the arrival of an upgraded Silent 55. Key improvements include higher-powered 250kW motors as standard and increased stored power (210kWh batteries compared with 70kWh).
More efficiencies have been found through a redesigned drive-train that minimises friction and reduces mechanical noise. Following customer feedback, some interior design improvements have also been made, says Köhler. "We did these updates and changes because we always try to improve and to install the best and latest technology available to satisfy our clients. We have built one new Silent 55 already and we've got three more orders for this model, which shows that we're heading in the right direction."
Moving to the middle of the saloon brings me to another set of large cupboards that also houses the retractable television – ideal for viewing from the dinette, which has an adjustable table (making it a day bed) and is surrounded on two sides by settee space. A step up on the starboard quarter is the navigation station, which offers clear views all around and has a steering wheel with two seats.
An array of Raymarine instrumentation controls the Silent 55, including the essential autopilot. The main controls are the throttles and, beside them, two joysticks for the Lewmar 10kW thrusters located in each hull. The other essential screen is the small power-consumption display, which uses simple bar charts to show usage.
TOP RIGHT The foredeck trampolines are always a prime spot for chilling.
ABOVE With all this seating, it’s readymade for a party.
DECK CABIN
Three of the Silent 55 layouts have a forward owner's stateroom that capitalises on the 8.46m beam for a spacious layout with a walkaround double berth. Stepping in here from the saloon reminds me of a superyacht I was recently aboard, such is the space and airiness of this owner's suite.
The advantages of deck-level accommodation are many: all-round windows that avoid the claustrophobia some guests feel in the hulls, close proximity to the helm so the owner can quickly check on navigation and, on the Silent 55, extensive ablutions.
Located in the starboard hull, down a few steps, the bathroom uses a large part of the hull on each side of these stairs. At the rear is a shower cubicle while up front there's a sink – an interesting bamboo unit – with the head in the forepeak. Guests enjoy a longitudinal VIP cabin on the starboard side and two other generously proportioned doubles in the port hull, all with ensuite bathrooms.
These wide hulls allow beds to run athwartships, using the entire beam of each hull while also saving floor space in the cabins. Joinery and finish are outstanding throughout the interior of the Silent 55, reflecting the hand-built approach.
FLYBRIDGE & DECKS
From the aft deck a ladder leads me up to the flybridge where there’s a lounge and wide helm seats with small table to port. Handily, the helm seats flip to create a convivial seating area. The helm echoes all the controls from the main console, so includes the twin throttles, thrusters and autopilot but the views are clearer all round.
The flybridge fibreglass roof shades the entire topside and, cleverly, is movable with sturdy stainless struts allowing it to pivot fore and aft. By doing this it can also seal the entire flybridge, making it weatherproof, reducing windage and maximising the solar arrays. Solar arrays cover the roof, the area aft of the flybridge and the saloon roof.
Having solar on my own yacht, one early lesson I learned was that panels must be kept immaculately clean of even dust and light grime if maximum performance is to be attained. So bird poo is a major challenge in many parts of the world.
Back on deck, man-made teak under foot gave good grip as I sat at the aft saloon to enjoy alfresco views and easy access to the sea via the moulded steps. Between each hull the dinghy is slung high and clear of the water but is easily launched for that run ashore. Moving forward, the wide decks and high safety rails reassured me.
I found yet another unusual feature midway along the saloon side where a two-seater lounge seat is indented. At the bows yet more lounge space was available in a sunken bulkhead between the trampolines and in sunbeds above this on the saloon roof. The centre spine of the Silent 55 contained a 1500-watt windlass with capstan which has a 30kg anchor and 100m of galvanised chain.
BUILD AND SYSTEMS
Catamaran performance depends on some key design features, notably the hull volume that allows loads to be carried and the clearance beneath the bridgedeck. The optimum bridgedeck height clears the waves without compromising stability, so at 1.0m unloaded the Silent 55 has reasonable clearance.
The overall hull shape, with a fine entry allowing good windward performance and volume aft for the main cabins, also looked seaworthy. Designed with sealed deck sections and collision compartments for safety, it uses watertight bulkheads and integrated interior furniture to create a stiff hull structure.
The Silent 55 is built using vacuum-bagged resin infusion to create a lightweight glass-sandwich composite construction hull, reinforced with carbon fibre at stress points, and uses vinylester resin to prevent osmosis blistering. Our review boat was hull number five, and several others were in production back at the yard in Austria.
Looking at the main systems, stored power is in 28 lithium Victron batteries (weighing 800kg and storing 140kWh). "These latest batteries are about 40% cheaper than the ones we fitted to the Silent 64," says Köhler.
These are charged by the 32 solar panels, each outputting 370 watts (total theoretical output 11,840 watts). Panel size is 1.5m by 1.0m – all neatly integrated into the superstructure of the Silent 55 and marinised. The electrical system can support a powered swim platform and allows the thrusters to be run off the batteries (with the generator).
SAILING THE MEDITERRANEAN
Leaving the Cannes dock was without drama, thanks to the thrusters on each hull pushing the 17-ton catamaran sideways before we silently moved ahead and through the busy Vieux Port. Sitting alongside Köhler on the wide flybridge seating the views were clear forward, allowing us to safely reach the open sea where our speed increased from 3 to 6.3 knots. The engine gauges showed us consuming 28 kilowatts while incoming power from the panels was listed at 1.3 kilowatts due to the cloudy day.
No appreciable sound came from the motors, prompting me to go below for a look at them. Opening the engine hatch revealed the UQM motor – similar to those used in forklifts – and it emitted a humming sound about as loud as an average human voice; 50 decibels.
The American UQM company supplies a significant part of the electric marine sector and claims the world's highest-powered electric outboard (180hp). It also has a background in the automotive industry: powering a fleet of BMW E1 electric vehicles in California and the GM Precept hybrid car are among many other UQM projects.
Back on the flybridge I took over the steering, pushed the throttles down and watched our speed increase to 10 knots, with consumption at 80 kilowatts and no vibration felt through the hull. At this rate, combined with the dull day, our batteries were discharging, so to combat this the optional 100kW Volvo generator is programmed to kick in and add amps to the system when the batteries fall to 30%.
But in sunny weather, running at 5 knots, the Silent 55 can cover 100 miles daily. The hydraulic steering wheel felt heavy – something the company was going to rectify, as heavy steering makes the autopilot consume more energy. But turning the wide catamaran was done easily as she gently glided around, with no smoke or noise. All I could hear was the water streaming past the fibreglass hulls; all I could feel was the wind on my face.
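For readers who want to reproduce the arithmetic behind these figures, a minimal endurance estimate can be sketched as below. The battery capacity, propulsion draw, cloudy-day solar input and the 30% generator cut-in are the numbers quoted in this review; everything else (steady loads, no losses) is a simplifying assumption.

```python
def endurance_hours(battery_kwh, draw_kw, solar_kw=0.0, reserve_frac=0.30):
    """Hours until the battery reaches its reserve level, given a steady
    propulsion draw and an average solar input (a simplified model)."""
    net_kw = draw_kw - solar_kw
    if net_kw <= 0:
        return float("inf")  # solar covers the load: indefinite cruising
    usable_kwh = battery_kwh * (1.0 - reserve_frac)
    return usable_kwh / net_kw

# Figures from the review: 140 kWh bank, 28 kW draw at 6.3 knots,
# 1.3 kW of solar on a cloudy day, generator cut-in at 30% charge.
hours = endurance_hours(140, 28, solar_kw=1.3)
print(round(hours, 1))  # about 3.7 h before the generator kicks in
```

In bright sun, once solar input matches the low-speed draw, the net consumption reaches zero and range is limited only by daylight hours, which is the basis of the "100 miles daily" claim.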
**SPECIFICATIONS**
- **loa** 13.4m
- **Beam** 7.2m
- **Draft** 0.75m
- **Light displacement** 11 tons
- **Water** 250 litres
- **Wastewater** 250 litres
- **Fuel** 250 – 500 litres
- **Solar** 9,000W
- **E-Motors** 2 x 30kW / 2 x 80kW
- **Generator** 22 kW / 100 kW
- **Cruise Speed** 6–8kt / 6–10kt
- **Top Speed approx.** 12 kt / 15 kt
- **CE Certification** A
---
**Silent Yacht 55**
**PACKAGES FROM $2.52 million**
**CONCEPT & INNOVATION**
Michael Köhler
**MANUFACTURED BY**
Silent Yachts
www.silent-yachts.com
**HIGHLIGHTS**
- The silence – it’s almost eerily quiet
- Multiple areas for relaxing
- Quality finishes – she’s a gem
**SPECIFICATIONS**
- **loa** 16.70m
- **lwl** 16.52 m
- **Beam** 8.46m
- **Displacement light** 17,200kg
- **Draft** 0.64m
- **Bridgedeck height** 1.00m
- **Engine std** 2 x Kräutler turnable saildrives, 30kW electric motors, bronze prop, 70 kWh lithium battery.
- **Engine option (with generator)** 2 x 250kW or 135kW e-motors on shafts and 100kW Volvo D3 diesel generator.
- **Solar generation** 9,000-watt peak capacity
- **Fuel** 300 l (diesel for generator)
- **Water** 500 l (option 1,000)
- **Waste-Water** 2 x 500 l
- **CE Certification** A Ocean
- **Cruising speed** 12 knots
- **Top speed** up to 20 knots |
Modeling and Identification of the Hydraulic Servo Drive
Piotr Woś\textsuperscript{1,*}, Ryszard Dindorf\textsuperscript{2}
\textsuperscript{1} Faculty of Mechatronics and Machine Design, Kielce University of Technology, Kielce, 25-314, Poland
\textsuperscript{2} Faculty of Mechatronics and Machine Design, Kielce University of Technology, Kielce, 25-314, Poland
Abstract. This paper discusses the application of dynamic identification with a discrete linear model to assess the dynamics of an electro-hydraulic drive. This evaluation is crucial when designing modern power or positional control systems. Experimental data is used to determine the dynamics of the real system and to estimate the unknown parameters of the object model. The resulting dependencies were interpreted. The paper includes selected results of identification tests of the electro-hydraulic drive system, from which a discrete parametric object model was derived.
1 Introduction
Conventional control systems designed for specific working conditions operate well provided that deviations from these conditions remain small. In practice, controlled objects do not fulfil stationarity conditions because of interference [1]. Therefore, the problem of incomplete information about the object must be taken into account. In hydraulic systems, incomplete information usually means that the exact values of all the parameters and signals needed to determine regulatory quality indicators or other optimisation criteria are unknown. Control structures for objects with incomplete information may include algorithms that identify the system's indeterminateness and change the controllers' settings, making it possible to adjust the control system to variable conditions. Despite considerable advancement in the on-line design of controller algorithms, industry is dominated by periodically tuned classic PID controllers, which are generally considered reliable and for which many tuning methods exist [2]. However, for multi-dimensional, high-order control systems, a methodology that tunes the controller in real time while preserving closed-loop stability is more modern, efficient and reliable. Standardising such algorithms requires additional identification methods for continuous systems that can cooperate with on-line tuned controllers. It is therefore essential to build appropriate parametric identification algorithms as well as appropriate state observation algorithms. Identification is an approach to building a model of a dynamic system that analyses the dependencies between its input and output signals without invoking the laws of physics governing those dependencies.
Identification is applied when the rules governing the modelled phenomenon are either unknown or too complicated to be used in constructing a model for control purposes. In such a situation, experimental data is used to determine the dynamics of the real system and to estimate the unknown parameters of the object model. The identification procedure is iterative and encompasses the following stages: planning measurements in order to obtain data; selecting a model structure in terms of its form and order of equations; estimating the model parameters; and verifying the model's correctness using measurement data other than that used for estimating the parameters [3]. Moreover, the identification method should not be sensitive to the nature of the input signals. Obtaining data sufficient for identification is only possible through proper stimulation of the system inputs, and such interaction is not always possible in practice [4]. One approach to control is adaptive control with model identification [5]. The model identification adaptive control (MIAC) system is illustrated in Fig. 1. The model is identified from the input $u$ and output $y$; based on this model, optimal controller parameters are calculated and adjusted.

2 Mathematical identification model
The parametrised model defines precisely the behaviour of the control object under specific conditions, described by present and past inputs and outputs. The algorithms for estimating the model coefficients were developed with the aim of tracking changes in those coefficients. A typical parametric description presents the properties of the control system in the form of a discrete transmittance; the dynamic properties of the control process depend on the values of the transmittance parameters. In the synthesis of control algorithms, the choice of modelling method depends to a great extent on how the suggested model is interpreted physically and how easy it is to identify [6]. Among the many known descriptions, one can distinguish nonparametric models in the form of previously stored samples of object responses, models formulated in state space, and parametric models in the form of an input-output discrete transmittance [7]. Determining a parametric model of the studied object consists of defining the model class and selecting a particular model from it. Defining the model class is limited to determining the mathematical expression, in parametric form, that describes the studied object; the model is then selected from the accepted class by determining the numerical values of its coefficients. As a rule, foreknowledge makes it possible to establish only the model class, while the coefficient values are determined by the selected identification algorithm, which stipulates how the measurement results should be processed in order to determine them. Because the measurement data is discrete in time, the mathematical identification models are discrete and linear with respect to the desired parameters.
They also take into account that the measurement data may contain errors, by introducing random interference into the models. For discrete time, the backward shift operator $q^{-k}$, defined by $q^{-k}x(t) = x(t-k)$, is introduced to shorten the notation; this gives equations analogous to those with the complex argument $z^{-k}$ in the domain of Z-transforms.
In order to identify the dynamic model of the electro-hydraulic servo-drive, a deterministic model with the following structure was applied:
$$A(q^{-1}) y_t = B(q^{-1}) u_{t-d} + \xi_t,$$
where: $y_t$, $u_t$ – output and control signals, respectively, $d$ – discrete delay, $\xi_t$ – object interference.
Model (1) does not require a priori information about the identified object and is therefore relatively simple and insensitive to the structural changes of the control object that frequently occur in industry. The disadvantage of this model is the relatively large number of previously stored samples, which, however, makes it possible to describe objects with complex dynamics. The transmittance model in discrete form (2) has unknown parameter values of the control object, which must be identified:
$$G(z) = \frac{B(z^{-1})}{A(z^{-1})} = \frac{b_1 z^{-1} + b_2 z^{-2} + ... + b_n z^{-n}}{1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_n z^{-n}} z^{-d}$$
where: $a_1, a_2, ..., a_n, b_1, b_2, ..., b_n$ – parameters of the control object, $n$ – model order
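The structure of models (1) and (2) can be made concrete with a short simulation. The sketch below (in Python; the coefficients and step input are purely illustrative, not values identified in this paper) generates the output of a second-order model of this form:

```python
import numpy as np

def simulate_arx(a, b, u, d=1, noise_std=0.0, rng=None):
    """Simulate A(q^-1) y_t = B(q^-1) u_{t-d} + xi_t (model (1)), with
    A(q^-1) = 1 + a1 q^-1 + ... + an q^-n and
    B(q^-1) = b1 q^-1 + ... + bn q^-n as in transmittance (2)."""
    rng = rng or np.random.default_rng(0)
    N = len(u)
    y = np.zeros(N)
    for t in range(N):
        acc = 0.0
        for i, ai in enumerate(a, start=1):   # -a1 y(t-1) - ... - an y(t-n)
            if t - i >= 0:
                acc -= ai * y[t - i]
        for i, bi in enumerate(b, start=1):   # +b1 u(t-d-1) + ... + bn u(t-d-n)
            if t - d - i >= 0:
                acc += bi * u[t - d - i]
        y[t] = acc + noise_std * rng.standard_normal()
    return y

# Illustrative second-order model driven by a unit step
u = np.ones(50)
y = simulate_arx(a=[-1.5, 0.7], b=[0.1, 0.05], u=u, d=1)
# The static gain is B(1)/A(1) = 0.15/0.2 = 0.75, which y approaches
```

Stacking the delayed $y$ and $u$ terms that appear in each row of this recursion is exactly what forms the regression vector used by the estimation algorithms of Section 3.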
3 Algorithm of the model identification
The process of identification is a cognitive process focused on quantitative and qualitative learning about reality. The basis of this process is experimental research, whose accuracy is limited by many factors, among others: the accuracy of the direct measurement of the object variables, the influence of external disturbing factors, the choice of the signal stimulating the object in relation to its identifiability, the properties of the identification algorithm, and the class of the adopted model. Both the identification process and the measurement process consist of obtaining information about the studied object. The basis for determining the coefficients of the identification model is the measurement of the input and output signals conducted directly on the identified object. An on-line identification algorithm was applied to provide parametric identification of the electro-hydraulic drive. A solution to the parameter-estimation task using calculation algorithms in accordance with the block diagram shown in Fig. 2 is suggested herein. The identification process consists of a few stages, the most crucial of which is the numerical calculation of the estimate $\hat{\theta}$.

**Fig. 2.** Block diagram of adjusting the object model: $u_t$ control signal, $y_t$ object output signal, $\bar{y}_t$ output signal of the object model, $\hat{\theta}$ estimate vector of the object model parameters, $\varepsilon_t$ model error, $\xi_t$ object interference.
3.1 Systems with parameters variable over time
Determining the system parameters with a method that takes the whole history of the observation process into account yields an average estimate over the complete registration period. Such an iterative algorithm often fails to follow changes in the parameters of objects loaded with random interference. Moreover, these systems become excessively loaded: as the number of measurements increases, the sizes of matrix $P_t$ and vector $\phi$ grow as well, and consequently the demand on the memory of the control system grows. Adjusting the iterative algorithm to track object parameters that vary over time leads to the method with exponential forgetting. The method, based on the RLS recursive algorithm with WRLS (Weighted Recursive Least Squares) exponential forgetting, consists of introducing into the control equations a forgetting coefficient $\lambda_t$, which decides the size of the algorithm's memory. If changes in the parameter values are tracked on-line, the estimates $\hat{\theta}$ should be determined on the basis of on-line measurements of the input and output variables. Past data has no significant impact on the current object state and should be forgotten. In order to make matrix $P_t$ independent of past data, the weight parameter $\lambda_t$, often called the exponential forgetting coefficient, was introduced. It decreases the influence of past data on the current estimate determined with the covariance matrix $P_t$:
$$P_t = \frac{1}{\lambda_t}\left( P_{t-1} - \frac{P_{t-1} \phi\, \phi^T P_{t-1}}{\lambda_t + \phi^T P_{t-1} \phi} \right), \quad (3)$$
Then, the vector of estimated parameters takes the following form:
$$\hat{\theta}_t = \hat{\theta}_{t-1} + \frac{P_{t-1} \phi}{\lambda_t + \phi^T P_{t-1} \phi} e_t, \quad (4)$$
$$e_t = y_t - \phi^T \hat{\theta}_{t-1}, \quad (5)$$
The time-varying weight coefficient $\lambda_t$ is applied in order to accelerate the convergence of the algorithm and is determined by the equation $\lambda_t = \lambda^0 \lambda_{t-1} + (1 - \lambda^0)$, where $\lambda_t$ tends exponentially to 1 as the number of measurement samples increases.
Assuming $\lambda^0 = \lambda_0 = 1$, the weight coefficient takes the value $\lambda = \lambda_t = 1$. This value is applied when tracking slowly varying parameters of the control system. For $\lambda = 1$, matrix $P_t$ depends at every moment $t$ on all previous data. Because the parameters are prone to change during tuning, there is no linear dependency between the elements of vector $\hat{\theta}_t$. Introducing this parameter results in an exponential weighting of the error signal, meaning that older samples of the estimation error enter the total error measure $e_t$ with correspondingly lower weight. Lower values $\lambda < 1$ can be expected to update the model more quickly, which benefits faster adaptation, but they simultaneously lead to greater temporary variations in the values of the estimated coefficients, which is undesirable. The selection of the forgetting coefficient $\lambda$ is therefore a carefully considered compromise: its value is chosen according to the rate of change of the identified object's parameters and the level of interference in the measurement data.
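Equations (3)-(5) translate directly into a few lines of code. The minimal Python sketch below uses the standard WRLS gain with $\lambda$ in the denominator, consistent with eq (4); the first-order test system and its coefficients are illustrative, not from the paper:

```python
import numpy as np

def wrls_step(theta, P, phi, y, lam=0.98):
    """One step of Weighted RLS with exponential forgetting, eqs (3)-(5)."""
    e = y - phi @ theta                                  # prediction error, eq (5)
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                                  # gain vector
    theta = theta + K * e                                # parameter update, eq (4)
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam   # covariance update, eq (3)
    return theta, P, e

# Illustrative test system: y_t = 0.8 y_{t-1} + 0.5 u_{t-1} + noise
rng = np.random.default_rng(1)
u = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

theta, P = np.zeros(2), 1000.0 * np.eye(2)
for t in range(1, 400):
    phi = np.array([y[t - 1], u[t - 1]])                 # regression vector
    theta, P, _ = wrls_step(theta, P, phi, y[t], lam=0.98)
# theta should now be close to the true parameters [0.8, 0.5]
```

With $\lambda = 0.98$ the effective memory is roughly $1/(1-\lambda) = 50$ samples, which illustrates the compromise discussed above.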

3.2 Robust estimation
A major challenge for engineers is to build a model identification algorithm for the control object that accelerates the adaptation of the model parameters when the reference signal values or the object properties change. By estimating the model parameters in real time (on-line), a dynamic change of those parameters at the moment random interference $\xi_t$ occurs is possible. A solution to the parameter-estimation task using calculation algorithms based on the RLS adaptive recursive algorithm with exponential forgetting, AWRLS (Adaptive Weighted Recursive Least Squares) [8], is suggested herein. This method consists of continuously adapting the value of $\lambda_t$ during the identification process. Adopting $\lambda = const$, as the conducted experiments demonstrated, is difficult to implement in practice. In order to allow tracking of model parameters with quickly varying values, an attempt was made to improve RLS by adapting the exponential forgetting coefficient $\lambda_t$ so that it accounts for the changes of the input and output signals during object identification. In this case, the exponential coefficient $\lambda_t$ is updated at every moment $t$ and recorded in the following form:
\[
\lambda_i = \left[ 1 + \kappa + \frac{\left( y_i - \overline{\varphi}_i^T \hat{\theta}_{i-1} \right)^2}{\alpha_i} \right]^{-1} \cdot \left[ \frac{\overline{\varphi}_i^T P_{i-1} \overline{\varphi}_i}{1 + \overline{\varphi}_i^T P_{i-1} \overline{\varphi}_i} \right]
\]
where:
\[
\kappa = 1 + \ln \left( 1 + \overline{\varphi}_i^T P_{i-1} \overline{\varphi}_i \right),
\]
\[
\alpha_i = \lambda_{i-1} \left[ \alpha_{i-1} + \frac{\left( y_i - \overline{\varphi}_i^T \hat{\theta}_{i-1} \right)^2}{1 + \overline{\varphi}_i^T P_{i-1} \overline{\varphi}_i} \right],
\]
\[
\nu_i = \lambda_{i-1} (\nu_{i-1} + 1)
\]
where: \(y_i\) – output samples, \(\overline{\varphi}_i\) – regression vector (so that \(\overline{\varphi}_i^T \hat{\theta}_{i-1}\) is the anticipated process output), \(\hat{\theta}_i\) – vector of estimated parameters, \(P_i\) – covariance matrix.
Parameter \(\lambda_i\) in equation (6) is an adaptive weight parameter with a significant influence on the convergence rate of the servo-drive control algorithm. As coefficient \(\lambda_i\) tends to unity, the estimation variance decreases but the dynamic properties of the process deteriorate, because the memory of previous measurement results is prolonged. At the same time, in states determined by a low value of coefficient \(\lambda_i\), the estimation quality deteriorates as the variance of the generated estimate increases. Continuous tracking of the forgetting coefficient provides high process dynamics (low values of \(\lambda_i\)) and low random variation of the generated estimate under weak stimulation (values of \(\lambda_i\) close to unity, see Fig. 5). Figure 4 shows time courses of the parameters \(a_i, b_i\) for model order \(n=3\) of the control object (1), determined using AWRLS while controlling the displacement of the hydraulic actuator under sinusoidal excitation.

**Fig. 5. Change of exponential forgetting factors \(\lambda_i\)**
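The trade-off that the adaptive coefficient \(\lambda_i\) manages can also be sketched with a closely related, widely used variable-forgetting rule, the constant-information rule of Fortescue et al.; this is not the paper's exact update (6)-(9), and the tuning constants below are illustrative:

```python
import numpy as np

def variable_lambda(e, phi, P, sigma0=50.0, lam_min=0.9):
    """Variable forgetting factor in the spirit of AWRLS, using the
    Fortescue constant-information rule (NOT eq (6) itself): a large
    prediction error e shrinks lambda for fast adaptation, a small error
    pushes lambda toward 1 for long memory. sigma0 and lam_min are
    illustrative tuning values."""
    denom = 1.0 + phi @ P @ phi
    lam = 1.0 - (e * e) / (sigma0 * denom)
    return max(lam_min, min(1.0, lam))

P = np.eye(2)
phi = np.array([1.0, 0.5])
print(variable_lambda(0.1, phi, P))   # small error: lambda stays near 1
print(variable_lambda(10.0, phi, P))  # large error: clipped to lam_min = 0.9
```

The lower clip `lam_min` plays the same role as the bounded behaviour of \(\lambda_i\) visible in Fig. 5: it prevents the covariance matrix from blowing up during long bursts of large prediction errors.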
### 4 Experimental test stand
The control algorithms for the electro-hydraulic servo drive were tested on a dedicated test stand. Fig. 6 presents a view of the test stand, which consists of an electro-hydraulic servomechanism controlled by a proportional flow valve (2). The loading force is provided by weights installed on the linear guide support (5). The servomechanism control system uses a D/A converter card. Displacement of the linear guide block is measured relative to the body of the test stand with a Novotechnik position transducer (4). The velocity of the linear guide block is calculated by differentiating the piston position of the actuator (1). The system also allows measurement of the technological resistance force with a force sensor (3). The test stand includes a hydraulic power supply \((P_{\text{max}} = 31.5 \text{ MPa})\) with a proportional pressure valve (DBETR-10/315G24K4M-381), as well as a supervisory computer control system running Matlab/Simulink (xPC Target). The microcomputer is fitted with PCI-type D/A and A/D converter cards (DAS1602/16, Measurement Computing Corporation); together with the position and force transducers, these cards form the measurement system [9].
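Since the stand obtains velocity by differentiating the sampled piston position, noise amplification is a practical concern. One common remedy, sketched below (the filter constant and sampling rate are assumptions for illustration, not values from the paper), is a low-pass filtered backward difference:

```python
def filtered_velocity(positions, dt, alpha=0.2):
    """Estimate velocity from sampled piston positions by backward
    differences smoothed with a first-order low-pass filter.
    alpha in (0, 1]: smaller values filter harder (alpha is illustrative)."""
    v_filt, out = 0.0, []
    prev = positions[0]
    for x in positions[1:]:
        v_raw = (x - prev) / dt             # raw backward difference
        v_filt += alpha * (v_raw - v_filt)  # exponential smoothing
        out.append(v_filt)
        prev = x
    return out

# A 0.1 m/s ramp sampled every 1 ms: the estimate settles near 0.1 m/s
dt = 0.001
positions = [0.1 * i * dt for i in range(200)]
velocities = filtered_velocity(positions, dt)
```

The filter trades a small phase lag for strongly attenuated quantisation noise, a compromise analogous to the choice of the forgetting coefficient in Section 3.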

### 5 Model verification
During the research, the time courses of the residuals \(\varepsilon\) of the model fit to the measured object response were analysed. Owing to the random nature of the residuals, statistical tests based on their autocorrelation function \(R_x(\tau)\) were applied. The autocorrelation function of a signal \(x(t)\) measures the dependence between the values of the signal at two instants separated by \(\tau\). It is defined, together with its normalized sample estimate computed from \(N\) data points, by equation (7):
\[
R_x(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t)\,x(t+\tau)\,dt,
\qquad
\hat{R}_x(\tau) = \frac{\sum_{t=1}^{N-\tau} x(t)\,x(t+\tau)}{\sum_{t=1}^{N} x(t)^2}
\]
(7)
where \( \tau \) is the displacement (lag) over time.
If the model is properly matched, the residuals \( \varepsilon \) should have a symmetric distribution and be independent of the input signals. They should have the properties of white noise, i.e. a stationary stochastic signal with zero mean value. Such a signal carries no information about the object's reaction to the excitation signal; the complete information about the object behaviour is then reproduced by the model and the process output. Figures 7 and 8 show the courses of the absolute residuals \( |\varepsilon| \) and of the autocorrelation function \( R_x(\tau) \) for model (1) with orders \( n=2 \) and \( n=3 \).
**Fig. 7.** Courses of the absolute residuals \( |\varepsilon| \) and the autocorrelation function \( R_x(\tau) \) for the \( n=2 \) model
**Fig. 8.** Courses of the absolute residuals \( |\varepsilon| \) and the autocorrelation function \( R_x(\tau) \) for the \( n=3 \) model
If the model accuracy is judged by the autocorrelation function of the residuals \( R_x(\tau) \), then for a properly identified model \( R_x(\tau) \) tends asymptotically to zero for every nonzero lag. From the courses presented above (see Fig. 7 and Fig. 8) it can be concluded that the best match was obtained for the \( n=3 \) model. The \( n=2 \) model is not properly matched: its residuals depend on the input, and the autocorrelation function \( R_x(\tau) \) does not tend asymptotically to zero.
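The whiteness test in equation (7) is straightforward to compute numerically. The sketch below is illustrative rather than taken from this work: the function names and the 95% confidence bound \(\pm 1.96/\sqrt{N}\) for the sample autocorrelation of white noise are assumptions chosen for the example.

```python
import numpy as np

def residual_autocorr(e, max_lag):
    """Normalized sample autocorrelation of residuals, as in eq. (7):
    R(tau) = sum_{t=1}^{N-tau} e(t)e(t+tau) / sum_{t=1}^{N} e(t)^2."""
    e = np.asarray(e, dtype=float)
    denom = np.sum(e * e)
    return np.array([np.dot(e[:len(e) - tau], e[tau:]) / denom
                     for tau in range(max_lag + 1)])

def is_white(e, max_lag=20):
    """Crude whiteness check: for white noise of length N the sample
    autocorrelations at nonzero lags should stay within +/- 1.96/sqrt(N)."""
    R = residual_autocorr(e, max_lag)
    bound = 1.96 / np.sqrt(len(e))
    return bool(np.all(np.abs(R[1:]) < bound))

# Usage: white residuals (good fit) vs. strongly correlated residuals (bad fit)
rng = np.random.default_rng(1)
white = rng.normal(size=5000)
ar = np.zeros(5000)
for k in range(1, 5000):        # e(k) = 0.9 e(k-1) + w(k), clearly non-white
    ar[k] = 0.9 * ar[k - 1] + rng.normal()
```

For the correlated sequence the autocorrelation at lag 1 stays near 0.9 instead of vanishing, so the whiteness check fails, mirroring the behaviour of the \( n=2 \) model above.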
### 6 Summary
An important issue in selecting the adaptive controller parameters is proper identification of the model of the control object. Both the model parameters and the controller parameters are selected in real time (*on-line*). Proper identification of the model parameters determines the quality of the electro-hydraulic servo-drive control, so the selection of appropriate tools for verifying the model fit to the measured object response is crucial.
### References
1. B. Yao, IEEE/ASME Trans. on Mech., vol. 5, pp. 79-91 (2002).
2. M. Ahmadvazhad, M. Soltanpour, M.M.E., vol. 9, no. 7 (2015).
3. M. Jelali, A. Kroll, *Hydraulic Servo Systems - Modelling, Identification and Control*, Springer (2003).
4. P. Wos, R. Dindorf, A.J. of C., no. 15, pp. 1065-1080 (2013).
5. K. Takahashi, M. Inoue, S. Ikco, *Proc. Int. Conf. Fluid Power Trans. and Control*, pp. 68-87 (1985).
6. T. Kheweree, S. Kuntanapreeda, A.J. of Con., vol. 17, pp. 855-867 (2015).
7. M. F. Rahmat, Md. Rozali, Amer. J. of App. Scie., no. 7, pp. 1100-1108 (2010).
8. V. Bobál, J. Böhm, J. Fessl, J. Machácek, *Digital Self-tuning Controllers: Algorithms, Implementation and Applications*, Tech. & Eng. (2005).
9. P. Wos, R. Dindorf, A.J. of C., no. 15, pp. 1065-1080 (2013).
1. The Subsidiary Body for Scientific and Technological Advice (SBSTA), at its twenty-sixth session, invited relevant organizations to submit to the secretariat, by 15 May 2007, information on existing and emerging assessment methodologies and tools; and views on lessons learned from their application; opportunities, gaps, needs, constraints and barriers; possible ways to develop and better disseminate methods and tools; and training opportunities. It requested the secretariat to compile these submissions into a miscellaneous document to be made available to the SBSTA by its twenty-seventh session. (FCCC/SBSTA/2006/11, para. 33).
2. The secretariat has received seven such submissions. In accordance with the procedure for miscellaneous documents, the five submissions received from intergovernmental organizations are attached and reproduced* in the language in which they were received and without formal editing. In line with established practice, the two submissions from accredited non-governmental organizations have been posted on the UNFCCC website at <http://unfccc.int/3689.php>.
* These submissions have been electronically imported in order to make them available on electronic systems, including the World Wide Web. The secretariat has made every effort to ensure the correct reproduction of the texts as submitted.
1. SECRETARIAT OF THE CONVENTION ON BIOLOGICAL DIVERSITY (Submission received 31 May 2007)
2. FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS (Submission received 6 June 2007)
3. SECRETARIAT OF THE UNITED NATIONS INTERNATIONAL STRATEGY FOR DISASTER REDUCTION (Submission received 31 May 2007)
4. WORLD METEOROLOGICAL ORGANIZATION (Submission received 20 July 2007)
5. SECRETARIAT OF THE PACIFIC REGIONAL ENVIRONMENT PROGRAMME (Submission received 11 June 2007)
1. The Subsidiary Body for Scientific and Technological Advice (SBSTA) of the United Nations Framework Convention on Climate Change (UNFCCC) invited Parties and relevant organizations to submit to the UNFCCC secretariat, by 31 May 2007, information on existing and emerging assessment methodologies and tools; and views on lessons learned from their application; opportunities, gaps, needs, constraints and barriers; possible ways to develop and better disseminate methods and tools; and training opportunities.
2. Since climate-change adaptation is integrated into all of the programmes of work of the Convention on Biological Diversity (CBD), with the exception of the programme of work on technology transfer, the Executive Secretary of the CBD has, in response to the request from SBSTA, prepared this document on assessment methodologies and tools for climate-change adaptation planning.
3. This document specifically addresses the following methodologies and tools: the ecosystem approach (section I), impact assessments (section II), risk-management approaches (section III), value and valuation techniques (section IV), and monitoring and evaluation tools which link biodiversity and climate change (section V). Additional resources available from the CBD are listed at the end of the document (section VI).
4. Information for inclusion in this document was derived from national reports submitted by Parties to the CBD, the reports of the Ad Hoc Technical Expert Group on biodiversity and climate change, and a review of the implementation of relevant programmes of work on thematic areas and cross-cutting issues under the Convention on Biological Diversity.
I. THE ECOSYSTEM APPROACH
5. The ecosystem approach (also known as integrated land and water management, landscape management, etc.) is a strategy for the integrated management of land, water and living resources that promotes the conservation and sustainable use of biodiversity in an equitable way.
6. The main principles of the ecosystem approach focus on capacity building; participation; information gathering and dissemination, research; comprehensive monitoring and evaluation; and governance.
7. As such, advantages of the ecosystem approach include: stakeholder participation; consideration of scientific, technical and traditional knowledge; and the achievement of balanced ecological, economic and social costs and benefits.
8. Since the ecosystem approach takes a broad perspective to management, it is an ideal methodology through which the multiple impacts from climate change, including on biodiversity, can be reflected in comprehensive and responsive adaptation planning.
9. The implementation of the ecosystem approach will be the subject of an in-depth review at the twelfth meeting of the Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) of the Convention on Biological Diversity, in July 2007, and at the ninth meeting of the Conference of the Parties, in May 2008. Preparatory work for this review would indicate that opportunities to strengthen ongoing efforts include *inter alia*:
- Developing standards for application of the ecosystem approach;
- Simplified and improved marketing approaches to appeal to a wider audience; and
- Capacity-building at all levels by developing a strategic approach through enhanced partnerships.
10. In response to the in-depth review of implementation, four key obstacles to the application of the ecosystem approach were identified:
- The need to simplify the description of ecosystem approach and make it more attractive to, and comprehensible for, key target audiences;
- The need to improve the “marketing” of the ecosystem approach, chiefly by promoting it as a planning tool to achieve enhanced economic benefits;
- The need to enhance the availability of tools to implement the ecosystem approach; and
- The need to ensure that the application of the ecosystem approach goes beyond the biodiversity sector to all sectors whose actions impact on the delivery of ecosystem goods and services (positive and negative) across different levels (e.g. internationally, nationally and locally).
II. IMPACT ASSESSMENTS
11. Environmental impact assessment and strategic environmental assessments can be used to evaluate the environmental and socio-economic implications of various projects including climate change adaptation plans and the impacts of adaptation activities on biodiversity.
12. Environmental impact assessments typically consist of seven steps:
(i) Developing the project concept;
(ii) Screening the project concept to establish whether or not an impact assessment is needed;
(iii) Scoping project activities to identify potentially significant impacts;
(iv) Information gathering to establish baselines;
(v) Prediction of impacts;
(vi) Mitigation measures and management plans to minimize negative impacts; and
(vii) Monitoring and supervision.
13. Article 14 of the Convention on Biological Diversity explicitly encourages the use of environmental impact assessments in order to avoid or minimize adverse effects of a diverse set of activities on biodiversity and to allow public participation in such procedures.
14. In order to support Parties in the application of impact assessments, the annex to decision VI/7 of the Conference of the Parties to the CBD provides guidelines for incorporating biodiversity-related issues into environmental impact assessment legislation.
15. The Secretariat of the CBD also published “Voluntary Guidelines on Biodiversity-Inclusive Impact Assessments” (CBD Technical Series No. 26), which includes case studies, background
material and practical examples of the application of impact assessments. The voluntary guidelines explore both direct drivers of change through impact assessments and the review of indirect drivers of change carried out within the framework of the Millennium Ecosystem Assessment.
16. Lessons learned from the application of environmental impact assessments reveal the importance of public participation and the need to ensure that, to the extent possible, environmental impact assessments are applied as early in the project cycle as is feasible.
17. Strategic environmental assessments differ slightly from environmental impact assessments in so far as they can be applied to broader policy decisions regarding climate change adaptation planning. They serve to facilitate the integration of environmental considerations within policy planning and to inform decision-makers of uncertainty and consistency in objectives.
18. The process of completing strategic environmental assessments is very similar to an environmental impact assessment except that strategic environmental assessments:
- Place increased importance on baseline surveys;
- Conduct an options analysis; and
- Present information specifically for use in policy decisions.
19. Lessons learned from the application of strategic environmental assessments reveal the value-added from public participation and quality control through, for example, an audit committee.
III. RISK-MANAGEMENT APPROACH
20. A number of planning tools have been developed using risk-management approaches including:
- The Ramsar Wetland Risk Assessment Framework\(^1\);
- Various disaster risk indices\(^2\),\(^3\),\(^4\); and
- A number of early warning systems\(^5\).
21. Based on such diverse experiences, the Ad Hoc Technical Expert Group on Biodiversity and Climate Change, convened under the Convention on Biological Diversity with financial support from the Government of Finland, created a framework for adaptation integrating biodiversity concerns consistent with risk management approaches as presented in figure 1 below.
---
\(^1\) [http://www.ramsar.org/key_guide_risk_e.htm](http://www.ramsar.org/key_guide_risk_e.htm)
\(^2\) [http://www.undp.org/bcpr/disred/english/publications/rdr.htm](http://www.undp.org/bcpr/disred/english/publications/rdr.htm)
\(^3\) [http://www.iadb.org/exr/disaster/rmi.cfm?language=en&parid=5](http://www.iadb.org/exr/disaster/rmi.cfm?language=en&parid=5)
\(^4\) [http://siteresources.worldbank.org/INTDISMGMT/Resources/9environment.pdf](http://siteresources.worldbank.org/INTDISMGMT/Resources/9environment.pdf)
\(^5\) [http://www.fao.org/GIEWS/english/index.htm](http://www.fao.org/GIEWS/english/index.htm)
Figure 1: Applying a Risk Management Approach to Adaptation Planning (flowchart reproduced as a list)

- Identification of problem and its scope
- Inclusiveness: partners and stakeholders; identification and participation
- Current adaptation knowledge base:
  - Status and trends (existing data and traditional knowledge): biodiversity; climate change, variability and extremes; adaptive and coping capacity and resilience; behaviour/practices/technologies
  - Impacts on biodiversity
  - Vulnerable systems (ecosystems, species)
- Adaptation action planning:
  - Identification and prioritization of adaptation options
  - Development of policies and measures
  - Synergies between objectives of conventions
  - Integration into national sustainable development plans
- Implementation and monitoring:
  - Collection of new/additional long-term data on climate system and biodiversity
  - Monitoring outcomes of adaptation action plans/collation of methods of implementation
- New initiatives, outreach
- Research, education, training and public awareness
- Review and advice
IV. VALUE AND VALUATION TECHNIQUES
22. Environmental assets, including biodiversity resources, have both use and non-use values. Use values include products for consumption (such as food, clean water and biomass), outputs that are not consumed (such as recreational and aesthetic values), functional benefits based on the provision of ecosystem services, and options for future use. Non-use values include existence and bequest values, including the value derived from knowledge of continued existence.
23. The valuation of biodiversity resources in adaptation planning can be used to assign a monetary value to the services provided by biodiversity including, for example, coastal protection (e.g. mangroves and coral reefs) and the provision of alternative livelihoods (e.g. non-timber forest products and the harvesting of medicinal plants). As such, valuation techniques allow adaptation project planners to fully consider the current and future economic, environmental and social impacts of change.
24. Techniques for the valuation of biodiversity services are numerous and include willingness to pay and willingness to accept compensation. These techniques are based on observed or hypothetical behaviour including:
- Market prices;
- Contingent valuation;
- Choice experiment tests;
- Avoidance cost method; and
- Opportunity cost method.
25. The Secretariat of the CBD recently published “An Exploration of Tools and Methodologies for Valuation of Biodiversity and Biodiversity Resources and Functions” (CBD Technical Series No. 28), which provides additional information on tools and practical examples.
26. The publication examines valuation and decision-making in economic and non-economic frameworks and concludes that while valuation techniques are increasingly being integrated into decision making, there remains a need to:
- Ensure broad participation in valuation, especially when social impacts are significant or traditional knowledge is used to capture values;
- Build capacity in valuation studies including the collection of primary research; and
- Establish best practices or commonly accepted guidelines for valuation.
27. It also explores options for strengthened international collaboration for valuation including for: capacity-building, fostering research, and building appropriate institutional enabling environments.
V. MONITORING AND EVALUATION TOOLS
28. Ongoing monitoring and evaluation is critical for: (i) the evaluation of the vulnerability of biodiversity and ecosystems to the impacts of climate change; and (ii) the assessment of the effectiveness of climate change adaptation plans.
29. Monitoring tools can be employed in adaptation planning to assess either the direct physical impacts of climate change (e.g. sea level rise, changes in precipitation, changes in temperature) or the results of these impacts on ecosystems, people and livelihoods.
30. Examples of some tools and methods are presented in table 1 below. The tools and methods presented below do not represent all possibilities; rather they provide examples of some of the more commonly implemented tools and methods as identified through research conducted by the Secretariat of the Convention on Biological Diversity.
**Table 1. Examples of tools and methods to assess vulnerability**
| Impacts of climate change | Tools and methods: physical processes | Tools and methods: vulnerability |
|---------------------------|---------------------------------------|----------------------------------|
| Sea-level rise | Sea level Fine Resolution Acoustic Measuring Equipment (SEAFRAME)⁶; Continuous Global Positioning System⁸ | Coastal Vulnerability Index (CVI)⁷ |
| Increased air/ocean temperatures | Ocean monitoring (e.g. Global Ocean Data Assimilation System⁹, National Oceanographic Data Center¹⁰); meteorological stations (e.g. National Climate Data Center¹², Climate Anomaly Monitoring System¹³) | Coral reef monitoring protocols (e.g. reef resilience toolkits¹¹); glacial lake outburst vulnerability assessment |
| Changing precipitation regimes | Meteorological stations (e.g. Global Precipitation Measurement¹⁴); satellite monitoring (e.g. International Satellite Land Surface Climatology Project¹⁵); Palmer Drought Severity Index¹⁶ | Fire risk assessment; drought vulnerability assessment; Global Information and Early Warning System¹⁷ |
| Increased frequency of extreme events | Global hazards/extremes monitoring (e.g. Tropical Atmosphere Ocean Project¹⁸) | Household vulnerability assessments; Disaster Risk Index¹⁹ |
---
⁶ [http://www.icsm.gov.au/icsm/tides/SP9/PDF/IOCVIII_acoustic_errors.pdf](http://www.icsm.gov.au/icsm/tides/SP9/PDF/IOCVIII_acoustic_errors.pdf)
⁷ [http://cdiac.ornl.gov/epubs/ndp/ndp043c/sec9.htm](http://cdiac.ornl.gov/epubs/ndp/ndp043c/sec9.htm)
⁸ [http://www.bom.gov.au/pacificsealevel/cgps/cgps_fact_sheet.pdf](http://www.bom.gov.au/pacificsealevel/cgps/cgps_fact_sheet.pdf)
⁹ [http://www.cpc.ncep.noaa.gov/products/GODAS/](http://www.cpc.ncep.noaa.gov/products/GODAS/)
¹⁰ [http://www.nddc.noaa.gov/](http://www.nddc.noaa.gov/)
¹¹ The Nature Conservancy and Partners R2- Reef resilience: building resilience into coral reef conservation; additional tools for managers: Volume 2.0. CD ROM Toolkit, 2004.
¹² [http://www.ncdc.noaa.gov/oa/ncdc.html](http://www.ncdc.noaa.gov/oa/ncdc.html)
¹³ [http://www.cpc.ncep.noaa.gov/products/global_precip/html/wpage_cams_opi.shtml](http://www.cpc.ncep.noaa.gov/products/global_precip/html/wpage_cams_opi.shtml)
¹⁴ [http://gpm.gsfc.nasa.gov/](http://gpm.gsfc.nasa.gov/)
¹⁵ [http://www.gewex.org/islscp.html](http://www.gewex.org/islscp.html)
¹⁶ [http://www.drought.noaa.gov/palmer.html](http://www.drought.noaa.gov/palmer.html)
¹⁷ [http://www.fao.org/giews/english/index.htm](http://www.fao.org/giews/english/index.htm)
¹⁸ [http://www.pmel.noaa.gov/tao/](http://www.pmel.noaa.gov/tao/)
¹⁹ [http://gridca.grid.unep.ch/undp/](http://gridca.grid.unep.ch/undp/)
31. Lessons learned from the application of monitoring and assessment tools reveal the need for:
- A flexible framework to facilitate adaptive management and learning by doing;
- The pairing of technical, human and institutional capacity building;
- Links between technical experts, decision-makers, and project managers; and
- The inclusion of traditional and local knowledge in monitoring and evaluation.
VI. ADDITIONAL RESOURCES
Ecosystem Approach Sourcebook: [http://www.cbd.int/ecosystem/sourcebook](http://www.cbd.int/ecosystem/sourcebook)
Technical Series No. 10 – Interlinkages Between Biological Diversity and Climate Change: [http://www.cbd.int/doc/publications/cbd-ts-10.pdf](http://www.cbd.int/doc/publications/cbd-ts-10.pdf)
Technical Series No. 25 – Guidance for Promoting Synergy Among Activities Addressing Biological Diversity, Desertification, Land Degradation and Climate Change: [http://www.cbd.int/doc/publications/cbd-ts-25.pdf](http://www.cbd.int/doc/publications/cbd-ts-25.pdf)
Technical Series No. 26 – Voluntary Guidelines on Biodiversity – Inclusive Impact Assessment: [http://www.cbd.int/doc/publications/cbd-ts-26-en.pdf](http://www.cbd.int/doc/publications/cbd-ts-26-en.pdf)
Technical Series No. 28 - An Exploration of Tools and Methodologies for Valuation of Biodiversity and Biodiversity Resources and Functions: [http://www.cbd.int/doc/publications/cbd-ts-28.pdf](http://www.cbd.int/doc/publications/cbd-ts-28.pdf)
Methods and Tools
FAO Contribution to
“The Nairobi Work Programme (NWP) on impacts, vulnerability and adaptation to climate change”
On invitation of SBSTA to submit to the secretariat, by 31 May 2007, information on the relevant programmes, activities and views on the issues listed under item 21 of the Conclusions of the Nairobi work programme on impacts, vulnerability and adaptation to climate change
Context and mandate of FAO to work on methods and tools for climate impact, vulnerability and adaptation assessments
One of the Governing Bodies of FAO, the Committee on Agriculture (COAG), has stressed the need for the Organization to continue to be a neutral and technical forum on the issue of Climate Change and to contribute to the debate, focusing on such issues as data, definitions and methodologies related to agriculture and climate change.
COAG supported the development of an integrated climate change programme based on current activities, within FAO Regular Budget provisions, and consistent with the legal and political framework of the UN Framework Convention on Climate Change (UNFCCC) and the technical work of the IPCC. This programme includes the promotion of practices for climate change mitigation; the adaptation of agricultural systems to climate change; the reduction of emissions from the agricultural sector, insofar as this is carefully considered within the major objective of ensuring food security; the development of practices aimed at increasing the resilience of agricultural production systems to the vagaries of weather and climate change; national and regional observing systems; and data and information collection and dissemination.
The Committee called on FAO to assist Members, in particular developing countries, which are vulnerable to climate change, to enhance their capacities to confront the negative impacts of climate variability and change on agriculture. In 1998, an Interdepartmental Working Group on Climate Change was established and mandated to coordinate FAO’s cross departmental, multidisciplinary work on climate change.
The issues of climate change mitigation and adaptation have been specifically addressed and prioritized as a key area of future work by FAO's governing bodies: the Committee on Agriculture (COAG), the Committee on Food Security (CFS) and the Committee on Forestry (COFO). In the context of FAO's internal reform of 2006/2007, a new division, "Environment, climate change and bioenergy" (NRC), was created, reflecting the importance given to the subject.
NRC, under the Natural Resources Management and Environment Department, plays a central role in coordinating, together with the Interdepartmental Working Group on Climate Change, FAO's climate change related programmes and activities. The main mandate of NRC is to contribute to and promote environmental and natural resources management and conservation in the context of sustainable agriculture, including forestry and fisheries, rural development and food security. NRC provides advisory, formulation, backstopping and evaluation services to FAO's field projects and Headquarters programmes in some 50 countries in Africa, Asia, Latin America, the Caribbean, and Central and Eastern Europe. The main technical orientation of NRC is aimed at:
- promoting and optimizing, within the FAO network, the use of remote sensing, GIS and agrometeorology tools, for the collection, archiving and processing of data on renewable natural resources and food security;
- transferring and integrating the use of remote sensing, GIS and agrometeorology tools into Member Nations' activities, for the specific purposes of:
- early warning, environmental monitoring and rapid assessment of crop growing conditions;
- inventory, monitoring and management of natural resources at various levels: local, national, regional and global;
- integration of various types of data in local or national environment information systems;
- coordinating FAO's remote sensing, agrometeorology, early warning and natural resource monitoring activities, and following and initiating new technological developments.
NRC recently began a process aimed at developing a climate change adaptation strategy and workplan. A central component of this strategy is a screening of FAO's data and information resources to identify those tools that can assist climate change adaptation. That process is not yet complete; the tools and methodologies listed below therefore represent the set of possible options available to FAO, and how FAO applies them will become clearer as the climate change adaptation strategy and workplan develops. A further focus of the strategy concerns how FAO combines these tools and information resources. Whilst data are essential for effective adaptation, it is anticipated that FAO will use its traditional data and information tools in novel and more coordinated ways, not all of which are captured in this submission, in supporting climate change adaptation.
More specifically it is anticipated that FAO will use and combine its existing information resources to establish vulnerability baselines, identify adaptation options, screen those options and monitor the impact of implemented adaptation.
It should be noted that this submission is one of several FAO submissions to SBSTA, and as such it highlights a very specific component of FAO's contribution to climate change adaptation.
**FAO submission to SBSTA**
According to the outline provided by UNFCCC this submission reports on FAO programmes and activities relating to the SBSTA sub-heading “Methods and Tools”, with the objective of contributing to the sub-themes:
(i) “Promoting development and dissemination of methodologies and tools for impact and vulnerability assessments, such as rapid assessments and bottom-up approaches, including as they apply to sustainable development”, and
(ii) “Promoting the development and dissemination of methods and tools for assessment and improvement of adaptation planning, measures and actions, and integration with sustainable development”.
The aim of the activities in this area is to:
1. Apply and develop methodologies and tools for impact, vulnerability and adaptation assessments;
2. Develop methodologies and tools for adaptation planning, measures and actions, and integration with sustainable development;
3. Disseminate existing and emerging methods and tools;
4. Facilitate the sharing of experiences and lessons learned, including those contained in the UNFCCC Compendium on methods and tools to evaluate impacts of, and vulnerability and adaptation to, climate change\(^1\), including the assessment of costs and benefits.
FAO has a credible track-record in collecting, processing and applying information on natural resources, climate and the potential and actual production of food and fibre. In some instances, most notably fisheries, FAO is the exclusive source of this data. Much of this data will be useful, indeed essential, in the formulation of adaptation baselines and effective adaptation strategies. Information on its own, however, is insufficient to ensure effective adaptation. Only by combining the available data and information tools with FAO’s long-standing, country and region-specific, experience in technology transfer, information dissemination via extension and development facilitation will the FAO’s information resources prove effective in shaping adaptation processes. FAO has recently articulated its commitment to a “corporate-response” to climate change. Implicit in this approach is an understanding that effective adaptation will require a combination of tools and methodologies with the facilitation, social and institutional expertise that are necessary for their use.
a) **Information on existing and emerging assessment methodologies and tools**
Reducing vulnerability to current climate variability represents an essential step towards reducing vulnerability to climate change. At the same time climate change may require communities and countries to adapt to new threats if they are to survive. Effective adaptation to climate change involves both social and institutional processes aimed at creating the capacity to cope with a wide-range of future climate scenarios, and the multiple stresses that they will impose. Information on the following FAO methods and tools is given: (i) Agro-ecological Zoning; (ii) Climate Impact Assessment; (iii) AQUACROP; (iv) CLIMWAT; (v) Gender Issues; (vi) Global Land Cover Network.
**Agro-ecological Zoning**
The Agro-ecological Zoning (AEZ) methodology and related decision support tools allow the analysis of land productivity, crop intensification, food production and sustainability issues. AEZ methodology and supporting software packages can be applied at global, regional, national and sub-national levels. AEZ uses various databases, models and decision support tools which are described below. The AEZ methodology is useful for assessing land resources, and as such
---
\(^1\) [http://unfccc.int/adaptation/methodologies_for/vulnerability_and_adaptation/items/2674.php](http://unfccc.int/adaptation/methodologies_for/vulnerability_and_adaptation/items/2674.php)
provides a tool for better planning and management and monitoring of these resources. AEZ can be used in various assessment applications, including:
- land resource inventories;
- inventories of land utilization types and production systems, including indigenous systems, and their requirements;
- assessment of the impact of climate change on cropping systems and food production;
- potential yield calculations and estimates of how yield will be affected by climate change;
- land suitability and land productivity evaluations, including forestry and livestock productivity;
- estimations of arable areas, mapping of agro-climatic zones, identifying soil problem areas, identifying and mapping agro-ecological zones, identifying the suitability of land for cropping and pastoral activities, quantitative estimates of potential crop areas;
- land degradation assessments, assessments of carrying capacity and how this will be affected by different climate regimes and land use optimization modelling;
- assessing and mapping flood and drought damages to crops;
- monitoring land resources development.
It is anticipated that the AEZ process will be crucial in identifying agricultural and natural resource baselines, and in monitoring how these baselines are being altered. FAO's AEZ methodology also provides a means of identifying how natural resources and agricultural production are likely to be perturbed under future climate scenarios, and of identifying suitable crops and locations under those scenarios.
**Climate Impact Assessment**
FAO has a long tradition in supporting early warning systems through crop monitoring and forecasting technology based on field data, satellite-based indices and application software. Since 1974, FAO has developed and improved its crop forecasting methodology, and has been supplying updated information on crop conditions mainly in sub-Saharan countries through the Global Information and Early Warning System on Food and Agriculture, and to the national Food Security Information and Early Warning Systems worldwide. Building on these national systems, which are known and used by countries, represents a more effective starting point than trying to launch new, possibly improved but largely untested, analytical tools for climate impact assessment.
FAO has been a leader in the use of new data types (in particular rainfall, crop phenology and remotely sensed data) and specific software tools such as crop specific water balance, data interpolation in time and space and analysis tools. These data and tools are designed to be scale independent, and can monitor patterns of climate variability at global, continental, regional, national, sub-national and farm level. They have been tested and used extensively by countries and are appropriate for vulnerability risk assessments and to define best practices for climate change adaptation.
By improving the use of Early Warning and Information Systems and Disaster Information Management Systems, the short- and long-term impact of (extreme) events on agricultural livelihoods can be assessed, and disaster preparedness and risk mitigation enhanced.
Because of the nature of climate change, effective climate change adaptation will require repeated efforts at various spatial scales to develop methodologies that render agriculture more resilient and responsive. FAO’s climate impact assessment tools are capable of supporting this process. FAO’s track-record and experiences in applying these tools are likely to prove particularly useful in the formulation of adaptation responses. These include:
- Modularity of software and common file formats constitute central issues in the FAO philosophy with regards to the development of climate impact assessment methods and tools.
- In developing and/or applying methodologies, FAO takes an approach that integrates different technical and socio-economic elements, according to location-specific priorities and available resources.
- South-south co-operation is seen as fundamental, and is encouraged by FAO as a means of promoting the transfer of technical capacities and know-how between developing countries.
- Dynamic Farming Optimization (DFO) – the improvement of tactical decision-making at farm level, based on quantitative observation and analysis of local environmental factors – is seen as an essential component of FAO’s climate impact assessment approach.
- Applications of methods and tools should begin at on-farm level and be up-scaled to sub-national / national / regional / global level.
- Field activities should inform and strengthen capacity at all levels – that is from farm-level to national institutions involved in agriculture and natural resource management.
A variety of climate impact assessment tools developed or under development by FAO is described below. Table 1 describes the linkages between application, product, tool, input data, spatial scale and target audience for climatic information.
**Agroclimatic water stress mapping**
In order to provide a global, near real-time warning of current and future agricultural emergencies, the agroclimatic water stress mapping tool identifies, through a calibration matrix, areas where excess or deficit rainfall is likely to produce serious damage to rainfed agriculture or pastures. The risk can be weighted with other critical factors, for instance high population density or high soil degradation.
The tool produces digital water stress maps by comparing actual and average monthly precipitation maps at 0.5° resolution during the periods when agricultural activities are most “sensitive” to water stress. The agricultural areas are the zones where the combination of rainfall, mean temperature and potential evapotranspiration average patterns produces an active growing season. Based on a user-selected future time, the maps identify the regions where the agricultural season will be disrupted by adverse water supply conditions. In addition, this predictive instrument can use seasonal forecast data to produce maps identifying the probability of water deficit or surplus conditions in the coming months.
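In outline, the comparison of actual against average rainfall can be sketched as follows. This is an illustrative reconstruction, not FAO's implementation: the deficit and excess thresholds stand in for the calibration matrix, and plain nested lists stand in for the 0.5° grid.

```python
def water_stress_map(actual_mm, normal_mm, deficit_frac=0.6, excess_frac=1.6):
    # actual_mm / normal_mm: 2-D lists of monthly precipitation (mm) on
    # the same grid.  Thresholds are illustrative placeholders for the
    # calibration matrix described in the text.
    # Returns -1 (deficit), 0 (near normal) or +1 (excess) per cell.
    stress = []
    for act_row, nrm_row in zip(actual_mm, normal_mm):
        row = []
        for act, nrm in zip(act_row, nrm_row):
            ratio = act / nrm if nrm > 0 else 1.0
            row.append(-1 if ratio < deficit_frac else (1 if ratio > excess_frac else 0))
        stress.append(row)
    return stress
```

The resulting integer map could then be weighted by the other risk factors mentioned above, such as population density or soil degradation.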
**AGROMETSHELL**
AgroMetShell (AMS) consolidates several food security early warning software packages that have been developed by FAO. It represents an essential tool for assessing the impact of climatic
conditions on crops, for climatic risk analysis and for regional crop yield forecasting. AMS is a software tool designed to support crop forecasting; the core of the software is formed by crop-specific water balance calculations. Based on rainfall, evapotranspiration and crop data, AMS can calculate if and when a crop experienced water shortage, potentially leading to reduced crop yields.
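As an illustration of the idea, a minimal bucket-type water balance per dekad (10-day period) might look like the sketch below. This is a simplification assumed for illustration, not FAO's actual algorithm; the fixed water-holding capacity and the water satisfaction index formula are simplifying assumptions.

```python
def crop_water_balance(rain, etc, whc=100.0):
    # rain / etc: rainfall and crop water requirement (mm) per dekad.
    # whc: soil water-holding capacity (mm), assumed constant.
    store, met, demand, storage = 0.0, 0.0, 0.0, []
    for r, need in zip(rain, etc):
        store = min(store + r, whc)   # rainfall refills the bucket; excess is lost
        supplied = min(store, need)   # the crop draws what it needs, if available
        store -= supplied
        met += supplied
        demand += need
        storage.append(store)
    # water satisfaction index: % of total crop requirement actually met
    wsi = 100.0 * met / demand if demand else 100.0
    return storage, wsi
```

For example, four dekads of rainfall `[40, 10, 0, 30]` mm against a steady requirement of 25 mm per dekad leave the crop short in the third dekad, with only 75% of its total water requirement met.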
AMS includes several modules, such as:
- **ADDAPIX**. Addapix performs a pixel-by-pixel clustering analysis in order to identify areas on a map, or a set of maps, that exhibit similar patterns of weather change. For instance, areas where the rainy season was late and which suffered drought at the time of flowering, could constitute a cluster. It includes a numerical classification at the pixel level, providing a map of homogeneous areas together with the "profiles" that characterise them. The pixel-by-pixel clustering of a set of images (or two related sets, as is the case with monitoring) provides a means of extracting and transmitting the essential information from large data sets in an easy-to-understand way.
- **CrowPer**. Crop gROWing PERiod determines the growing season characteristics for a specific crop, i.e. the average ("normal") and actual ("current") beginning, peak and end of the growing season(s) for any geographically defined location (points or maps), in a fully automated fashion based on ground data and satellite imagery. All outputs are accompanied by a reliability index and map inputs and outputs are provided in digital format.
- **Crop Suitability** provides an evaluation of crop suitability at short time scales and under future climate change scenarios. First, individual crop suitability ratings are analyzed and then suitability for various cropping patterns is rated using a database of known and potential cropping patterns (rotations). This suitability modeling takes into account individual crop characteristics, input/management levels, soil physical characteristics, hydrologic and climatic conditions, and seasonal variability. Extrapolations of existing cropping system technologies can also be made to delineate suitable areas on a national scale.
- **Crop Yield Forecast** provides yield forecasts of the major food crops at sub-national, national and regional levels that are as accurate and timely as possible. Crop yield forecast procedures combine many kinds of input, such as historical yield statistics, weather indicators, simulated crop indicators, remote-sensing-based vegetation indices, additional information sources and expert knowledge. The components of the crop yield forecast routine include:
- observed meteorological data collection, processing and analysis;
- simulation of agro-meteorological crop growth parameters;
- low spatial resolution (and high temporal resolution) satellite data analysis;
- statistical analysis and regressions.
- **Crop Yield under Climate Change Scenarios**. Climate change has a direct impact on crop yields. However, while coupled global atmospheric and oceanic circulation models (GCMs) are becoming increasingly robust in their efforts to predict patterns of global warming under different scenarios, to date they have not proven suitable for predicting the local changes in the meteorological parameters that determine crop growth and yields (e.g. precipitation, surface solar radiation, humidity and wind speed). These variables are, however, observed by national services and are thus available for national investigations. The Crop Yield under Climate Change Scenarios tool bridges the gap between GCM results on the one hand and crop-yield impacts on the other by using FAO’s crop-specific soil water balance model and the Stochastic Weather Generator (SWG) fitted to observed meteorological data and crop yield statistics. As such, it allows investigation of the sensitivity of different crops in various regions to a broad range of future climate scenarios.
- **Crop Yield Trend Analysis**. This tool analyses trends in crop yield at the national and/or local level. This is essential in determining the patterns of inter-annual and intra-seasonal variability and probability of extreme weather events. It also ensures that the time period used for the calibration of crop forecasting methods is devoid of any significant trends that can invalidate the results.
- **Extreme Weather Events Risk Analysis**. Based on historical climate data, this tool analyses the daily maximum and minimum temperature and rainfall data in order to derive climate change indices that provide insight into extreme events:
- percentile-based indices: sample the extreme end of a reference period distribution (e.g. 10th or 90th percentile of min. and max. temperature);
- absolute indices: represent maximum or minimum values within a season (e.g. maximum 5 day rainfall);
- threshold indices: number of days on which temperature/rainfall falls above or below a fixed threshold (e.g. frost days, days with rainfall > 10mm);
- duration indices: define periods of excessive warmth, cold, wetness or dryness (e.g. heat wave duration, growing season length, number of consecutive dry days);
- other indices: diurnal or inter-annual temperature range, intensity of daily rainfall.
- **Stochastic Weather Generator**. The stochastic weather generator (SWG) simulates possible “future” weather scenarios using the most relevant weather variables from existing daily or monthly records. Daily values of maximum/minimum air temperature, precipitation and wind speed can be generated by random processes whose parameters are estimated from either daily data or monthly means. Synthetic values of solar radiation, vapor pressure deficit and reference evapotranspiration are produced by physically based relationships. For precipitation, the routines can generate long-term time series using parameters from existing daily or monthly data, including both rainfall and snowfall amounts.
- **Weather-Based Yield Index for Crop Insurance**. Extreme weather hazards such as droughts and floods lead to severe income losses for rural people, especially farmers and the poor. Given their limited ability to offset these losses, many rural people become food insecure and suffer extreme hardship in disaster years. Small, localized droughts can often be coped with by transporting food supplies from districts with excess production and by drawing on government budget reserves. In the case of a severe regional drought, however, this reallocation of resources may not be manageable, and it would be appropriate to use weather-based maize yield indices as the basis for insurance. A weather-based crop yield index is developed by evaluating historical weather data and determining the relationship between rainfall and maize yields. If there is a strong correlation between the two, the index can be used to manage weather risk. AgroMetShell software is used to derive an effective weather-based maize yield index that can be used for crop insurance purposes, to monitor crop performance and to produce real-time pixel-based maize yield index maps covering the whole country at a resolution
of 0.05° latitude and longitude (approximately 5 km). First estimates of the index can be provided at planting time and updated in real time throughout the season.
- **WINDISP** is a software package for the display and analysis of satellite images, digital maps and associated databases used for crop forecasting. The tool allows sophisticated analysis at the pixel level.
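Two of the module families above lend themselves to compact illustration: the extreme-event indices (threshold and duration families) and the rainfall-occurrence component of a stochastic weather generator. The sketch below is a minimal assumed implementation, not FAO's code; the 10 mm threshold, the index names and the Markov transition probabilities are placeholders.

```python
import random

def rainfall_indices(daily_mm, wet_threshold=10.0):
    # Threshold index: days with rainfall above `wet_threshold` (mm).
    # Duration index: longest run of consecutive dry days.
    heavy_days = sum(1 for r in daily_mm if r > wet_threshold)
    longest_dry = run = 0
    for r in daily_mm:
        run = run + 1 if r == 0 else 0
        longest_dry = max(longest_dry, run)
    return {"heavy_rain_days": heavy_days, "max_consecutive_dry_days": longest_dry}

def generate_rain_days(p_wet_given_dry, p_wet_given_wet, n_days, seed=42):
    # First-order two-state Markov chain for rainfall occurrence, the
    # classic building block of a stochastic weather generator.  In
    # practice the transition probabilities are estimated from the
    # station record; here they are supplied by the caller.
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(n_days):
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        series.append(wet)
    return series
```

For example, `rainfall_indices([0, 0, 12, 0, 0, 0, 25, 5])` reports two heavy-rain days and a three-day dry spell, while `generate_rain_days(0.2, 0.7, 365)` simulates one year of wet/dry days in which wet spells tend to persist.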
**Dynamic Farming Optimization**
With subsistence agriculture expanding further into marginal areas, and with at least some modernisation taking place, subsistence farmers face the problem of further degrading their environment and of increasing variability in their production. There is a need to promote sustainable farming systems at smallholder level and to ensure improved food security and income for rural communities, especially in areas suffering from large inter- and intra-seasonal variations in climatic conditions. The Dynamic Farming Optimization (DFO) approach aims to improve cropping strategies specifically tailored to the changing local environment of subsistence farmers, making better use of climate resources, notably rainfall and solar radiation, while at the same time reducing the strain on the environment, notably on soils. DFO represents a set of techniques for optimising farming practices as a function of current environmental conditions, in particular to capture the greatest possible benefit from unusually favourable, and limit the damage from unfavourable, climatic conditions (rainfall, temperature, radiation, etc.). The purpose of DFO is to help farmers stabilise their production and income through advice based on local farming practices, historical weather data (“risk assessment”), actual current-season weather and future climate conditions (“dynamic farming optimization”).
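Tactical rules of the kind DFO supports can be very simple. The sketch below shows a hypothetical rainfall-onset planting rule; the dekadal thresholds are invented for illustration and are not an FAO recommendation.

```python
def planting_dekad(dekadal_rain, onset_mm=25.0, confirm_mm=20.0):
    # Recommend planting in the first dekad (10-day period) receiving at
    # least `onset_mm` of rain that is followed by a dekad with at least
    # `confirm_mm`, to avoid sowing on an isolated shower.  Threshold
    # values are hypothetical placeholders.
    for i in range(len(dekadal_rain) - 1):
        if dekadal_rain[i] >= onset_mm and dekadal_rain[i + 1] >= confirm_mm:
            return i + 1          # 1-based dekad number
    return None                   # no safe onset found
```

With rainfall `[5, 30, 10, 40, 35]`, the rule skips the second dekad (the onset is not confirmed) and recommends planting in the fourth.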
**FAOClim**
FAO manages a major worldwide database of agro-climatic variables covering more than 32,000 stations and focusing on monthly averages and historical time series, which are essential tools for variability analyses and risk studies. The database management system (FAOClim-Net), linked to real-time daily meteorological data flow, allows users to browse and retrieve basic data. It is proposed that FAOClim provides a crucial resource in understanding how climate is changing and in establishing the baselines from which climate is being perturbed. Without historical baselines and an understanding of the magnitude of perturbations, it is very difficult to mobilise appropriate adaptation.
**New_LocClim**
New_LocClim (Local Climate Estimator) software can estimate climatic conditions at locations for which no observations are available and provides nine different spatial interpolation methods (IDW, kriging, Shepard, thin-plate splines, etc.). It allows for an extensive investigation of interpolation errors and the influence of different settings on the results. Furthermore, statistical analysis of the interpolated spatial fields is provided and detailed analysis for single geographic points can be prepared. New_LocClim aims at the preparation and investigation of climate maps, including the possibility for users to interpolate their own data and to prepare maps (grids) at any spatial resolution, and to determine crop growing season characteristics.
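Of the interpolation methods listed, inverse-distance weighting (IDW) is the simplest to illustrate. The following is a generic IDW sketch, not New_LocClim's implementation; station coordinates are treated as planar for simplicity.

```python
def idw_estimate(stations, target, power=2.0):
    # stations: list of ((x, y), value) pairs; target: (x, y).
    # Each station's value is weighted by the inverse of its distance
    # to the target, raised to `power`.
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value          # target coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den
```

A target midway between two stations reporting 10 and 20 receives the average, 15; as the target approaches a station, the estimate converges to that station's value.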
The preparation of climate maps at any spatial resolution allows users to investigate climate at various levels of detail, from point to region. Based on the FAOClim database, New_LocClim can determine the average growing season as defined by the FAO Agro-Ecological Zones project, that is, the period during the year when precipitation exceeds half the potential evapotranspiration. The tool allows this definition to be changed by altering the ratio between
precipitation and potential evapotranspiration. Furthermore, it distinguishes between moist and humid growing seasons.
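The growing-season rule described above is straightforward to express. This sketch applies the default precipitation-to-PET ratio of one half to monthly data, with the ratio adjustable exactly as the text describes; it is an illustration of the rule, not New_LocClim's code.

```python
def growing_season_months(precip, pet, ratio=0.5):
    # precip / pet: monthly precipitation and potential
    # evapotranspiration (mm), January first.  A month belongs to the
    # growing season when precipitation exceeds `ratio` times PET.
    return [m for m, (p, e) in enumerate(zip(precip, pet), start=1)
            if p > ratio * e]
```

With a flat PET of 100 mm/month and precipitation `[10, 40, 90, 120, 60, 20]`, months 3 to 5 exceed half of PET and form the growing season.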
**Rainfall Estimate with Gauge Analysis**
The objective of this activity is to develop a method to estimate rainfall amount over a day or a 10-day period, particularly for certain regions where the coverage of the weather stations is scarce. The algorithm uses the data from the weather stations to calibrate the satellite estimation. The data taken from the weather stations provide accurate cumulative rainfall measurements, and are assumed to be the true rainfall near each station. The method is designed to use data from any weather station network: it can be a local weather station network or the WMO SYNOP messages distributed via WMO Global Telecommunication System (GTS).
The rainfall estimate routine runs at continental / regional / national level. The tool can be also run by regional / national meteorological centers so that they can use local rainfall data and specific meteorological models. Once operational, the routine to estimate the rainfall amount over Africa will be applied to the Indian Ocean area as well. The input data are used to provide rainfall forecast for the coming day and week.
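Many gauge-calibration schemes exist. As a deliberately simple illustration, and an assumption rather than the FAO algorithm, one can derive a single multiplicative bias factor from the pixels containing gauges and apply it to the whole satellite field:

```python
def calibrate_satellite(sat_at_gauges, gauge_obs, sat_field):
    # sat_at_gauges: satellite estimates at the gauge pixels (mm);
    # gauge_obs: the corresponding gauge readings, treated as truth;
    # sat_field: the full satellite rainfall field to correct.
    bias = sum(gauge_obs) / sum(sat_at_gauges)
    return [round(v * bias, 2) for v in sat_field]
```

If gauges report 50% more rain than the satellite sees at their pixels, every pixel in the field is scaled up by that factor. Real schemes interpolate the gauge-satellite difference spatially rather than applying one global factor.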
**AQUACROP: an irrigation model**
AQUACROP, a new version of CROPWAT, is a Windows-based software programme designed to simulate biomass and yield responses of field crops to various degrees of water availability. Its application encompasses rainfed as well as supplementary, deficit and full irrigation. It is based on a water-driven growth engine that uses biomass water productivity (or biomass water use efficiency) as the key growth parameter ($WP_b$). The model runs on daily time-steps using either calendar time or thermal time. It accounts for three levels of water-stress response (canopy expansion rate, stomatal closure and senescence acceleration), for salinity build-up in the root zone and for fertility status. An important peculiarity of the model is that the $WP_b$ parameter is normalized for climatic conditions (specifically, the evaporative demand of the atmosphere $-ET_o-$ and the CO$_2$ concentration), so it simulates biomass and yield also under various global warming and elevated CO$_2$ conditions. It allows users to evaluate different water-management strategies, to develop recommendations for improved irrigation practices and to plan irrigation schedules under varying water availability/supply.
AQUACROP is a tool for (i) predicting crop production under different water-management conditions (including rainfed and supplementary, deficit and full irrigation) under present and future climate change conditions, and (ii) investigating different management strategies, under present and future climate change conditions. It is appropriate for risk-management and adaptation-capacity studies of cropping systems. It is site-specific but can be applied at any location in the agricultural sector, and its results can be extrapolated to larger scales through GIS applications.
The key inputs to the AQUACROP model are: basic climatic data (temperature, rainfall and reference evapotranspiration; the CLIMWAT 2.0 database, provided with the program as an option); basic soil data (texture for each soil layer, from one to many, along the profile depth); crop data (already calibrated crop parameters are provided with the model); and selected management conditions. The key outputs are: canopy development, above-ground biomass, final yield, crop water consumption (with separation between soil evaporation and crop transpiration), and general crop water and irrigation requirements. AQUACROP will be provided with calibrated parameters for all major and underutilized agricultural crops and can be applied worldwide.
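In its simplest form, the water-driven growth engine described above reduces to two lines: biomass is the normalized water productivity times the sum of daily transpiration normalized by reference evapotranspiration $ET_o$, and yield is biomass times a harvest index. The sketch below uses illustrative parameter values and omits the model's stress, salinity and fertility responses.

```python
def aquacrop_core(transpiration, et0, wp_star=33.7, hi=0.48):
    # Biomass (g/m2) = WP* x sum(Tr_i / ETo_i); yield = HI x biomass.
    # transpiration / et0: daily crop transpiration and reference
    # evapotranspiration (mm).  wp_star (normalized biomass water
    # productivity) and hi (harvest index) are illustrative values,
    # not calibrated crop parameters.
    biomass = wp_star * sum(tr / e for tr, e in zip(transpiration, et0))
    return biomass, hi * biomass
```

Because transpiration enters only through the ratio to $ET_o$, the productivity parameter stays comparable across climates, which is what lets the model be re-run under warmer, drier scenarios without re-deriving it.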
It is intended for use by agricultural and extension service professionals with sufficient background and experience in crop and water management. As a means of monitoring changes in crop yields and explaining the impacts of climate change and the need for adaptation, and also in planning appropriate adaptation, AQUACROP has the potential to provide a useful tool.
**CLIMWAT 2.0: a climatic database for AQUACROP**
Under AQUACROP, calculations of crop water requirements and irrigation requirements are carried out with inputs of climatic and crop data. CLIMWAT 2.0 is a climatic database used in combination with AQUACROP that allows the ready calculation of crop water requirements, irrigation supply and irrigation scheduling for various crops for a range of climatological stations worldwide, via a direct link from AQUACROP to an extensive climatic database of more than 5,000 stations.
The combination of AQUACROP and CLIMWAT has the potential to provide a measure of climate change thresholds and to provide information on adaptation.
**Gender Issues**
Climate change is expected to have gender-specific impacts, and accordingly climate change adaptation should include gender-disaggregated approaches. A number of well-developed tools for gender mainstreaming exist within FAO, and are being used in a variety of contexts. Applying these tools to climate change adaptation policy making and implementation will form an integral component of FAO's contribution to climate change adaptation.
*Gender analysis*: Making gender disaggregated data available and supporting relevant research; evaluating policies, institutions and programmes for gender specific impacts, gender balance and action on gender issues;
*Gender Impact Assessment (GIA)*: Producing gender analysis of adaptation to climate change and vulnerability to its impacts for more sustainable mechanisms of risk management;
*Gender budgeting*: applying gender budgeting to climate change funds;
*Promoting women in decision-making*: institutional mechanisms for the advancement of women, e.g. quota systems; establishing task forces and other organisational development mechanisms; innovative types of outreach to women, including awareness raising, capacity building, education and training for women and men (including changing curricula, public campaigns, gender sensitivity training, guidelines for gender mainstreaming, etc.); collecting and sharing good practices at local, national and international levels, including peer group review of good practice and promoting successful strategies. Developing and applying such tools successfully depends on an appropriate legislative environment, demonstrated political will and support, and the necessary funding being in place.
**Global Land Cover Network**
The goal of the Global Land Cover Network (GLCN) is to improve the availability of global information on land cover and its dynamics. Currently available land cover information often lacks the required levels of accuracy, or is collected using a variety of different standards, thus preventing comparison between regions and compilation of global totals. Land cover mapping and monitoring activities provide information that is essential for the sustainable management of natural resources and environmental protection.
FAO and its partners have developed a broad suite of software and methodologies to allow countries and individual organizations the ability to: gather and acquire land cover and environmental data; undertake photo interpretation and data analysis; generate land cover change analysis products and develop environmental databases with environmental as well as socio economic information. All procedures are undertaken using harmonized methods and standards to ensure a broad stakeholder access to what is generated and to allow the development of regional and global datasets. Land cover and land cover change data are fundamental to the sustainable management of natural resources, environmental protection, food security and humanitarian programmes. They are also essential for climate change monitoring, prediction and adaptation strategies.
b) **Views on lessons learned from their application**
Information tools are a necessary but insufficient resource for ensuring effective climate change adaptation. A strength of FAO's tools is that they have been developed under varied conditions, so that they can be applied at any spatial level, from on-farm up to global. The limited amount of input data required to run most of FAO's information tools makes them a good compromise for dealing with the sparse climate observation networks in many developing countries. They are also applied by several UN and international agencies for national, regional and global assessments.
Global Land Cover Network (GLCN) is based on the success of the FAO Africover project which was established in response to a number of national requests for assistance in the development of reliable and georeferenced information on natural resources. These data are needed for: early warning; food security; agriculture; disaster prevention and management; forest and rangeland monitoring; environmental planning; watershed catchments management; statistics on natural resources; biodiversity studies, and climate change monitoring, modelling and adaptation activities.
c) **Opportunities, gaps, needs, constraints and barriers**
All FAO climate impact assessment tools are freeware and, although most current versions run under the MS Windows environment, future versions will be developed for an “open-source” environment.
In order to utilize gender mainstreaming tools in the climate change adaptation policy process, gender-disaggregated data are needed, as is empirical evidence demonstrating the gender differences of vulnerabilities and adaptive capacities.
The main constraints are the lack of access to raw data (e.g. expensive satellite data) and the lack of standards and common methods, which leads to incompatibility between datasets and restricted access, especially to historical ones. FAO has developed common standards to overcome this problem, including the Land Cover Classification System (LCCS). LCCS is a scale-independent method of classifying land cover. The approach supports all types of land cover monitoring and enables a comparison of land cover classes regardless of data source, sector or country.
d) **Possible ways to develop and better disseminate methods and tools**
The Food and Agriculture Organization of the United Nations (FAO) and the United Nations Environment Programme (UNEP), with the financial and technical support of the Government of Italy through the “Cooperazione Italiana” and the “Instituto Agronomico L’Oltremare”, have created the Global Land Cover Network (GLCN) in response to requests by stakeholders. Specifically, the objective of the initiative is to build a global collaboration that develops a fully harmonized approach, making reliable and comparable land cover and land cover change data accessible to local, national and international initiatives. In particular, GLCN is intended to support the stakeholder community in developing countries that have difficulty in producing and making accessible reliable, consistent and updated information. The GLCN has a major mandate on outreach and on tools and data dissemination.
e) **Training opportunities**
All FAO climate impact assessment tools are intended for use by agrometeorological, agricultural and extension services professionals with sufficient background and experience in climate, crop and water management.
With particular reference to the climate impact assessment tools, FAO has developed the concept of the national turn-key crop monitoring and forecasting system, called the “Crop Monitoring Box” (CM Box), which is a training package built around the FAO software suite for analysing weather data and assessing their impact (current and future) on crop production. The training covers the principles and practice of operating a national crop yield monitoring and forecasting system in a food security context, in particular the interpretation of the maps and other outputs produced by the various tools. By the end of the training, national experts are expected to be able to operate the software independently, including inputting crop and weather data and integrating ground and satellite information. One essential ingredient of the training is the development of the capacity to prepare crop and weather reports for the national food security system. The training makes use of national datasets that are prepared by the trainees themselves before the training actually starts. The CM Box is organized as individual modules, of which countries can select one or more.
Training on gender mainstreaming for climate policymakers is an outstanding issue.
A key area to the success and continuity of GLCN activities has been the importance given to training and the development of national capacity in the methodologies and applications required to undertake, maintain, archive and disseminate land cover and environmental data and information. This process has been mainly achieved through regional and national training
workshops and programmes and has allowed institutions and individuals to become self-sufficient and in turn provide support and training to other GLCN partners.
GLCN also develops a number of other products to support stakeholders, including newsletters, distance learning tools, forums, Web pages, databases, manuals, documents and brochures, as well as presentations and seminars at major international conferences and events.
**Conclusion**
Many FAO information and data resources constitute an essential ingredient in both:
i. promoting development and dissemination of methods and tools for impact and vulnerability assessments, such as rapid assessments and bottom-up approaches, and
ii. assessing and improving adaptation planning, measures and actions, and integration with sustainable development.
FAO has a track record of applying its information and data resources successfully in country specific development facilitation, including responses to variable weather and natural disasters.
FAO’s work on climate change mitigation has been complemented during 2006 by an increasing number of climate change adaptation measures involving agriculture, forestry and fisheries, and by processes of institutional strengthening within these activities. The multidisciplinary approach of FAO, combined with large thematic geo-referenced databases and various software applications, allows FAO to contribute to reducing the vulnerability of agricultural production systems to climate variability and change.
A country’s ability to gather, interpret and use data on land cover change is essential for policy makers and decision makers aiming to formulate informed and appropriate climate change adaptation strategies. The use of agreed methodologies and standards ensures the data compatibility needed to develop the regional and global datasets required for modelling, including the identification of climate change vulnerability hotspots. Activities are undertaken through the active collaboration of member countries, with the assistance of adequate capacity building programmes. However, for some developing countries additional financial support is required to allow their full participation in these programmes.
| Application | Product | Tool | Input data | Spatial scale | Target audience |
|----------------------------------------------------------------------------|-------------------------------------------------------------------------|-----------------------|-------------------------------------------------|------------------------|----------------------------------------------------------------------------------|
| Past and current vulnerability risk assessments of agriculture sector. Definition of best practices to adapt to climate change | Climate maps, Rainfall estimate, water stress maps, crop suitability, extreme weather events risk analysis, date of planting, length of growing period | AMS, AWS, CLIM, CROW, CYTA, FRE | Historical, real-time meteorological data and satellite imagery | Regional, National, sub-National, On-farm | Extension services, farmers, international disaster agencies, insurance companies |
| Short-term forecasts (1-5 days) | Rainfall estimate, water stress maps, crop suitability, extreme weather events risk analysis, yield forecast, date of planting, length of growing period | AMS, CLIM, CYTA, DFO, INS, SWG | Real-time meteorological data and satellite imagery, short-term forecasts | Regional, National, sub-National, On-farm | Early warning systems for food security and for disease outbreaks, emergency response networks, extension services, farmers, international disaster agencies, insurance companies |
| Medium range forecast (5-20 days) | Rainfall estimate, water stress maps, crop suitability, extreme weather events risk analysis, yield forecast, date of planting, length of growing period | AMS, CLIM, CYTA, DFO, INS, SWG | Real-time meteorological data and satellite imagery, medium range forecasts | Regional, National, sub-National, On-farm | Early warning systems for food security and for disease outbreaks, emergency response networks, extension services, farmers, international disaster agencies, insurance companies |
| Seasonal climate projections (1-6 months) | Water stress maps, crop suitability, extreme weather events risk analysis, yield forecast, date of planting, length of growing period | AMS, AWS, CLIM, CYTA, DFO, SWG | Historical meteorological data, seasonal climate forecasts | Regional, National, sub-National | Early warning systems for food security and for disease outbreaks, emergency response networks, extension services, farmers, international disaster agencies, insurance companies |
| Climate change scenarios (2015, 2030, 2050, 2070) | Future weather, water stress maps, crop suitability | AMS, AWS, CLIM, CROW, CYTA, SWG | Historical meteorological data, global and regional climate models | Global, Regional, National, sub-National | Strategic planners at regional and national level, decision-makers at all levels of government, NGOs, and communities, insurance and financial markets |
Table 1. Link between Application, Product, Tool, Input data, and users of climate information.
Acronyms: AMS = AgroMetShell; AWS = Agroclimatic Water Stress Maps; CLIM = FAOClim database; CROW = Crop Growing Period; CYTA = Crop Yield Trend Analysis; DFO = Dynamic Farming Optimization; FRE = FAO Rainfall Estimate; INS = Weather-based Yield Index for Crop Insurance; LOC = New_LocClim; PCA = Pixel Clustering Analysis; SWG = Stochastic Weather Generator.
Disaster Risk Reduction Tools and Methods for Climate Change Adaptation
Inter-Agency Task Force on Climate Change and Disaster Risk Reduction
“The view that disasters are temporary disruptions to be managed only by humanitarian response, or that their impacts will be reduced only by some technical interventions has been replaced by the recognition that they are intimately linked with sustainable development activities in the social, economic and environmental fields. So-called ‘natural’ disasters are increasingly regarded as one of the many risks that people face.”
I. Introduction
Floods, storms, droughts, and extreme temperatures strike communities around the globe each year. The top ten disasters of 2004, in terms of the number of people affected, were all weather and climate-related. These types of disasters have occurred throughout history but with total damages amounting to US$130 billion from just these ten events, it is clear that the necessary steps to reduce disasters have not yet been taken.\(^2\) As climate change begins to manifest itself—in the form of increased frequency and intensity of hazards such as floods, storms, heat waves, and drought—the need for communities to address climate risks is becoming urgent. The coming decades are likely to bring, among other changes, altered precipitation patterns so that many areas will experience more frequent floods and landslides, while others will experience prolonged drought and wildfires.\(^3\)
As many communities are not prepared to cope with climate disasters facing them today, an ongoing challenge is to build their resilience. In answer to this challenge, disaster risk reduction (DRR)\(^4\) aims to address a comprehensive mix of factors contributing to communities’ vulnerabilities. There are numerous tools and methodologies that have been developed to put this approach into practice. The value of DRR and the experiences gained by DRR practitioners have been increasingly tapped by organizations active in climate change adaptation. For example, UNDP, OECD, the World Bank, and others have recently explored linkages between the two (see references).
This paper provides a brief description of DRR and then reviews a selection of tools that can provide an effective framework for combining the knowledge and experiences from the disaster management and climate change communities to build adaptive capacity.
II. The Disaster Risk Reduction Approach
The disaster management community has been evolving. Until the 1990s, disaster management was primarily focused on the response of governments, communities, and international organizations after
\(^1\) ISDR, 2004
\(^2\) www.cred.be, see 2004 statistics
\(^3\) IPCC, 2001
\(^4\) The International Strategy for Disaster Reduction, 2004, defines disaster risk reduction as: “The systematic development and application of policies, strategies and practices to minimise vulnerabilities and disaster risks throughout a society, to avoid (prevention) or to limit (mitigation and preparedness) adverse impact of hazards, within the broad context of sustainable development.”
disasters. This included the humanitarian aspects of relief, such as providing medical care, food and water, search and rescue, and containing the secondary disasters (e.g. fires that occur following an earthquake). Even now, only a tiny amount of humanitarian funding is spent on disaster risk reduction. Although the international community has increasingly realized that countries experience disasters differently, the unfortunate truth is that poorer countries are hit hardest, as they do not have sufficient resources to prepare for disasters. In addition, the socio-economic impacts following a disaster may linger far longer in poorer nations. A UNDP report states, “In 1995, Hurricane Luis caused US$ 330 million in direct damages to Antigua, equivalent to 66 percent of GDP. This can be contrasted with the larger economy of Turkey that lost between US$ 9 billion and US $13 billion in direct impacts from the Marmara earthquake in 1999, but whose national economy remained largely on track.” The same report found that “while only 11 percent of the people exposed to natural hazards live in countries classified as low human development, they account for more than 53 percent of total recorded deaths.”
Disaster risk reduction is increasingly recognized as a major factor in achieving sustainable development, although the systematic integration of DRR into development planning and activities remains a challenge. Time and again, investments in development have been wiped away by disasters, and these damages have only increased as countries grow. According to Munich Re, the recorded economic value of disaster damage has increased from US$ 75.5 billion in the 1960s to US$ 659.9 billion in the 1990s. These figures do not account for the losses suffered by communities in terms of lost lives and livelihoods.
To reduce human and economic losses, the *Hyogo Framework for Action 2005-2015: Building the Resilience of Nations and Communities to Disasters* commits countries and agencies to: integrate DRR into sustainable development; develop and strengthen institutions, mechanisms and capacities to build resilience; and systematically incorporate DRR into emergency preparedness, response and recovery programmes. States have agreed to take the lead in achieving these goals by:
- Strengthening policies and institutions
- Identifying, assessing and monitoring risk and enhancing early warning
- Using knowledge, innovation and education to build a culture of safety
- Reducing underlying risk factors, such as environmental degradation
- Strengthening preparedness for effective response
**Focus on communities and vulnerability**
One of the underlying principles of DRR is to consider disasters as a result of a community’s vulnerability. Vulnerability has been defined as “*a set of conditions and processes resulting from physical, social, economical, and environmental factors, which increase the susceptibility of a community to the impact of disasters.*” Taken from this standpoint and incorporating the resources within the community, risk can be defined as follows:
\[
\text{RISK} = \text{HAZARD} \times \frac{\text{VULNERABILITY}}{\text{CAPACITY}}
\]
By analyzing vulnerabilities and capacities, a fuller picture emerges of how to reduce disaster risks. The DRR approach considers a comprehensive range of vulnerability factors and aims to devise strategies that safeguard life and development before, during, and after a disaster. This approach is useful to the climate change community because, whereas the climate change debate and work has largely taken place at the
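As a purely illustrative sketch, the risk relationship above can be expressed as a simple composite index. All names and the 0-10 scoring scale here are hypothetical, chosen only to show how higher coping capacity lowers risk for the same hazard:

```python
def risk_index(hazard: float, vulnerability: float, capacity: float) -> float:
    """Illustrative composite index: RISK = HAZARD x (VULNERABILITY / CAPACITY).

    Inputs are hypothetical, unitless scores (e.g. on a 0-10 scale).
    Capacity must be positive, since it divides vulnerability.
    """
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return hazard * (vulnerability / capacity)

# Two communities face the same flood hazard (8) with the same
# vulnerability (6); the one with greater coping capacity is at lower risk.
low_capacity_risk = risk_index(hazard=8, vulnerability=6, capacity=2)   # 24.0
high_capacity_risk = risk_index(hazard=8, vulnerability=6, capacity=6)  # 8.0
```

The point of the formula, captured in this sketch, is that risk reduction can work on any of the three terms: reducing exposure to the hazard, reducing vulnerability, or strengthening capacity.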
---
5 UNDP, 2004
6 UNDP, 2004. Amounts in 2002 US dollars.
7 *Ibid*
international and national levels and focused on impacts/hazards, disaster managers have long experience working at the local level on the vulnerabilities that turn an impact into a disaster. Although a national disaster reduction strategy should be in place, DRR activities are often focused on specific locations, addressing the particular vulnerabilities and capacities of the community, its culture and processes. The rationale behind any action and how it is implemented should be firmly rooted in the beneficial impacts that can be realized for the community, and for the most part, these benefits should be measurable. The success of disaster risk reduction activities depends to a large extent on the participation of community members. Adaptation to climate change risks may require effecting changes within local communities, by combining local knowledge and know-how with external information. Or adaptation may simply require scaling up current climate risk reduction efforts, intensifying today’s efforts or extending practices for dealing with well-known hazards to other areas. By adopting the DRR focus of vulnerability reduction and making use of the specific tools developed for DRR, the climate change community can benefit from the vast experience gained in the reduction of hydro-meteorological risks.
III. Disaster Risk Reduction Tools
One common characteristic of DRR tools, as shown in the examples in the annex, is the emphasis on taking a holistic view of disaster risk reduction and the importance of linking with diverse stakeholders. Even for those tools with a narrower target group (e.g. climate forecasters or water utilities), the process requires drawing on wide-ranging sources of knowledge for successful risk reduction in the community. This attempt to analyze risk from diverse perspectives makes the tools suitable for climate change adaptation as impacts will affect various sectors and communities.
DRR tools have been developed by a range of institutions, including research centers, government agencies, the UN, NGOs, and IGOs. These include tools targeted for use at the international to the local levels, implemented in cooperation with diverse partners, and in response to numerous hazards. This paper does not attempt to catalogue the abundance of tools on offer.\(^8\) Instead, it looks at one or two specific examples for each aspect of DRR and briefly examines the links with climate change adaptation.
**Policy and institutions.** It is critical that decision makers at all levels are committed to disaster risk reduction, so that resources and planning guidance are provided. Just as important is the participation and understanding of individuals at the local level where disasters are felt. This category includes the country’s overall policies, the legislative process, and the institutional framework for implementing measures. The tools that have been developed for policy and institutions are aimed at mainstreaming disaster risk reduction into development planning from the national to community level. This aims to bring about a “culture of safety and resilience”.
Because of their comprehensive nature, these tools focus on the *process* of decision-making. For example, the methods recommend piggybacking on existing institutional structures and becoming integrated within national decision-making calendars\(^9\), rather than creating extra workloads through parallel activities. They aim to create an overall picture of risks and the options for reducing them. Through integration with existing development plans, the disaster risk reduction strategies explicitly support national goals. Furthermore, the process outlined in these tools is multidisciplinary, so that planners clearly see how activities in one sector may influence risks in another. The methods highlighted in the annex give an overview of priorities, potential actions, and roles and responsibilities. These tools can be utilized at various levels so that commitment is built throughout the system. For example,
---
\(^8\) See [www.proventionconsortium.org](http://www.proventionconsortium.org) and [www.unisdr.org](http://www.unisdr.org) for resources.
\(^9\) The United Nations Development Assistance Framework (UNDAF) is a good example of an existing development policy tool into which DRR could be incorporated.
SOPAC’s Comprehensive Hazard and Risk Management (CHARM) is implemented through a series of workshops aimed at broad stakeholder consultations at the national and regional levels.\(^{10}\)
These methodologies follow the same general consultative process as existing tools for climate change, such as guidelines on NAPAs and national communications. However, there are great opportunities for synergy between the two political frameworks, as traditionally disaster management has involved ministries of interior, civil defense and health, while the focal point for climate change is usually the ministry of environment. DRR tools encourage the engagement of officials from all relevant sectors, including finance and planning, in addition to interaction with National Hydrometeorological Services (NMHSs), which are the main providers of weather and climate data and information.
**Risk identification and early warning.** This is a familiar area when thinking of disaster management activities—assessing the risks facing a community and determining which ones are likely to affect people. Science and technology are important in understanding the physical processes behind hazards and how they will interact with community infrastructure and activities. For example, an extensive network of monitoring technology may be required for meteorologists and hydrologists to gather data on climate hazards and to build a picture of climate change trends. At the local level, this information is supplemented by community members’ historical knowledge on events such as floods or droughts. Again, vulnerability must be added into the equation because the mere presence of a hazard does not automatically translate into a risk. Risk is *the probability of harmful consequences, or expected losses (deaths, injuries, property, livelihoods, economic activity disrupted or environment damaged) resulting from interactions between natural or human-induced hazards and vulnerable conditions*.\(^{11}\) Communities need information both on hazards and their vulnerabilities to determine priorities for reducing their risk.
The tools for risk identification may include national assessments to gain a broad understanding of risk for entire sectors or geographical regions. Or, as shown in ADPC’s *Community-Based Disaster Risk Management Field Practitioners’ Handbook*, a tool can provide a framework for assessment teams working in a participatory manner with individual communities. Local knowledge is combined with scientific understanding and advanced technologies to generate a fuller picture of risks. This particular guideline is very specific, carefully describing exercises that help to build a common understanding of risk (e.g. through creating seasonal calendars or histories of floods/droughts over the last decades). Once they are understood and there is a system for monitoring them, it is also important to establish a communication system for early warning. The WMO *Guidelines for Climate Watches* are directed at national meteorological services to support the clear communication of climate information to users in a timely manner.
Including climate change in the disaster risk reduction framework enhances the analysis because climate change is likely to bring hazards for which there is no existing experience. For example, sea level rise or extreme events that go beyond today’s boundaries will require planners to look outside of currently applied risk reduction measures. Climate change, urban growth, economic globalization, and emerging health issues are combining to rapidly change the nature of communities’ vulnerability.
**Knowledge management and education.** Supporting the local community’s involvement is crucial for implementing strategies that will lead to a culture of safety. This area of disaster risk reduction includes managing the information and data that has been gathered, educating people about their risks, and building people’s capacity to devise and implement risk reduction measures. The information and knowledge should not flow in only one direction; planners must also learn about the community’s needs and wants so that they can better support development and risk reduction. These experiences can then be shared with other communities and successes replicated.
\(^{10}\) ADB, 2002
\(^{11}\) ISDR online library: http://www.unisdr.org/eng/library/lib-terminology-eng%20home.htm
An important step is to translate risk information into dialogue with communities. Emergency Management Australia’s guide on community awareness goes through the steps of identifying the target audience, choosing the best methods of communication, and evaluating the results. As people are constantly bombarded by information, it is not sufficient to merely send messages out. Instead, these tools stress the importance of defining *what action* needs to be taken by the community, whether that is to change their behavior or to examine their disaster risks more closely. The process outlined by knowledge management and education tools, such as WMO’s guidelines, requires cooperation between scientists and practitioners so that the necessary technical information is conveyed in a form that community members can use. It also requires regular assessments by practitioners and end users to improve efficiency and strengthen interactions. The WMO *Guide to Climatological Practices* offers the World Climatic Atlas Project, climate maps, interpretation of climatological information, and climate classifications (e.g. bioclimatic, genetic, and special classifications).
Conveying the concepts and risks associated with climate change to people at the local level is a challenge for the adaptation community. The uncertainty regarding impacts on any particular location presents a unique hurdle in making climate change relevant to people’s daily lives when they may be focused on their daily needs. Short-term considerations will take precedence over adapting to impacts that may not be immediately apparent. Linking into existing disaster risk reduction efforts for climate variability and extreme events is a good entry point for building understanding and adaptive capacity.
**Reducing Underlying Risks.** It is not enough to identify risks and put institutional capacity in place; action to reduce the factors that increase risk is also necessary. This includes measures in environmental management, poverty reduction, protection of critical facilities, networking and partnerships, and financial and economic tools to ensure a safety net in case of disasters. Applications will be most effective if they build on local knowledge, respect local cultures, and provide multiple benefits. For example, conserving wetlands reduces risks through flood mitigation and storm protection while also providing livelihood support, water purification, and erosion control. Measures may strive to reduce the extent to which a planned development project will increase a community’s vulnerability. In this case, a risk assessment should be conducted as part of the project’s evaluation (e.g. planners for a waterfront property development should consider how sea level rise and storms may affect future residents), much the same way an environmental impact assessment or cost-benefit analysis is now often included. There are also measures to reduce the risks already existing in infrastructure and systems throughout the world, for example through retrofitting or enforcing land use zones.\(^{12}\)
This type of DRR tool is by necessity sector-focused because the tools aim to develop concrete, detailed measures. This normally involves specialized knowledge and skills. The Pan American Health Organization (PAHO)’s guidelines, which have been used throughout Latin America and the Caribbean, focus on drinking water and sewerage systems. This tool guides a team within a water company through a vulnerability analysis and in devising risk reduction measures. It takes them through the process of identifying strengths and weaknesses in the physical infrastructure and organizational systems.\(^{13}\) This tool can help ensure that measures are in place to guarantee that drinking water supplies are protected from likely hazards and that the sewerage system would not break down in the event of a disaster leading to the spread of epidemics. Using this tool for adaptation could involve, for example, looking at how climate change will affect the water company’s ability to maintain service with any changes in water resources. Another example from Switzerland discusses how more than 6% of the country is prone to slope instability. Regional authorities produced hazard maps and developed a system of three land use
\(^{12}\) Reducing the development project-induced vulnerability is “prospective risk management”, while reducing existing vulnerability is “compensatory risk management”. See UNDP, 2004.
\(^{13}\) PAHO, 1998
zones (indicated by three different colors on the maps) where construction could be undertaken without restriction, with certain safety measures, or not at all.\(^{14}\)
As efforts within the adaptation field proceed from awareness and training to implementing measures on the ground, these tools will be useful for organizations aiming to tangibly reduce vulnerability. PAHO’s guideline looks at likely impacts on water systems from various hazards, including earthquakes, floods, and droughts. By including climate change as a factor in designing risk reduction, the water system could be further strengthened and the margin of safety broadened.
**Preparedness and response.** DRR preparedness and response tools are used ahead of a disaster so that communities are ready when a hazard strikes. Preparedness can mean having sufficient relief supplies and medical care, in addition to establishing coordination mechanisms between key organizations and individuals. This is the traditional realm of disaster management, which recent disasters like Hurricane Katrina have shown is vital to limiting damage in the hours and days afterwards. The reconstruction and recovery period is the most opportune time to incorporate risk reduction: political will and public awareness are high, and additional resources are often available. However, there is great pressure to get homes rebuilt and infrastructure systems running very quickly so there can be a return to normalcy, with the result that DRR does not often take place during recovery. If risk reduction is not incorporated at this time, it is likely that vulnerabilities will merely be rebuilt rather than reduced.
Aside from large catastrophes, the damage from small- and medium-scale recurrent disasters is often devastating and flies under the radar of the international community. These impacts are likely to accumulate, with the result that a vicious cycle ensues in which successive disasters erode the community’s resilience and more losses are suffered with each event. Preparedness and response are essential for communities facing such hazards.
Preparedness and response tools may include guidelines for needs assessments and recovery planning, standards for humanitarian relief, and checklists for preparedness. The International Federation of the Red Cross and Red Crescent Societies (IFRC) developed a self-assessment to support national organizations in analyzing their policies and plans, organizational structure, and capacities to respond to a disaster. One area for improvement for these tools would be to ensure that communities are prepared not only for disasters they have faced in the past but also new hazards that may accompany climate change. Climate change and DRR organizations could look beyond the usual severity of hazards and their usual areas of impact to jointly consider new risks and the preparedness mechanisms necessary to address them.
IFRC’s \textit{Guidelines for Emergency Assessment} is designed for generalists and based on decades of experience following emergencies. Another tool developed by ECLAC, on the other hand, targets specialists for socio-economic impact assessment. It aims to provide insight into how disasters impact society directly and indirectly on a longer time horizon. The capacities necessary for adapting to climate change could be included in such an assessment, including factors like livelihoods and community organization, among others. The output from these assessments should feed into reconstruction plans.
IV. Conclusion
Climate change is recognized as an emerging risk that must be included in current DRR and development planning. Policy makers and practitioners working on climate change adaptation should benefit from the experiences and knowledge amassed by the DRR community in dealing with extreme weather events and recurrent hydro-meteorological hazards. Utilizing DRR tools developed for existing risks is one such
\(^{14}\) Raetzo et al., 2002
opportunity. The tools presented in this paper are only a small selection of those used by the DRR community. New risks and the aggravation of existing risks posed by climate change need to be more comprehensively addressed in DRR tools. Committed individuals and organizations working in disaster risk reduction and climate change are steadily coming together toward integrated climate risk management. This collaboration will help to give communities a broader understanding of their vulnerabilities, while at the same time expanding effectiveness by working with partners in the fields of development, environment, poverty reduction, financial planning, and health. By focusing on decreasing vulnerabilities to current weather- and climate-related risks, communities will benefit now and be prepared for the risks posed by climate change.
## Selected DRR Tools
### Political commitment and institutional aspects
| Title | South Pacific Applied Geoscience Commission (SOPAC)’s Comprehensive Hazard and Risk Management (CHARM) |
|-------|--------------------------------------------------------------------------------------------------|
| Description | CHARM is defined as a comprehensive hazard and risk management tool for use within an integrated national development planning process. It aims to facilitate greater collaboration between risk reduction projects at all levels (though mostly at the national level with participation from stakeholders for decision-making) and across sectors to enhance sustainable development. CHARM takes all hazards into account across the whole country. |
| Appropriate use | This tool can be used for mainstreaming disaster risk reduction into ongoing national development planning processes. It aims to address all hazards including natural and human-induced, and also to help identify measures that can be implemented in all phases of disaster management (prevention, preparedness, response, and recovery). The emphasis is on bringing a wide range of stakeholders together for risk reduction to enhance effectiveness of the combined efforts. |
| Scope | National level |
| Key output | The immediate output of the CHARM process is to develop a matrix summarizing national risks and risk reduction measures (or “treatment options”) that considers the activities of all agencies. Planners then target the gaps identified in the matrix.
Step 1 – Context established
Step 2 – Risks identified
Step 3 – Risks analyzed
Step 4 – Risks evaluated
Step 5 – Risks treated and results evaluated |
| Key input | Step 1 – Identification of national development priorities, organizational issues, and initial risk evaluation criteria
Step 2 – Identification of hazard, vulnerable sectors, and impacts
Step 3 – Assessment of risks with stakeholders based on agreed indicators, such as frequency of hazards, potential impacts, etc.
Step 4 – Determination of acceptable levels of risks and priorities for action
Step 5 – Selection of risk reduction measures; Assignment of roles and responsibilities for all partners; Evaluation against agreed criteria |
| Ease of use | Readily usable by those with experience in policy analysis, developing work plans, and inter-agency planning |
| Training required | Knowledge of tools for each step is needed (e.g. to rank development challenges, develop budgets) |
| Training available | Training is available through broad stakeholder consultation workshops involving both national and regional stakeholders. SOPAC has also developed a manual. |
| Computer requirements | Word processing and spreadsheets |
| Documentation | SOPAC, 2001. *Comprehensive Hazard Risk Management Regional Guidelines for Pacific Island Countries*. Suva: South Pacific Applied Geosciences Commission.
Guideline and manual available in print or on CD (see Contacts below) |
| Applications | CHARM has been used for planning in Palau, Kiribati, Vanuatu, Fiji, and Tonga, and it has also been aligned to the Joint Australia-New Zealand Risk Management Standard |
|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Contacts for framework, documentation, technical assistance | SOPAC Secretariat
Private Mail Bag, GPO
Suva, Fiji Islands
Tel: +679 338 1377
Fax: +679 337 0040
Atu Kaloumaira, Community Risk Programme Advisor
Email: firstname.lastname@example.org
Noud Leenders, Community Risk Management Advisor
Email: email@example.com |
| Cost | Free |
| References | see Documentation |
## Risk Identification and Early Warning
| Title | Asian Disaster Preparedness Center (ADPC)’s Community-Based Disaster Risk Management Field Practitioners’ Handbook |
|-------|------------------------------------------------------------------------------------------------------------------|
| Description | The handbook briefly explains the concept of community-based disaster risk management (CBDRM) and provides practical tools that can be applied in community-level programming. The Handbook is divided into four parts: 1) an introduction to CBDRM; 2) specific step-by-step exercises; 3) cross-cutting issues of gender and communication; and 4) disaster risks in Southeast Asia.
The tools in Section 2 cover seven types of activities in CBDRM:
1. Selecting the community
2. Rapport building and understanding the community
3. Participatory disaster risk assessment
4. Participatory disaster risk management planning
5. Building/training a community disaster risk management organization (CDRMO)
6. Community-managed implementation
7. Participatory monitoring and evaluation
The resource pack for risk identification (Step 3) includes instructions and guiding questions for the most commonly used participatory assessment tools, e.g. constructing timelines, hazard maps, rankings, and calendars. |
| Appropriate use | This handbook is a comprehensive how-to guide that can be used to assist project teams working at the local level to ensure the participation of community members in reducing disaster risks. Each of the seven steps, particularly Step 3, is clearly outlined, along with simple instructions for group exercises, information to gather, and stakeholders to involve. |
| Scope | Community level |
| Key output | Overall: “The CBDRM process should lead to progressive improvements in public safety and community disaster resilience. It should contribute to equitable and sustainable community development in the long term.”
Step 1 – Priority vulnerable communities identified
Step 2 – Trust between community and project members; understanding of community needs among project members
Step 3 – Disaster risks identified and community members understand these risks
Step 4 – Community disaster risk management plan
Step 5 – CDRMO established and equipped with skills to implement their disaster risk management plan
Step 6 – Planned activities implemented effectively and on time, with participation of stakeholders
Step 7 – Appropriate indicators of program success developed and progress measured, with participation of stakeholders |
| Key input | Step 1 – Information on various criteria developed by decision makers
Step 2 – Information about the community and efforts to develop relationships/understanding with community members
Step 3 – Range of qualitative and quantitative data about the hazards, vulnerabilities, and capacities in the community
Step 4 – Dialogue among stakeholders to identify needed measures
Step 5 – Identification of CDRMO members and training
Step 6 – Responsibilities carried out by members; periodic reviews
Step 7 – Range of qualitative and quantitative data about activities’ impacts; dialogue between stakeholders |
| **Ease of use** | Readily usable |
|-----------------|----------------|
| **Training required** | Some training or experience in working at the local level would be useful |
| **Training available** | Contact Zubair Murshed at firstname.lastname@example.org or email@example.com |
| **Computer requirements** | None for community risk identification exercises; word processing and spreadsheet skills for program planning and implementation, depending on complexity of local activities; GIS optional for community disaster risk assessment (Step 3) |
| **Documentation** | Imelda Abarquez and Zubair Murshed, 2004. *Community-Based Disaster Risk Management: Field practitioners’ handbook*, Bangkok: Asian Disaster Preparedness Center. Can be downloaded from http://www.adpc.net/pdr-sea/publications/12Handbk.pdf |
| **Applications** | This methodology has been used in several communities throughout South and Southeast Asia. |
| **Contacts for framework, documentation, technical assistance** | Information Manager, PDR SEA
Asian Disaster Preparedness Center (ADPC)
P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand
Tel: (66-2) 516-5900 to 5910, Fax: (66-2) 524-5360
Email: firstname.lastname@example.org, Website: www.adpc.net |
| **Cost** | Free |
| **References** | Arcilla, M. J. D., Delica, Z. G. et al (Eds), 1998. *Project Development, Monitoring and Evaluation in Disaster Situations* (4B). Quezon City, Philippines: Citizen’s Disaster Response Center.
Gutteling and Wiegman, 1996. *Exploring Risk Communication: Advances in natural and technological hazards research*. Dordrecht, The Netherlands: Kluwer Academic Publishers. |
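The ranking exercises in the Step 3 resource pack are commonly run as pairwise comparisons tallied across participants. A minimal sketch, with invented problems and votes:

```python
# Illustrative pairwise-ranking tally, in the spirit of the participatory
# ranking tools in the Step 3 resource pack. Each entry records which of
# two community problems a participant judged more pressing in one
# comparison. Problems and votes are invented for the sketch.
from collections import Counter

pairwise_winners = [
    "flooding", "flooding", "water supply", "flooding",
    "crop loss", "water supply", "flooding", "crop loss",
]

tally = Counter(pairwise_winners)
ranking = [problem for problem, _ in tally.most_common()]
print(ranking)  # most frequently chosen problem first
```

The handbook's tools gather these judgments in group exercises; the tally simply makes the community's collective priority visible for the risk management planning that follows in Step 4.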
| Title | World Meteorological Organization’s *Guidelines on Climate Watches* |
|-------|---------------------------------------------------------------------|
| Description | The guidelines describe how to establish a climate watch system and the information required in a climate watch. Governments typically react to extreme climate events through “crisis management” rather than through continuous risk reduction, and decision makers have cited the lack of sufficiently early information about approaching climate hazards as a barrier to taking action. Climate watches aim to deliver this necessary, accurate information to end-users through the national meteorological services (NMSs) in a timely and useful manner. |
| Appropriate use | This tool targets “the special situation and needs of smaller NMSs, which have limited resources” in establishing the system and issuing climate watches. The process is based on continuous collaboration with climate information users, and it should serve as a mechanism to initiate preparedness activities to limit impacts from climate anomalies (e.g. excessive rainfall over several months). The guideline discusses the rationale for a climate watch system, current activities and capacity in NMSs, characteristics and operation of a climate watch system, format and criteria for issuing a climate watch, and various annexes, including examples of climate watches.
Climate watch format:
- a standard heading, issuing authority, and time and date of issue
- areas for which the advice is current (the appropriate regions)
- period during which the climate watch is valid
- where appropriate, an indication of the reason for the climate watch, which may include graphical information
- relevant skill of long range forecasts
- possible follow-on effects of the climate anomaly
- date at which the next update will be issued |
| Scope | National level; meteorological services |
| Key output | Information about significant climate anomalies for the forthcoming season(s) that may have substantial impacts on a sub-national scale.
A. Establishment of national climate watch system
B. Capacity built for the climate watch system
C. Operation of national climate watch
D. Climate watch system evaluated |
| Key input | A. A network of observation stations; an understanding of the current and recent past climate of the region in question; linkage with regional/global monitoring systems; dissemination channels to reach users; partnerships with key stakeholders
B. Understanding of users’ needs; criteria for issuing a Climate Watch defined (e.g. average rainfalls below a certain level for the season); technical training; strengthening of communication links
C. Monitoring and analysis of climate data; communication with other organizations that maintain their observation systems; communication with intermediaries to translate information for user groups
D. Periodic reviews of the system and process; dialogue with users on their needs to identify gaps in dissemination or content |
| **Ease of use** | Usable by national meteorological services |
|-----------------|------------------------------------------|
| **Training required** | Requires expertise in meteorology/climatology and understanding of climate information users’ needs |
| **Training available** | (see Contacts) |
| **Computer requirements** | Software for forecasting; word processing |
| **Documentation** | WMO, 2005. *Guidelines on Climate Watches*, Geneva: World Meteorological Organization. [http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf](http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf) |
| **Applications** | |
| **Contacts for framework, documentation, technical assistance** | Omar Baddour, Chief, World Climate Data and Monitoring Programme
WMO, 7bis Ave. de la Paix
C.P. 2300, CH-1211, Geneva 2, Switzerland
Tel: (41-22) 730-8268 or 730-8214 Fax: (41-22) 730-8042
E-mail: email@example.com |
| **Cost** | Free |
| **References** | (See references and links in document)
Technical documents published under the WMO World Climate Data and Monitoring Programme (WCDMP)
[http://www.wmo.ch/web/wcp/wcdmp/html/wcdmpreplist.html](http://www.wmo.ch/web/wcp/wcdmp/html/wcdmpreplist.html) |
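A climate watch criterion such as "seasonal rainfall forecast below a defined share of the long-term average" reduces to a simple threshold check. The 60% threshold, the figures, and the message format below are assumptions for this sketch; real criteria are defined by each NMS together with its users:

```python
# Illustrative climate watch criterion check. The 60%-of-average threshold,
# the figures, and the message format are assumptions for this sketch; real
# criteria are defined by each national meteorological service.

SEASONAL_AVERAGE_MM = 450.0   # long-term average seasonal rainfall
WATCH_THRESHOLD = 0.60        # issue a watch below 60% of average

def should_issue_watch(forecast_mm, average_mm=SEASONAL_AVERAGE_MM,
                       threshold=WATCH_THRESHOLD):
    """True when forecast seasonal rainfall falls below the agreed criterion."""
    return forecast_mm < threshold * average_mm

def format_watch(region, valid_period, forecast_mm):
    """Compose core fields of the kind the guidelines require in a watch."""
    return (f"CLIMATE WATCH for {region}, valid {valid_period}: "
            f"seasonal rainfall forecast {forecast_mm:.0f} mm, below "
            f"{WATCH_THRESHOLD:.0%} of the {SEASONAL_AVERAGE_MM:.0f} mm average")

if should_issue_watch(240.0):
    print(format_watch("Western Division", "Nov-Apr", 240.0))
```

A full watch would also carry the issuing authority, forecast skill, possible follow-on effects, and the date of the next update, as listed in the format above.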
| Title | EMA’s *The Good Practice Guide: Community awareness and education in emergency management* |
|-------|------------------------------------------------------------------------------------------|
| Description | During the emergency period, a well-prepared community can reduce the impacts from the disaster. Community members often play a large role in providing relief for each other. This tool presents best practices, ideas, plans, and suggestions for educating the community on disaster preparedness, rather than a how-to guide on communications. The broad framework can be easily adapted for specific communities.
The guide provides the following information:
1. Introduction to the issue and how to get people’s attention
2. Planning a campaign, with information on a range of communication tactics
3. Evaluating a campaign
4. Working with the media, partners and sponsors, and the community
5. Information resources |
| Appropriate use | The guide aims to assist in planning and implementing community awareness and education campaigns. It is aimed at local government authorities, health services, police, fire services, schools, and other community organizations.
It lays out the basic steps of an awareness campaign, describes communication tactics (e.g. print/electronic communications, give-aways, special events, etc.), and outlines a method for evaluating the campaign’s performance. |
| Scope | Local level |
| Key output | Step 1 – Target audience identified
Step 2 – Target audience’s needs and wants identified
Step 3 – Key message developed
Step 4 – Measurable objectives identified
Step 5 – Tactics chosen
Step 6 – Required resources secured
Step 7 – Awareness and education campaign implemented
Step 8 – Awareness and education campaign evaluated and documented – results available |
| Key input | Step 1 – Information on vulnerable groups and potential partners in reaching them
Step 2 – Discussions with community representatives and members; Review of existing sources of information (newspapers, radio, etc.)
Step 3 – Identification of hazards and priority messages
Step 4 – Development of campaign objectives and concrete indicators to measure changes
Step 5 – Identification of effective information sources and delivery methods for the target audience, as well as the required resources
Step 6 – Partnerships developed; Information on available staff and financial resources
Step 7 – Commitment of staff and volunteers; Definition of roles, responsibilities, and a timetable for activities
Step 8 – Review of the campaign against indicators, e.g. through surveys, observation, or discussions |
| Ease of use | Readily usable |
|-------------|----------------|
| **Training required** | None |
| **Training available** | see Contacts below |
| **Computer requirements** | none |
| **Documentation** | EMA, 2000. *The Good Practice Guide: Community awareness and education in emergency management*, Canberra: Emergency Management Australia. [http://www.crid.or.cr/digitalizacion/pdf/eng/doc12728/doc12728.htm](http://www.crid.or.cr/digitalizacion/pdf/eng/doc12728/doc12728.htm) |
| **Applications** | Based on EMA’s experience in Australia, but easily adaptable to other contexts |
| **Contacts for framework, documentation, technical assistance** | Emergency Management Australia
PO Box 1020 Dickson, Australian Capital Territory 2602, Australia
Tel: (61-2) 6256 4600 Fax: (61-2) 6256 4653
Email: firstname.lastname@example.org |
| **Cost** | Free |
| **References** | References included in document on case studies, additional methodologies, communication tips, etc.
Documents on local risk management, community education, community preparedness, and related sites (mostly in Spanish): [http://www.crid.or.cr/crid/MiniKitCommunityParticipation/documentos_interes_participacion_comunitaria_ing.html#capacitacion](http://www.crid.or.cr/crid/MiniKitCommunityParticipation/documentos_interes_participacion_comunitaria_ing.html#capacitacion)
EMA publications on community evacuation coordination, flood warnings, and other response activities at: [www.ema.gov.au](http://www.ema.gov.au) |
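Step 8's evaluation against the measurable objectives set in Step 4 can be as simple as comparing baseline and post-campaign survey indicators. The survey counts and the 20-percentage-point target below are invented for illustration:

```python
# Illustrative evaluation of a campaign against a measurable objective
# (Steps 4 and 8). Survey counts and the 20-point target are invented.

def awareness_rate(aware, surveyed):
    """Share of surveyed residents who knew the key preparedness message."""
    return aware / surveyed

baseline = awareness_rate(84, 400)   # pre-campaign survey
post = awareness_rate(212, 400)      # post-campaign survey
improvement = post - baseline

TARGET_IMPROVEMENT = 0.20            # objective: +20 percentage points
met_objective = improvement >= TARGET_IMPROVEMENT
print(f"awareness {baseline:.0%} -> {post:.0%}, objective met: {met_objective}")
```

The guide also suggests observation and discussions as evaluation methods; surveys are simply the most directly quantifiable.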
## Risk Management Applications
| Title | Pan American Health Organization (PAHO)’s *Natural Disaster Mitigation in Drinking Water and Sewerage Systems: Guidelines for Vulnerability Analysis* |
|-------|----------------------------------------------------------------------------------------------------------------------------------|
| Description | These guidelines provide the basic tools to evaluate the vulnerability of a drinking and sewerage system to various natural hazards. These systems are vital to development, as well as to ensuring a return to normalcy following a disaster. Conducting this vulnerability analysis helps identify preparedness and mitigation measures to limit risks. It also identifies the response mechanisms that should be put into action in the event of a disaster. The risk of damage to water systems increases with factors such as uncontrolled growth in urban areas, deficiencies in infrastructure, and climate change.
The guide is divided into four sections:
- Planning
- Principles of vulnerability analysis
- Description of hazards and impacts
- Conducting a vulnerability analysis for specific hazards |
| Appropriate use | The tool is ideally used during the disaster preparedness phase to identify and implement mitigation measures. It is aimed at engineers and technical personnel of water service companies, to project how the water systems will perform in the event of a disaster and to minimize damage. Vulnerability and probabilities of damage are expressed as various formulae.
The guide provides an overview for each section with issues to consider at each step. It also includes checklists (e.g. Characteristics of an emergency operations center and the emergency committee; Components of an emergency response plan), matrices to describe system vulnerabilities (formats provided in annexes), and extensive information on impacts on water systems from earthquakes, volcanoes, hurricanes, floods, etc. in Chapter 3 and annexes. |
| Scope | Water systems (with coverage being sub-national, municipal, etc.) |
| Key output | - Planning – Emergency committee established within the water company, with roles and responsibilities defined; Emergency operations center established; Partnerships with national organizations established.
- Vulnerability analysis – Identification and quantification of deficiencies in the physical system and the organization’s capacity to provide services in a disaster; Strengths of the physical system and the organization identified; Recommendations for mitigating disaster impacts.
- Mitigation and emergency response plans for *administration/operational* aspects – Identification of roles and responsibilities, resources required, and measures to reduce vulnerability. Measures may include: improvements in communication systems, provision of auxiliary generators, frequent line inspections, detection of slow landslides, repair of leaks, and planning for emergency response.
- Mitigation and emergency response plans for *physical* aspects – Identification of roles and responsibilities, resources required, and measures to reduce vulnerability. Measures may include: retrofitting, etc. |
| **Key input** | • Planning – Information on: national standards, institutional coordination, and resources available for preparedness and response; and dialogue with partners
• Vulnerability analysis – Information on: organizational and legal aspects, availability of resources, hazards and likely impacts on the water system, current state of system and operating requirements, sensitivity of components to hazards, and the response capacity of the services.
• Mitigation and emergency response plans – Information from the vulnerability analysis, priorities for implementing measures, and resources available. |
| **Ease of use** | Can be used as an overview for the emergency committee, although the vulnerability analysis should be conducted by a team of specialists. |
| **Training required** | Vulnerability analysis requires extensive experience in the design, operation, maintenance, and repair of a drinking water and sewerage system’s components. |
| **Training available** | The Virtual Campus of Public Health is a consortium of institutions led by PAHO/WHO for continuing education. [http://www.campusvirtualsp.org/eng/index.html](http://www.campusvirtualsp.org/eng/index.html) |
| **Computer requirements** | Various specialized software, word processing, and spreadsheets |
| **Documentation** | PAHO, 1998. *Natural Disaster Mitigation in Drinking Water and Sewerage Systems: Guidelines for Vulnerability Analysis*. Washington, DC: Pan American Health Organization, Regional Office of the World Health Organization. [http://www.paho.org/English/DD/PED/natureng.htm](http://www.paho.org/English/DD/PED/natureng.htm) |
| **Applications** | Used throughout Latin America and the Caribbean. Case study in documentation from Limon, Costa Rica, to assess earthquake vulnerability. |
| **Contacts for framework, documentation, technical assistance** | Emergency Preparedness and Disaster Relief Coordination Program, Pan American Health Organization
525 Twenty-third Street, N.W., Washington, D.C. 20037, USA
Fax: +1 202-775-4578 E-mail: [email@example.com](mailto:firstname.lastname@example.org)
Contact lists for the Americas during a disaster: [http://www.paho.org/english/DD/PED/contactos.htm](http://www.paho.org/english/DD/PED/contactos.htm) |
| **Cost** | Free |
| **References** | Bibliography available in document |
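The guidelines express vulnerability and damage probabilities through their own formulae and matrices; a much-simplified analogue is an expected-damage calculation per component. The component names, probabilities, and costs below are invented:

```python
# Illustrative expected-damage calculation for water system components.
# Component names, damage probabilities, and repair costs are invented;
# the PAHO guidelines use their own formulae and vulnerability matrices,
# of which this is only a much-simplified analogue.

components = [
    # (component, probability of damage in the hazard event, repair cost USD)
    ("intake works",      0.30, 120_000),
    ("treatment plant",   0.15, 400_000),
    ("transmission main", 0.40,  80_000),
    ("storage tanks",     0.10, 150_000),
]

expected_damage = {name: p * cost for name, p, cost in components}
total = sum(expected_damage.values())

# Rank components to help prioritize mitigation measures
priorities = sorted(expected_damage, key=expected_damage.get, reverse=True)
print(priorities[0], round(total))
```

Ranking components by expected damage is one way to decide where mitigation spending reduces risk most; the real analysis also weighs each component's criticality to continued service.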
| Title | Economic Commission for Latin America and the Caribbean (ECLAC)’s Handbook for Estimating the Socio-Economic and Environmental Effects of Disasters |
|-------|----------------------------------------------------------------------------------------------------------------------------------|
| Description | One of the problems following disasters is that damaged areas are often reconstructed quickly and without adequate resources. The result is that vulnerability is reconstructed rather than reduced. This tool helps to assess the direct and indirect socio-economic impacts of disasters and to identify the most affected areas and priorities for recovery. It outlines the conceptual and general methodological aspects of estimating asset damage, losses in flows of goods and services, and effects on the macroeconomy. The handbook is divided into five sections:
1. Methodological and conceptual framework
2. Assessing impacts in social sectors
3. Assessing impacts on infrastructure
4. Assessing impacts in economic sectors
5. Assessing impacts in cross-sectoral areas, such as the environment, gender, and employment |
| Appropriate use | This type of assessment should follow the emergency phase of a man-made or natural disaster, so that it does not interfere with urgent humanitarian activities; sufficient quantitative information on damages is also more likely to be available after that period. The tool is well suited to organizations that want to understand a wider range of disaster risks: by assessing the direct and longer-term indirect socio-economic impacts, organizations gain a better idea of how to reduce risks in future programs that may have development or environmental goals. The tool can also be adapted to comprehensively assess the socio-economic impacts of climate change. Sections 2-5 each include a definition of the sector, an overview of likely direct and indirect damages, the quantitative and qualitative information needed, possible information sources, general instructions on analyzing the data, and issues to consider in assessing macroeconomic impacts arising from damages in that sector. It is not a step-by-step guide, but rather gives an overview of the general steps to be taken in each assessment. |
| Scope | National or sub-national level; sectoral |
| Key output | A measurement, summarized in table form and in monetary terms where possible, of the impacts of disasters on the society, economy and environment of the affected country or region. Results are divided into direct, indirect, and macroeconomic effects (employment, the balance of payments, public finances, and prices and inflation). The disaster may also have benefits, so the assessment refers to the net effect. The assessment identifies the key geographical areas and sectors affected, together with corresponding reconstruction priorities. It can provide a way to estimate the country’s capacity to undertake reconstruction on its own and the extent to which financial and technical cooperation are needed. For the longer term, it may identify the public policy changes and development programs to address these needs. |
| Key input | Quantitative and qualitative information on conditions both before and following the disaster. The assessment team must decide on the balance between precision and speed in conducting the assessment. “Shadow prices” may be used to try to take into account the indirect effects and externalities of disasters. |
| Ease of use | Experience with economic valuation and assessing damage in specific sectors |
| **Training required** | Specialist knowledge in each sector |
|----------------------|------------------------------------|
| **Training available** | Instituto Latinoamericano y del Caribe de Planificación Económica y Social (ILPES), ECLAC’s training division, offers courses on various economic and social issues of the region.
ILPES, Av. Dag Hammarskjöld 3477, Vitacura, Casilla 179-D, Santiago, Chile
Fax: (56-2) 206-6104, Tel: (56-2) 210-2506/7
Email: email@example.com |
| **Computer requirements** | Various software programs are recommended for some assessments, e.g. Redatam by CELADE (see References) or other GIS programs (ArcView, MapInfo, IDRISI, or GISMAP) |
| **Documentation** | ECLAC, 2003. *Handbook for Estimating the Socio-Economic and Environmental Effects of Disasters*, Santiago, Chile: Economic Commission for Latin America and the Caribbean.
www.proventionconsortium.org/toolkit.htm
Hardcopies available at: ECLAC Publications, Casilla 179D, Santiago, Chile
Email: firstname.lastname@example.org
Fax: + 56 2-210-2069 |
| **Applications** | The handbook has been used throughout Latin America and the Caribbean. Assessments following the 2004 Indian Ocean tsunami also used the methodology, particularly in Indonesia and India. |
| **Contacts for framework, documentation, technical assistance** | Ricardo Zapata-Martí, Focal Point for Disaster Evaluations
Economic Commission for Latin America and the Caribbean
Av. Presidente Masaryk 29,
11570 México, D.F.
Apartado Postal 6-718, México D.F.
Telephone: +52 55-5263-9600, Fax: +52 55-5531-1151
E-mail: email@example.com, firstname.lastname@example.org |
| **Cost** | Free |
| **References** | Redatam software: http://www.eclac.cl/redatam/default.asp?idioma=IN
The Handbook, sample reports, and case studies: http://siteresources.worldbank.org/JNTDISMGMT/Resources/guidelines.htm |
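The handbook's summary output, direct and indirect effects per sector totaled in monetary terms, can be sketched as a simple aggregation. All sectors and figures below are invented:

```python
# Illustrative aggregation of disaster effects by sector, in the spirit of
# the ECLAC summary tables. All sectors and figures (millions USD) are
# invented; the handbook's sectoral methods produce the real inputs.

damages = {
    # sector: (direct damage, indirect losses)
    "housing":        (120.0, 15.0),
    "agriculture":    ( 45.0, 60.0),
    "infrastructure": ( 80.0, 25.0),
    "health":         ( 20.0, 10.0),
}

direct_total = sum(d for d, _ in damages.values())
indirect_total = sum(i for _, i in damages.values())
total_effect = direct_total + indirect_total

for sector, (d, i) in damages.items():
    print(f"{sector:15s} direct {d:6.1f}  indirect {i:6.1f}  total {d + i:6.1f}")
print(f"{'all sectors':15s} direct {direct_total:6.1f}  "
      f"indirect {indirect_total:6.1f}  total {total_effect:6.1f}")
```

The handbook's full assessment also nets out any benefits of the disaster and traces macroeconomic effects (employment, balance of payments, public finances, prices), which a sectoral sum like this does not capture.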
| Title | IFRC's Guidelines for Emergency Assessment |
|-------|------------------------------------------|
| Description | These guidelines provide advice on the organization of emergency assessments, starting with an introduction of key concepts and then outlining each step. The steps are roughly laid out in the order required during an assessment. The chapter on fieldwork notes some basic principles that should underlie activities, such as participation, inclusion of marginal groups, and watching for biases. Results of the general assessment can indicate where a more technical assessment is needed. The framework can easily be adapted to incorporate climate change issues, as it provides fairly general guidelines on the assessment process. |
| Appropriate use | Aimed at generalists in the Red Cross Red Crescent community conducting an assessment to provide an overview of the situation. The guidelines cover the following steps, some of which would overlap:
- Planning
- Office tasks
- Fieldwork (organization and management)
- Analysis
- Reporting
The chapter on fieldwork includes detailed descriptions of various types of information gathering exercises and issues to consider for each one, including tips on establishing trust, cultural sensitivities, suggested questions, and extensive checklists that were compiled by sector specialists. It gives very clear, easily understandable directions for carrying out activities.
The chapter on analysis provides worksheets team members may use in synthesizing information. These are largely based on IFRC’s vulnerability and capacity framework (see References). |
| Scope | Local affected areas |
| Key output |
- Planning – Determination of whether an assessment is needed, objectives and terms of reference, and type of assessment (rapid/detailed/continual).
- Office tasks – Arrangements for coordination, required resources identified, team assembled and briefed, key locations identified.
- Fieldwork – Sufficient information gathered in selected locations on issues identified during planning phase.
- Analysis – Identification of the main problems, affected populations, and local capacity; Recommendations for further actions.
- Reporting – Clear, concise reports following a recommended format: summary; background information; details and assumptions; needs, coping strategies, and assistance; and program proposals. |
| Key input | The guidelines recommend that each of these steps is generally undertaken sequentially, so that the output of the planning phase is used as an input to the office-based tasks, and so on.
- Planning – Information from secondary sources on the nature of the emergency and urgency of an assessment
- Office tasks – Objectives and terms of reference; Information on potential team members’ skills
- Fieldwork – Secondary information, interviews with community members and authorities, group exercises, household visits, etc.
- Analysis – Summaries of information that have been checked for consistency; discussion among team members
- Reporting – Results of the analysis |
| **Ease of use** | Readily usable by anyone conducting an assessment. |
| **Training required** | None |
| **Training available** | Contact regional and country offices: [http://www.ifrc.org/who/delegations.asp](http://www.ifrc.org/who/delegations.asp) |
| **Computer requirements** | None, although word processing and spreadsheets may be useful for analysis and reporting. |
| **Documentation** | IFRC, 2005. *Guidelines for Emergency Assessment*. Geneva: International Federation of the Red Cross and Red Crescent Societies. [http://www.proventionconsortium.org/files/tools_CRA/IFRC-guidelines-assessments-LR.pdf](http://www.proventionconsortium.org/files/tools_CRA/IFRC-guidelines-assessments-LR.pdf) |
| **Applications** | Based on IFRC’s experience in conducting assessments following disasters around the world. |
| **Contacts for framework, documentation, technical assistance** | International Federation of Red Cross and Red Crescent Societies
PO Box 372, CH-1211 Geneva 19, Switzerland
Tel: +41 22 730 4222 Fax: +41 22 733 0395
E-mail: [email@example.com](mailto:firstname.lastname@example.org) Web site: [www.ifrc.org](http://www.ifrc.org) |
| **Cost** | Free |
| **References** | IFRC, 1999. *Vulnerability and capacity assessment: an International Federation guide*. Geneva: International Federation of the Red Cross and Red Crescent Societies
[http://www.ifrc.org/what/disasters/dp/planning/vcaguidelines.asp](http://www.ifrc.org/what/disasters/dp/planning/vcaguidelines.asp)
Sphere Project, 2003. *Humanitarian Charter and Minimum Standards in Disaster Response*. Geneva: Sphere Project.
[http://www.sphereproject.org/handbook/index.htm](http://www.sphereproject.org/handbook/index.htm)
IFRC, 1999. *Code of conduct for the International Red Cross and Red Crescent Movement and Non-Governmental Organizations in Disaster Relief*. Geneva: International Federation of the Red Cross and Red Crescent Societies.
[http://www.ifrc.org/publicat/conduct/code.asp](http://www.ifrc.org/publicat/conduct/code.asp)
IFRC, 2000. *Better Programming Initiative: options for better aid programming in postconflict settings*. Geneva: International Federation of the Red Cross and Red Crescent Societies. |
## References and Further Reading
ADB, 2002. “Community Risk Management for Pacific Islands”. Proceedings of the Regional Consultation Workshop on Water in Small Island Countries, Sigatoka, Fiji, 29 July-3 August 2002.
http://www.adb.org/Documents/Events/2002/Water_Small_Island/Theme2/risk_mgt_fij.pdf
Benson, C. and J. Twigg, 2004. *Measuring Mitigation: Methodologies for assessing natural hazard risks and the net benefits of mitigation—a scoping study*. Geneva: ProVention Consortium
http://www.proventionconsortium.org/files/measuring_mitigation/Measuring_Mitigation_report.pdf
IPCC, 2001. *Climate Change 2001: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change*. Cambridge, UK: Cambridge University Press
ISDR, 2004. *Living with Risk: A global review of disaster reduction initiatives*. Geneva: United Nations International Strategy for Disaster Reduction.
http://www.unisdr.org/eng/about_isdr/bd-lwr-2004-eng.htm
Pew Center, 2005. Hurricanes and global warming: Q&A. Pew Center on Global Climate Change
http://www.pewclimate.org/hurricanes.cfm
Raetzo H., O. Latelin, D. Bollinger, and J.P. Tripet, 2002. “Hazard Assessment in Switzerland – Codes of practice for mass movements” in *Bulletin of Engineering Geology and the Environment*, 61(3):263-268, August 2002.
Sperling, F. and F. Szekely, 2005. *Disaster Risk Management in a Changing Climate*. Informal Discussion Paper prepared for the World Conference on Disaster Reduction on behalf of the Vulnerability and Adaptation Resource Group (VARG). Washington, D.C. http://www.climatevarg.org
Twigg, J., 2004. *Disaster Risk Reduction: Mitigation and preparedness in development and emergency programming*. London: Overseas Development Institute
UK Met Office, 2005. Media toolkit on climate change.
http://www.metoffice.com/corporate/pressoffice/weatherguide/climatechange.html
UNDP, 2002. “A Climate Risk Management Approach to Disaster Reduction and Adaptation to Climate Change”, UNDP Expert Group Meeting on Integrating Disaster Reduction with Adaptation to Climate Change, Havana, June 19-21, 2002 http://www.undp.org/bcpr/disred/documents/wedo/icrm/riskadaptationintegrated.pdf
UNDP, 2004. *Reducing Disaster Risk: A challenge for development*. New York: United Nations Development Programme, Bureau for Crisis Prevention and Recovery. http://www.undp.org/bcpr/disred/rdr.htm
Wisner, B., P. Blaikie, T. Cannon, and I. Davis, 2004. *At Risk: Natural hazards, people’s vulnerability and disasters* (*2nd ed*). London and New York: Routledge
WMO, 1983. *Guide to Climatological Practices*. Geneva: World Meteorological Organization, 2nd Ed.
http://www.wmo.ch/web/wcp/ccl/GuideHome/html/wmo100.html
Tools and Methods
Abarquez, I. and Z. Murshed, 2004. *Community-Based Disaster Risk Management: Field practitioners’ handbook*, Bangkok: Asian Disaster Preparedness Center.
http://www.adpc.net/pdr-sea/publications/12Handbk.pdf
ECLAC, 2003. *Handbook for Estimating the Socio-Economic and Environmental Effects of Disasters*, Santiago, Chile: Economic Commission for Latin America and the Caribbean. www.proventionconsortium.org/toolkit.htm
EMA, 2000. *The Good Practice Guide: Community awareness and education in emergency management*, Canberra: Emergency Management Australia. http://www.crid.or.cr/digitalizacion/pdf/eng/doc12728/doc12728.htm
IFRC, 2005. *Guidelines for Emergency Assessment*, Geneva: International Federation of the Red Cross and Red Crescent Societies. http://www.proventionconsortium.org/files/tools_CRA/IFRC-guidelines-assessments-LR.pdf
PAHO, 1998. *Natural Disaster Mitigation in Drinking Water and Sewerage Systems: Guidelines for Vulnerability Analysis*. Washington, DC: Pan American Health Organization, Regional Office of the World Health Organization.
http://www.paho.org/English/DD/PED/natureng.htm
SOPAC, 2001. *Comprehensive Hazard Risk Management Regional Guidelines for Pacific Island Countries*. Suva: South Pacific Applied Geosciences Commission.
UNDG, 2004. *Common Country Assessment and United Nations Assistance Development Framework: Guidelines for UN Country Teams preparing a CCA and UNDAF*. New York: United Nations Development Group. [http://ww.undg.org/content.cfm?id=177](http://ww.undg.org/content.cfm?id=177)
WMO, 2005. *Guidelines on Climate Watches*. Geneva: World Meteorological Organization [http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf](http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf)
Introduction
Different climatic regimes lend themselves to different trends in hydrometeorological extremes, some of which may pose considerable risks to life, infrastructure, socio-economic development and the environment. On the other hand, windows of opportunity afforded by favorable climatic conditions need to be seized in advance to enhance socio-economic development. Positioning climate information and services as effective tools for risk management, as well as for leveraging such opportunities, is therefore a major thrust of WMO programmes aimed at contributing to the well-being of its Members. WMO, in collaboration with its Members, has long built and maintained climate networks, constantly striving to keep pace with scientific and technological advances, and is a natural partner in dealing with climate-related issues. Its clear goal for the future is to bring the benefits of rapidly advancing climate knowledge to every section of society.
A burgeoning variety of tools and processes is being developed to improve decision-making so as to reduce the risks, and seize the opportunities, associated with climate variability and change. Acknowledging that a much wider array of tools and approaches to adaptation exists, WMO's adaptation-related tools focus primarily on capacity building, a proactive role and awareness raising in support of adaptation to climate variability and change.
As many communities are not prepared to cope with the climate disasters facing them today, an ongoing challenge is to build their resilience. In answer to this challenge, disaster risk reduction activities should address the full mix of factors contributing to communities' vulnerabilities. Numerous tools and methodologies have been developed to put this approach into practice, and the value of disaster risk reduction, together with the experience gained by its practitioners, has been increasingly tapped by organizations active in climate change adaptation. In this context, WMO believes that the Nairobi Work Programme will facilitate the identification of options for extending, improving and linking the different screening tools developed by other organizations involved in adaptation, such as UNDP, OECD and the World Bank, and for exploring linkages among their tools. It is also expected that the reports of these organizations to the UNFCCC Secretariat will highlight some of the common problems and issues in developing and implementing adaptation tools.
In the following paragraphs, some notable activities of WMO relevant to Methods and Tools for the implementation of the Nairobi Work Programme are highlighted, with indications of the potential contributions of the WMO and the National Meteorological and Hydrological Services (NMHSs).
Methods and Tools
Climate Watches
Extreme weather events such as hurricanes, thunderstorms and tornadoes require weather watches, for which most NMHSs issue early warnings and undertake special monitoring. In a similar manner, 'climate watches' deal with climatic extremes such as heavy monsoons, flooding, cold waves, heat waves and droughts, which require long-term monitoring against historical observations and interpretation in the context of global climate patterns. By incorporating recent climate analyses as well as outlooks, climate watches serve as advisories and forewarnings of climate anomalies, thereby enabling continuous and timely climate-related risk assessment and management to avoid damage to life and property. The necessary mechanisms have already been put in place in some parts of the world to issue climate watches (e.g., the North American Drought Monitor, the IGAD Climate Prediction and Applications Centre (ICPAC) and the SADC Drought Monitoring Centre in Gaborone, Botswana). WMO works with NMHSs and many institutions worldwide to issue regional climate watch bulletins. Through its World Climate Data Management Programme (WCDMP) and Disaster Risk Reduction (DRR) Programme, in collaboration with the Commission for Climatology (CCl) and the NMHSs, WMO has planned for the four-year period 2008-2011 to establish and implement climate watch systems at the national level. The main focus of these efforts is to improve preparedness and reduce socio-economic vulnerability to climate hazards in developing and least developed countries. Through the DRR Programme, other agencies are expected to be part of the implementation of climate watches, including resource mobilization, partnership in an integrated early warning system and outreach to decision makers at the regional and national levels. Additional information on climate watches can be found in the attached Annex.
**RClimDex**
There is a general consensus within the climate community that any change in the frequency or severity of extreme climate events would have profound impacts on nature and society. It is thus very important to analyze past data to identify extreme events and understand their trends. The monitoring, detection and attribution of changes in climate extremes usually require data at daily resolution, which are observed by NMHSs. Under the supervision of WMO, 27 core indices have been defined, based on daily temperature values or daily precipitation amounts, to identify extreme events and changing trends. Some are based on fixed thresholds that are of relevance to particular applications. WMO, in cooperation with Environment Canada, has developed two software packages, for data homogenization (RHTest) and for indices calculation (RClimDex), based on the powerful and freely available statistical package R, which runs under both Microsoft Windows and Unix/Linux. RClimDex provides a friendly graphical user interface to compute all 27 core indices. This software will allow all interested parties to benefit from improved monitoring of change, with broader spatial coverage than is currently available.
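To make the idea of threshold-based indices concrete, the following is a minimal illustrative sketch in Python (not the RClimDex software itself, which is an R package) of two indices of the kind described: an annual count of frost days and the maximum 1-day precipitation total. The input data are hypothetical.

```python
# Illustrative sketch of two climate extreme indices of the kind RClimDex
# computes from daily station records (not the actual RClimDex code).

def frost_days(daily_tmin):
    """Annual count of days with minimum temperature below 0 degC (FD-style index)."""
    return sum(1 for t in daily_tmin if t < 0.0)

def rx1day(daily_precip):
    """Highest 1-day precipitation total in the period, in mm (RX1day-style index)."""
    return max(daily_precip)

# Hypothetical daily records for demonstration.
tmin = [-3.1, -0.5, 2.0, 4.2, -1.0, 6.3]   # degC
prcp = [0.0, 12.5, 3.2, 40.1, 0.0, 7.8]    # mm

print(frost_days(tmin))  # 3
print(rx1day(prcp))      # 40.1
```

In practice such indices are computed per station and per year over long homogenized series, which is why data homogenization (RHTest) precedes indices calculation.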
**Climate Information and Prediction Services Project (CLIPS)**
The Twelfth World Meteorological Congress (1995) considered that the provision of climate information and predictions would improve economic and social decision making and thereby support sustainable development, and it established the Climate Information and Prediction Services (CLIPS) project within the World Climate Applications and Services Programme (WCASP). WCASP and CLIPS build on rapidly developing atmospheric and oceanographic research, as well as on the wealth of data, experience and expertise within the NMHSs and related entities, and provide a framework for delivering operational, user-targeted climate services. The programme has successfully demonstrated the immense potential of the concept in several regions across the globe, and a global network of CLIPS Focal Points has been established to ensure national and regional coordination of climate products and services. Capacity building and training are integral components of WCASP/CLIPS. The CLIPS project can thus be an effective framework within which regional climate change information and the associated adaptation issues can be integrated. The development of training curricula, training workshops and regional showcase projects, which are key components of CLIPS, needs substantial resource mobilization to cater to the growing needs of climate information providers and user sectors, particularly in developing and least developed countries.
**Regional Climate Outlook Forums (RCOFs)**
Specific institutional frameworks can be established, with appropriate stakeholders taking the lead, to address relevant climate change issues at the local and sector levels. In this context, the Regional Climate Outlook Forums (RCOFs), a concept conceived and supported by WMO as part of its Climate Information and Prediction Services (CLIPS) activities, deserve special mention. RCOFs constitute an important vehicle in developing countries for providing advance information on the climate for the next season and beyond, and for developing a consensus product from among the multiple individual predictions available. RCOFs stimulate the development of climate capacity in the NMHSs and facilitate end-user liaison, generating decisions and activities that mitigate the adverse impacts of climate variability and change and helping communities to build appropriate adaptation strategies. There is great potential for the regional climate activities that currently take place under RCOFs and through CLIPS training to expand, through the actions of the WMO regional associations and the NMHSs (facilitated by the Secretariat), so as to extend the use of currently available tools (e.g., PRECIS, MAGICC) to more countries and to include information on climate change scenarios assembled by the World Climate Research Programme (WCRP), such as the climate projections created for the IPCC Fourth Assessment Report (AR4). This would enable NMHSs to contribute to their countries' national communications to the UNFCCC and to develop or enhance their dialogue with users of climate information on climate risks and vulnerability; it would also support improved regional coordination on climate matters, standardization of tools and increased evaluation of, and feedback on, model outputs. This evolution from the current state, in which some sub-regions are able to undertake RCOFs and develop seasonal predictions, would require technology transfer to enhance computational capability (hardware, software, models and data storage devices), stable Internet access with the ability to download data, trained climate experts and research capacity. WMO will continue to support the RCOF initiatives, as they contribute significantly to building capacity in the NMHSs.
**The Observing System Research and Predictability Experiment (THORPEX): A Global Atmospheric Research Programme**
THORPEX, part of the WMO World Weather Research Programme (WWRP), is an international research and development programme responding to the weather-related challenges of the 21st century by accelerating improvements in the accuracy of 1-day to 2-week forecasts of high-impact weather, for the benefit of society, the economy and the environment. THORPEX research topics include: global-to-regional influences on the evolution and predictability of weather systems; global observing system design and demonstration; targeting and assimilation of observations; and the societal, economic and environmental benefits of improved forecasts. The programme establishes an organizational framework for addressing weather research and forecast problems whose solutions will be accelerated through international collaboration among academic institutions, operational forecast centres and users of forecast products. THORPEX contributes to the development of a future global interactive multi-model ensemble forecast system that would generate numerical probabilistic products available to all WMO Members, including developing countries. The purpose is to provide accurate, timely and specific weather warnings in a form that can be readily used in decision-support tools, and to improve and demonstrate such tools, in order to reduce the impact of natural hazards and to realize the societal and economic benefits of improved weather forecasts.
**WMO Disaster Risk Reduction Programme**
From 1980 to 2005, natural disasters worldwide took the lives of nearly two million people and produced economic losses above one trillion (one thousand billion) US dollars. During this period, weather-, water- and climate-related hazards and conditions accounted for 89% of the total number of disasters, 72% of the loss of life and 75% of the total economic losses. Over the last few decades, however, significant advances in monitoring, detecting, analyzing, forecasting and warning of weather-, water- and climate-related hazards have created significant opportunities for reducing the impacts of related disasters. Over the last 25 years, for example, there has been nearly a 4-fold increase in the number of disasters and a 5-fold increase in the associated economic losses, whereas the loss of life has in fact decreased to nearly one-third of its previous level. This is due to several factors, a critical one being the continuous development of natural hazard monitoring and detection and of specific end-to-end early warning systems, such as those for tropical cyclones.
The international movement in disaster risk reduction is supported by the Hyogo Framework for Action 2005-2015 (HFA), drafted and approved at the World Conference on Disaster Reduction (Kobe, Japan, January 2005), which represents a set of outcomes and results that must be achieved if disaster risk is to be reduced. The HFA describes a range of key thematic areas that need to be addressed, particularly in high-risk nations and communities. These include:
- Governance: organizational, legal and policy frameworks;
- Risk identification, assessment, monitoring and early warning;
- Knowledge management and education;
- Reducing underlying risk factors; and
- Preparedness for effective response and recovery.
Implementation of the HFA is a critical contribution to the development of capacities for climate adaptation and climate-related risk management. The overall framework of DRM seeks to reduce the likelihood of undesired, negative outcomes such as disasters in the course of pursuing positive goals. This involves three types of actions and activities: risk identification, risk reduction and risk transfer.
- Risk identification involves determining risk levels and the risk factors that cause losses. It creates the evidence base needed to support risk reduction and risk transfer decisions and activities;
- Risk reduction involves measures to prevent losses. Examples of such measures include hazard-resistant infrastructure development, land use planning and zoning, early warning systems based on sound science but targeted at mobilizing action at the local level. Other measures include educational and preparedness programmes for a wide variety of actors such as decision makers, operational emergency planning and response staff and the development of contingency plans;
- Risk transfer involves the use of financial mechanisms to share risks and transfer them among different actors (e.g., at-risk populations, government, private sector). Examples of such tools include weather derivatives, catastrophe bonds and different types of insurance.
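To illustrate how an index-based risk-transfer instrument of the kind listed above works, the following is a hedged sketch of a hypothetical rainfall-index insurance contract: it pays nothing at or above a rainfall trigger, the maximum at or below an exit level, and scales linearly in between. All parameter values are invented for illustration.

```python
# Illustrative sketch of an index-based risk-transfer instrument: a
# hypothetical drought insurance contract whose payout depends only on
# measured seasonal rainfall, not on assessed losses.

def index_payout(rainfall_mm, trigger_mm=300.0, exit_mm=100.0, max_payout=10000.0):
    """Payout in USD: zero at/above the trigger, maximum at/below the exit,
    linearly interpolated between the two."""
    if rainfall_mm >= trigger_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return max_payout
    # Fraction of the way from trigger (0% payout) to exit (100% payout).
    shortfall = (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)
    return shortfall * max_payout

print(index_payout(350.0))  # 0.0     (good season, no payout)
print(index_payout(200.0))  # 5000.0  (halfway between trigger and exit)
print(index_payout(80.0))   # 10000.0 (severe drought, full payout)
```

Because the payout is tied to an objectively measured index rather than to loss assessment, such contracts settle quickly and cheaply, which is one reason weather derivatives and index insurance are attractive for at-risk populations and governments.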
WMO, through its Fourteenth Congress (Cg-XIV, May 2003), established a new cross-cutting Disaster Risk Management Programme (renamed the Disaster Risk Reduction Programme after the Fifteenth Congress, 7-25 May 2007), with the vision of further strengthening international and national collaboration in disaster risk management. The Programme addresses the capacity development of NMHSs and their partnerships in supporting disaster risk management (DRM) decisions at the national level across the complete DRM cycle, including prevention and mitigation as well as emergency preparedness, response, recovery and reconstruction. With the threat of climate change and its potential impacts on the trends and severity of natural hazards, WMO is deeply committed to ensuring that the latest climate knowledge and capacities are translated into operational products that enable its Members to enhance their capacities in climate-related risk management.
The WMO Disaster Risk Reduction Programme addresses seven priority areas, providing systematic support to strengthen the capacities of Members' NMHSs for disaster risk reduction. These include:
(a) Mainstreaming technical capacities such as hydro-meteorological risk assessment and early warning systems in the national disaster risk management plans, legislations and development planning. (Adaptation Planning);
(b) Strengthening capacities for meteorological, hydrological and climate-related hazard monitoring, databases, and methodologies for hazard analysis in support of risk identification, risk reduction and risk transfer activities. (Data and Observations, Methods and Tools);
(c) Strengthening capacities for operational meteorological, hydrological and climate-related hazard early detection and warnings, built upon strong governance and organizational and operational processes (Adaptation Planning and Methods and Tools);
(d) Strengthening capacities for provision of meteorological services in support of pre- and post-disaster emergency response and relief operations (Methods and Tools);
(e) Facilitation of partnerships among NMHSs and other key national agencies for a more coordinated approach to disaster risk management (Adaptation Planning);
(f) Strengthening educational and training programmes of NMHSs and their key stakeholders in DRM such as authorities, emergency response operators and media (Adaptation Planning and Socio-economic Information);
(g) Development of public outreach programmes and materials (Environmental and Socio-economic Information).
**Climate Modeling and Downscaling**
Concerted efforts are being made by some NMHSs and leading international climate modeling groups, under the coordination of the WCRP, to develop Regional Climate Models capable of providing regional-scale climate information (typically 25 x 25 km, and higher resolution with appropriate computing facilities) for impact studies, and to facilitate their use within the modest computational infrastructure of developing countries. Global efforts can be spearheaded by WMO to bridge the existing gaps between developed and developing countries in their understanding of climate change impacts, through capacity building and regular updates on the occurrence of extreme events and the associated damages. Developing countries' NMHSs may be provided with appropriate tools to respond rapidly to trends in and developments of regional scenarios, changing needs, emerging issues and specific challenges. In particular, the application of regional climate models in developing countries needs adequate local observational data for model evaluation, and regional expertise to diagnose and interpret the simulated regional features. For regional models to become reliable tools for generating high-resolution climate scenarios, they need comprehensive validation for specific applications and nesting within higher-resolution, verified global models, and developing countries need assistance from the modeling groups in incorporating user feedback to resolve model deficiencies, which can be facilitated by WMO and WCRP. Regional climate models provide more of the useful local information needed by policy makers and planners for adaptation policies and for enhancing the capacity of communities to cope with the future.
Since fine-resolution climate change information for use in impact studies can also be obtained via statistical downscaling methods, coordinated efforts must also be undertaken to use these methods to develop and implement useful and plausible regional-scale climate scenarios. Statistical methods are computationally inexpensive compared with regional climate models, and they can provide site-specific information, which is critical for many climate change impact studies. Since dynamical and statistical downscaling methods are complementary, a coherent strategy is needed to facilitate the transfer of expertise from developed countries and to provide access to downscaling tools in developing countries with limited or modest computational resources.
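The core idea of statistical downscaling can be sketched very simply: calibrate a statistical transfer function between a coarse-grid model variable and local station observations over a historical period, then apply it to future model output. The toy example below uses a single-predictor linear regression and entirely hypothetical data; operational methods use far richer predictors and validation.

```python
# Toy sketch of statistical downscaling (illustrative only, not an
# operational method): fit a linear transfer function between coarse-grid
# model temperatures and a local station, then apply it to future output.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical calibration period: coarse model vs. station temperatures (degC).
model_hist = [14.0, 15.2, 13.5, 16.1, 14.8]
station_obs = [15.1, 16.5, 14.4, 17.6, 16.0]
a, b = fit_linear(model_hist, station_obs)

# Apply the calibrated transfer function to a future coarse-model value
# to obtain a site-specific scenario value.
future_model = 17.0
print(round(a + b * future_model, 2))  # ~18.69
```

The computational cheapness of this step, relative to running a nested regional climate model, is exactly why statistical downscaling is attractive where computing resources are modest, provided long, good-quality station records exist for calibration.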
**Title**: World Meteorological Organization’s Guidelines on Climate Watches

**Description**: The guidelines describe how to establish a climate watch system and the information required in a climate watch. Governments typically react to extreme climate events through “crisis management” rather than through continuous risk reduction, and decision makers have cited a lack of information about approaching climate hazards with sufficient notice to take action. Climate watches aim to deliver this necessary, accurate information to end-users through the national meteorological services (NMSs) in a timely and useful manner.

**Appropriate use**: This tool targets “the special situation and needs of smaller NMSs, which have limited resources” in establishing the system and issuing climate watches. The process is based on continuous collaboration with climate information users, and it should serve as a mechanism to initiate preparedness activities that limit the impacts of climate anomalies (e.g. excessive rainfall over several months). The guidelines discuss the rationale for a climate watch system, current activities and capacity in NMSs, the characteristics and operation of a climate watch system, the format and criteria for issuing a climate watch, and various annexes, including examples of climate watches.

Climate watch format:
- A standard heading, issuing authority, and time and date of issue
- Areas for which the advice is current (the appropriate regions)
- Period during which the climate watch is valid
- Where appropriate, an indication of the reason for the climate watch, which may include graphical information
- Relevant skill of long-range forecasts
- Possible follow-on effects of the climate anomaly
- Date at which the next update will be issued

**Scope**: National level; meteorological services

**Key output**: Information about significant climate anomalies for the forthcoming season(s) that may have substantial impacts on a sub-national scale.
A. Establishment of a national climate watch system
B. Capacity built for the climate watch system
C. Operation of the national climate watch
D. Climate watch system evaluated

**Key input**:
A. A network of observation stations; an understanding of the current and recent past climate of the region in question; linkage with regional/global monitoring systems; dissemination channels to reach users; partnerships with key stakeholders
B. Understanding of users’ needs; defined criteria for issuing a climate watch (e.g. average rainfall below a certain level for the season); technical training; strengthening of communication links
C. Monitoring and analysis of climate data; communication with other organizations that maintain their own observation systems; communication with intermediaries to translate information for user groups
D. Periodic reviews of the system and process; dialogue with users on their needs to identify gaps in dissemination or content

**Ease of use**: Usable by National Meteorological Services

**Training required**: Expertise in meteorology/climatology and an understanding of climate information users’ needs

**Training available**: (see Contacts)

**Computer requirements**: Software for forecasting; word processing

**Documentation**: WMO, 2005. *Guidelines on Climate Watches*, Geneva: World Meteorological Organization. [http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf](http://www.wmo.ch/web/wcp/wcdmp/html/Guidelines%20on%20Climate%20Watches.pdf)

**Applications**: (none listed)

**Contacts for framework, documentation, technical assistance**: Omar Baddour, Chief, World Climate Data and Monitoring Programme, WMO, 7bis Ave. de la Paix, C.P. 2300, CH-1211, Geneva 2, Switzerland. Tel: (41-22) 730-8268 or 730-8214; Fax: (41-22) 730-8042; E-mail: [email@example.com](mailto:firstname.lastname@example.org)

**Cost**: Free

**References**: (See references and links in the document.) Technical documents published under the WMO World Climate Data and Monitoring Programme (WCDMP): [http://www.wmo.ch/web/wcp/wcdmp/html/wcdmpreplist.html](http://www.wmo.ch/web/wcp/wcdmp/html/wcdmpreplist.html)
SPREP submission on Nairobi Work Programme
The Secretariat of the Pacific Regional Environment Programme (SPREP) welcomes this opportunity to provide information on adaptation activities in the Pacific Islands region. We also note that Pacific Island Member States of SPREP have already provided collective views in a submission by the Alliance of Small Island States (AOSIS), with which SPREP is in agreement. SPREP is a regional organisation established by the governments and administrations of the Pacific region to look after the region's environment. It has grown from a small programme attached to the South Pacific Commission (SPC) in the 1980s into the Pacific region's major intergovernmental organisation charged with protecting and managing the environment and natural resources. It is based in Apia, Samoa, with over 70 staff. SPREP's mandate is to promote cooperation in the Pacific islands region and to provide assistance in order to protect and improve the environment and to ensure sustainable development for present and future generations.
(a) Existing and emerging assessment methodologies and tools.
Adaptation to climate change has been a major preoccupation for SIDS in the Pacific region for many years. All Pacific SIDS have completed their first national communications to the UNFCCC, and a number have carried out other in-depth studies related to adaptation. Most Pacific SIDS identified numerous adaptation activities that should be implemented in the near to medium term. Most of these proposed activities have strong community-based components, as the majority fall within the following sectors: coastal zone management, water resources management, food security and human health, all of which are directly linked to communities, their well-being, their livelihoods and their prospects for sustainable development.
The assessments carried out in the context of the first national communications followed models using simple simulations, allowing participants to make predictions of climate change impacts on vulnerable areas. Vulnerability assessments highlighted the key sectors mentioned above, but also looked at coral reefs, agriculture and biodiversity. Examples of the findings of these assessments included a decline in fruit crop production and low export sales due to drought and low rainfall in previous years, and the loss of agricultural land due to the intrusion of seawater through flooding, inundation and coastal erosion, especially in the atoll islands.
There were, however, some limitations to the models, and other modalities were attempted in some Pacific SIDS, such as integrated risk reduction approaches using the Climate Change Adaptation through Integrated Risk Reduction framework and methodology, to demonstrate a risk-based approach to adaptation and to mainstreaming adaptation. A number of case studies were carried out to demonstrate why reducing climate-related risks should be an integral part of sustainable development, and practical means of doing so. Climate-related risks are already high for island communities, as well as for basic infrastructure, and are likely to increase considerably under current climate change scenarios, as well as under observed climate variability and extreme events. Studies have shown that, for some infrastructure projects, it is possible to avoid most of the costs attributable to climate change, and to do so cost-effectively; climate proofing undertaken at the design stage of a project is one approach to achieving this. It is also notable, however, that the costs involved in this approach should still be considered eligible activities for funding under the various adaptation funds.
(b) Opportunities, gaps, needs, constraints, and barriers.
The lessons learned from the various projects and programs in the Pacific can be summed up as follows. In the past, most studies of adaptation options for Pacific SIDS largely focused on adjustments to sea-level rise and the storm surges associated with tropical cyclones. There was an early emphasis on protecting land through 'hard' shore-protection measures rather than on 'soft' measures, or on other options such as accommodating sea-level rise or retreating from it, although the latter has become increasingly important on continental coasts. Later vulnerability studies conducted for selected small islands show, however, that the cost of overall infrastructure and settlement protection is a significant proportion of GDP, and well beyond the financial means of most small island states. More recent studies, since the IPCC TAR, have identified major areas of adaptation, including water resources and watershed management, reef conservation, agricultural and forest management, conservation of biodiversity, energy security, an increased share of renewable energy in the energy supply, and optimized energy consumption. The emphasis has thus become more broad-based and looks at climate change impacts from a more comprehensive perspective.
From a systemic perspective these lessons direct Pacific SIDS and their communities, within their means and with international technical and financial support, to:
- increase the ability of islands’ physical infrastructure to withstand the impacts of climate change. For example, building and zoning codes that seek to climate-proof infrastructure;
- increase the flexibility of potentially vulnerable systems that are managed by Governments or communities, through adjustments in management practices, such as changes in use or location;
- enhance the adaptability of vulnerable natural systems, by reducing stresses due to non-climatic effects, such as pollution impacts on coral reefs, and improving overall resource management practices;
- reverse trends that increase vulnerability, by reducing human activity in vulnerable areas, preserving natural systems that protect against hazards (for example by preventing sand mining), and ensuring that the incidence of "scoring own goals" is reduced;
- improve public awareness and preparedness by informing the public about risks and possible consequences of climate change, setting up early-warning and monitoring systems for extreme weather events, and by developing overall communications strategies that make climate change science accessible to the average citizen.
But that is perhaps where some of the biggest gaps exist. The lessons learned present major challenges for Pacific SIDS to address. Since proposed adaptation strategies have focused on reducing vulnerability and increasing the resilience of systems and sectors to climate variability and extremes through mainstreaming adaptation, there is a need to ensure appropriate participatory modalities for these strategies. The early projects carried out in the context of the first national communications allowed for in-depth community participation, mainly because only small, site-specific examples could be studied under the limited funding available. For a broader nation-wide adaptation strategy to follow similar patterns would require some adjustments by many Pacific SIDS Governments and would certainly require technical and financial assistance. Consultative practices vary greatly throughout the region and have deep political-cultural roots. Particularly for the archipelagic and multi-island jurisdictions, practical issues impede conducting such in-depth consultations with all their communities. The costs of transportation and lodging, and the possible need for outside expertise, make the development of a national adaptation strategy a daunting and expensive task if it is to be done comprehensively.
That being said, many Pacific SIDS are developing national sustainable development strategies (NSDS) or their equivalent. Those processes would provide opportunities to also include adaptation to climate change as part of the overall sustainable development strategy. Additional financing would need to be made available through international assistance, as this would entail additional costs to that allocated for the NSDS.
In addition to work on adaptation in the region, serious gaps exist in the scientific and meteorological work that the region requires in addressing climate variability and predicting extreme events, and which are of direct relevance to planning and implementing adaptation.
In response to interest from the regions, WMO embarked on work to assist SIDS in all regions to access the GCOS network. In the Pacific, PI-GCOS has been in existence since 2002, with a steering committee that formulated its Action Plan and Implementation Plan.
Under the latter, a list of 31 projects was identified (with initial indicative budgets) to meet needs in areas ranging from research and policy development to technical capacity building in observation networks and enhancement of operational early warning systems.
Its main achievements to date have been the enhancement of capacity in nine Pacific SIDS in seasonal climate prediction, the rescue and management of historical climate data and improved access to those data, as well as a marked improvement in the maintenance of, and increased output from, GCOS-identified GUAN and GSN stations in the Pacific. These achievements have been pursued in ways that build local capacity, with due consideration for the sustainability and appropriateness of the work.
It is thus a major contributor to cooperation and partnership for climate change work, particularly in taking stock of, and supporting, the technical and scientific needs for climate information and applications. At its formative meetings in 2000–2003 the then PI-GCOS Steering Committee\(^1\) decided to prepare project proposals with concrete and achievable targets, and with full budgets. These include pilot projects assessing the impacts of climate variability and change on ocean and island ecosystems, expansion and enhancement of climate prediction, along with operational training programmes to incorporate some of the new knowledge gained from this research within national climate centers of Pacific SIDS. Unfortunately, most of the key projects identified have not received funding, and this remains a major barrier to work in the region.
The Implementation Plan reaffirms that PI-GCOS is intended to be a long term, user driven operational system capable of providing the comprehensive observations required for monitoring the climate system, for detecting and attributing climate change, for assessing the impacts of climate variability and change, and for supporting research toward improved understanding, modelling and prediction of the climate system. Its nesting within the climate change programme of SPREP ensures that the gaps in scientific knowledge and information in this area are addressed and that it provides and builds linkages across to other areas of efforts in climate change.
(c) Possible ways to develop and better disseminate methods and tools.
The development of adaptation methods and tools has been an ongoing effort in most Pacific SIDS, but has until recently not been recognized as such, especially not in the context of climate change adaptation. Island communities have adapted to changing circumstances and have developed traditional means of
\(^1\) Then known as the Pacific Islands Regional GCOS Implementation Team (PIRGIT) now known simply as the PI-GCOS Steering Committee (SC).
coping in past generations. However, those adaptations occurred within often-manageable timeframes, and the scope and pace of climate change reinforce the need for more rapid development of tools and options for communities, given the urgency of action on climate change adaptation.
SPREP recognizes the work done by the Expert Group on Technology Transfer on adaptation technology. However, it is also clear that a broader package of training and capacity building, as well as research on local level modification of technologies is needed. That would allow practitioners in different sectors in Pacific SIDS to use technologies to plan for and implement adaptation in their communities.
In this regard, SPREP is interested in the proposal from AOSIS, and which was echoed during the Dialogue on Long-term Cooperative action to address climate change in Bonn in May 2007, that an Adaptation Experts Group, similar in mandate and structure to the EGTT, be set up within the FCCC process.
A special report from the IPCC on the climate change implications for SIDS was also proposed in that Dialogue Meeting, and should also be considered as an option for pursuing other ways and means of steering the development of adaptation methods, tools and technologies.
Pacific SIDS have on numerous occasions called for the establishment of more targeted cooperation and technical advice from the UN system, not only for climate change but for sustainable development in general. The rapid growth of information and communications technologies is dramatically changing the socio-economic and political structures of most nations. One of the main constraints for Pacific SIDS in particular is the lack of access to this Information and Communications Technology (ICT). There is a strong recognition that proper deployment of ICT must be one of the priorities for sustainable human development. Therefore, Pacific SIDS, through AOSIS, welcomed the establishment in 1998 of SIDSnet, which provided an important basis for further action in its first phase. Phase Two of SIDSnet was launched by the UN in 2001, but the project ran out of funding in 2006. It has remained dormant since then.
There was a very good rationale for having this dedicated network for SIDS. Not all countries were, or are, enjoying these opportunities in ICT, and very few people in Pacific SIDS can take advantage of them. Many factors constrain both the use and the growth of ICT in these countries, including limited bandwidth, minimal access to computers and computer peripherals, too few telephone lines, a lack of technical and managerial expertise, and too little private sector involvement. There is a need to strengthen local capacity to gain beneficial use of the Internet, related information technologies and management practices. Otherwise, those countries could be marginalized in the new globalization fostered by ICT.
SPREP is raising this issue in the context of the planned workshop for SIDS on Article 6 of the Convention, where inter alia the re-structuring of CC:iNet is going to be discussed. It is very likely that the participants at that meeting will call for a strengthening of FCCC outreach to SIDS by giving this network a role more like that of SIDSnet, as an information gateway for SIDS, and by improving information quality and quantity to facilitate the activities of AOSIS and other beneficiaries such as governments, their related agencies and institutions, civil society non-governmental organizations and the private sector. Using such information gateways would greatly assist in better disseminating these tools for adaptation.
Nevertheless, there are other traditional methods that should still be utilized such as mentoring, training the trainers and specialized workshops on adaptation for climate change country team members from Pacific SIDS.
(d) Training opportunities.
Training and awareness raising are of great importance in planning adaptation, but a major gap exists through the inability of many Pacific SIDS Governments to retain personnel trained in climate change matters. Personnel trained as part of enabling activities or other projects learn valuable skills that are in short supply in the region. Certain specialist professions such as coastal zone managers or coastal engineers are mostly unavailable to Pacific SIDS Governments. This is of course a wider problem than climate change responses and relates to the overall national and regional strategies for education for sustainable development, which is the subject of on-going debate in the region.
In addition, the assessment and transfer of environmentally sound technologies for adaptation to climate change poses a complex challenge for Pacific SIDS. First, there is considerable uncertainty regarding site-specific vulnerability, and consequently regarding what adaptation to the impacts of climate change will be required at the local level. This uncertainty carries over to the identification of appropriate adaptation measures, options and technologies, as well as of the stakeholders that are affected. A national and local community discussion needs to be encouraged on hard technologies, which may not be appropriate, versus the importance of soft technologies. This is particularly true since there are potential synergies between mitigation and adaptation, which may have either positive or negative effects. For example, work on bio-fuels has highlighted the potential for soil conservation as an adaptation measure to be integrated into what is largely a mitigation activity.
A major opportunity for furthering the development of appropriate climate change skills in Pacific SIDS resides in the University Consortium of the Small Island States (UC-SIS). Established in the context of the Mauritius International Meeting on SIDS in 2005, the consortium brings together 5 regional and national SIDS universities and builds on their relative strengths to offer enhanced educational opportunities for SIDS. The FCCC should consider liaising with the UC-SIS for the purpose of identifying training opportunities.
Furthermore, in 2005 the Pacific Islands Forum Leaders endorsed the Pacific Regional Framework for Action on Climate Change, which established a series of priorities on climate change for the region. These priorities include:
1. Implementing adaptation measures
2. Contributing to mitigation of GHG emissions
3. Improving our understanding of climate change
4. Education and awareness
5. Improving decision making and good governance
6. Partnership and cooperation
Under each of these priorities it is envisaged that project activities will be undertaken by PICs nationally and regionally, supported by the relevant regional organizations. In addition, it should be noted that in order to ensure appropriate coordination of activities under the Framework, a Pacific Climate Change Roundtable (PCCR) should be established. Since responsibility for the Framework’s regional and international actions can and should be shared by the region’s organisations, SPREP has been called upon to convene regular meetings of the PCCR, inclusive of all regional and international organizations with active programmes on climate change in the Pacific region, to:
• help update the Pacific SIDS on regional and international actions undertaken in support of the Framework;
• voluntarily lead or collaborate in implementing and monitoring actions relevant to their priorities and work programmes; and
• agree on mechanisms for measuring progress, identifying difficulties, and addressing actions needing special attention.
The PCCR should meet at least once a year, and should also afford the Pacific SIDS the opportunity to prepare for the annual meetings of the Conference of the Parties to the UNFCCC. This would afford the region significant opportunities for training and awareness raising, and for sharing information on best practices and new and emerging adaptation methods, tools and technologies. |
Volunteer Handbook
Safety First!
UM Upper Chesapeake Health
Volunteer Services Association
compassion | discovery | excellence | diversity | integrity
# Table of Contents
## INTRODUCTION:
- Emergency Codes:
  - Initiate an EMERGENCY RESPONSE
  - All Codes
- Our Vision
- Our Mission
- Working Toward Equity, Diversity, and Inclusion
- Thank You for Supporting Your Community Hospitals
- Volunteer Services Team
- UM Upper Chesapeake Health Leadership Team
## VOLUNTEER INFORMATION:
- UM UCH Volunteer Dress Code
- Hand Hygiene
- Volunteer Safety and Security
- Interim Life Safety Measures (ILSM)
- Personal Protective Equipment
- Occupational Health Exposures
- Cough Etiquette & Glove Use
- Transmission and Isolation Precautions
- Harm Prevention Strategies:
  - Infectious Waste
  - Body Mechanics to Prevent Injuries
- Rapid Response & Codes:
  - Rapid Response
  - Code YELLOW
  - CODE PINK – Infants and Children
  - Code RED
  - Fire Extinguisher Use
  - CODE SILVER – Active Shooter
  - Bomb Threat
- Chemical Safety
- HIPAA Privacy & Confidentiality
- Tips for Good Interaction with People
- THE JOINT COMMISSION STATEMENT
- CONFIDENTIALITY AGREEMENT – Between Volunteers and UM UCH
- Release and Waiver of COVID-19 Liability
As the front cover of this handbook suggests – Safety First! We want all of our volunteers to enjoy their work and be safe. This section covers Emergency Codes, which are important for volunteers to understand as they are the hospital’s “shorthand” for announcing different types of emergencies. You will get a card to wear with your badge that lists the codes and what they mean, as well as some important phone numbers to have when you are volunteering with us.
It’s important that you know something about our Mission, Vision and Values, as UM Upper Chesapeake is a proud member of the University of Maryland Medical System. Our mission, vision and values help guide us in our business of caring for people in our community.
Finally, we would like to thank you for volunteering and put some faces to names for both our Volunteer Services team as well as the leaders of UM UCH.
How to start an Emergency Response
THE NUMBER TO CALL on any Hospital Telephone to initiate EMERGENCY PROTOCOLS
Dial 3333 on any landline or call 443-643-3333
GIVE the operator your name & location and tell the nature of the emergency you are reporting!
| Emergency Code | For ALL Codes dial 3333 |
|----------------------|----------------------------------------------------------------------------------------|
| **Code Red** | Alert to fire, smoke or excessive heat |
| **Code Blue A** | Adult – Cardiac Arrest (Respiratory Arrest) |
| **Code Blue C** | Child (<8 years old) – Cardiac Arrest (Respiratory Arrest) |
| **Code Pink** | Attempted/Actual Infant or Child Abduction |
| **Code Green** | Disruptive or Combative Person |
| **Code Silver-Active Shooter** | Hostile Person/Possible Weapon – Secure-Run-Hide-Fight-Return |
| **Code Purple** | Urgent Security Response needed |
| **Code Yellow** | Internal or External Disaster – Emergency Operations Implemented |
| **OB Stat** | Obstetrician assistance needed Immediately |
| **Lockdown** | External threat – Stay Safe INSIDE |
| **Tornado Warning** | Tornado Warning in effect – Seek Shelter away from windows and doors |
| **Evacuation** | Evacuation Order is in effect – Evacuate Immediately |
For All Codes dial x 3333 on a landline – or 443-643-3333
For All Codes
• Remain calm; reassure patients and visitors of the code and follow team member directions
• Designated team members and security will report to code scenes
• Keep land lines open
• Hospital barrier doors will close; close all hallway doors
• Check the yellow Quick Reference Chart in your department for more information
Our Vision
We build upon our tradition of excellence in patient care and innovation, to be a national leader in the transformation of health care.
Our Mission
To purposefully advance the shared principles that are foundational to our work:
Compassionate, High-Quality Care
We are unrelenting in our dedication to compassionate, high-quality, patient- and family-centered care.
Commitment to Community
We are inherently entwined in the social fabric of our communities and demonstrate an unwavering commitment to the health and well-being of Marylanders.
Health Care Transformation
Leveraging our scale and geographical reach, we transform the way we deliver health care to bring more value to our patients and their communities.
Discovery-Based Medicine
Blazing new trails in medicine is inherent in us. We invest in and partner with those who are committed to the highest ideals of innovation, discovery-based medicine and health education.
The past few years have been a wake-up call to the inequities, injustices and systems in place that create barriers for many people based on race, ethnicity, background, beliefs, disability, gender, sexual orientation and gender identity.
During this time, our nation began to acknowledge the injustices that many have long endured, and we saw the global pandemic’s more severe impact on the most vulnerable members of our society.
At University of Maryland Medical System (UMMS), we have been evaluating how we address equity, diversity and inclusion at all levels and locations of our organization.
We have developed a multi-year plan, backed by a $40 million investment, that outlines our commitment to equity in care delivery, diversity in our workforce, meaningful investments in local communities, and expanded opportunities for minority-owned businesses.
Thank You for Supporting Your Community Hospitals
Thank you for supporting your local community medical campuses of University of Maryland Upper Chesapeake Health (UM UCH). Our volunteers work in many areas from inpatient to outpatient bringing warmth and encouragement to both our patients and team members. Whatever time commitment you make, please know you have an impact. Our goal is to match you to a volunteer opportunity that complements your interests and brings you satisfaction and joy.
Your safety, as well as patient and team member safety, is a top priority. In order to provide you the safest environment, you need to be aware and observant of safety issues that may be encountered on our campuses. This handbook outlines the important safety issues you need to understand, and it is a helpful resource for your annual education review as well.
The Volunteer Services department is a resource of support for you, so do not hesitate to contact our department for anything. The UM Harford Memorial Hospital Volunteer Office is open weekdays – 6:30 am - 3 pm. 443-843-5355
The UM Upper Chesapeake Medical Center Volunteer Office is open weekdays 6:30 am – 5 pm. 443-643-1725
Thank you for your service to our patients and the community. If we can help you in any way, or if you have any ideas to improve our program, please share!
UM UCH Volunteer Services Team
Martha Mallonee
Director, Volunteer Services and Community Engagement
443-643-1730
Deb Bedard
Manager, Volunteer Services
443-643-1732
Debbie Stout
Volunteer Services Coordinator
443-843-5355
Sandy Schissler
Volunteer Services Assistant
443-643-1725
UM Upper Chesapeake Health Leadership Team
Elizabeth Wise, FACHE, MSN, MBA
President/CEO
Michelle D’Alessandro, DNP, RN, NEA-BC
Chief Nursing Officer
Fermin Barrueto, M.D., MBA
Sr. Vice President, Chief Clinical Officer
Colin Ward, DrPH, MHS
Senior Vice President
Chief Operating Officer
Faheem Younus, M.D. FACP, FIDSA, CPE
Vice President of Quality/Chief Quality Officer
Mark Shaver, MBA
Senior Vice President, Strategy, Physician Services and Business Development
Marco Priolo
Vice President
Chief Financial Officer
Antonio DePaolo
Vice President
Transformation & Continuous Improvement
Toni Shivery, MS, SPHR, SHRM-SCP
Vice President Human Resources
Ken Ferrara, MBA
Vice President/Executive Director of the UCH Foundation
Stephanie Dinsmore, MBA, CPA
Vice President Physician Services at Upper Chesapeake Medical Center
Volunteer Safety Information
This section offers some essential information and expectations for the volunteer such as how to dress when you are working with us, how important it is to practice hand hygiene, safety and security issues around the campus and within the areas you may find yourself working.
There is some detail on proper cough etiquette and glove use, on avoiding patient isolation areas, and on what to do if you encounter infectious waste such as blood or other bodily fluids.
We also revisit the codes in more detail so you can understand what to do if you are involved in a code.
UM UCH Volunteer Dress Code
• A neat, clean, professional appearance is required.
• For the easiest compliance, we will provide 2 branded shirts and ask that you wear them with tan or black slacks and comfortable, clean, non-absorbent, closed-toe shoes.
• Always wear your identification badge and the Emergency Code “badge buddy” when volunteering.
• No jeans, sweat pants, shorts, flip flops or sandals.
• Hair should be neat, clean, and pulled back from your face.
• Avoid long necklaces, loose bracelets and long earrings due to the type of area where you work. These items can get caught in machinery, personal protective equipment, hospital blankets, or accidentally pulled by patients you are assisting.
• Avoid perfume and cologne (strong fragrances in general) while volunteering. Many people have allergies to different scents.
• The supervisor of your work area may have other requirements for shoes, head protection, etc. to ensure that you are comfortable, safe and do not ruin your clothes while volunteering. Please adhere to those requirements.
• If you have any questions or need guidance, please let us know.
The most basic and essential infection prevention practice!
Hand Hygiene
• Required – Hand Washing
• Before entering and when leaving patient’s environment
• Before wearing and after removing gloves, for both sterile and non-sterile activities
• After contact with objects in patient’s environment
• Why?
• Germs can be spread by contaminated hands and cause outbreaks in patient environment that is already frequently contaminated
• Gloves may have microscopic tears
• Contaminated gloves could contaminate hands during removal
Hand Hygiene
• Alcohol based hand rub (preferred)
– Dispense one pump of product and rub hands until dry
– Rub all surfaces of your hands: wrists, palms, tops, fingertips, and thumbs
• Soap and water required:
– When hands are visibly soiled
– Rub all surfaces of hands for 15 - 20 seconds with soap, then rinse
– Dry hands first, then use same towel to turn off faucet
Hand Hygiene Technique
Soap & Water
• Turn on water and adjust temperature—avoid using “hot” water.
• Wet hands and wrists thoroughly, pointing fingers toward the bottom of the sink to ensure maximum hand coverage.
• Dispense soap onto hands by swiping hands under dispenser.
• Scrub each hand with the other, covering all surfaces of hands and fingers, and under nails, creating as much friction as possible; continue scrubbing for 15-20 seconds.
• Rinse hands thoroughly by holding them under running water with elbows higher than hands so water can flow off hands into the sink.
• Dry wrists and hands with paper towel, working from wrists to fingertips.
• Use paper towel to turn off faucets.
• Dispose of paper towel in waste receptacle.
Hand Sanitizer
• Assure hands are free of any visible debris.
• Apply only enough product to cover all surfaces of hands and fingers.
• Rub hands together, covering all surfaces of hands and fingers, and allow to air dry.
If you have either direct or indirect patient contact (for example, handling patient care supplies) – and this includes food services personnel:
- Natural nail tips must be kept to no longer than ¼ inch in length.
- Hands must be clean and gloved when directed.
- Artificial nails may not be worn by those who have direct patient contact or handle sterile supplies.
Always wear your photo ID Badge.
Ensure that others around you have a badge; visitors should have a sticker badge as well.
Ask for Security to escort you to your car if you are uncomfortable walking to your vehicle, especially after dark.
Keep the doors of your car locked with windows up.
Keep valuables in your car out of sight.
Keep yourself and other Team Members safe by being aware of your surroundings at all times. If you see something or someone suspicious, notify Security Services at 443-843-5314 at UM HMH, and 443-643-2444 at UM UCMC.
Help Keep Hallways and Fire Exits Clutter Free!
Oxygen Cylinder Storage
FULL = ≥ 2000psi
Needle in GREEN
PARTIAL = > 500psi <2000psi
Needle in WHITE
EMPTY = ≤ 500psi
Needle in RED
It’s very important that oxygen tanks be stored properly according to their “fill”
• PARTIAL Tanks (>500psi <2000psi) are stored in designated racks labeled “PARTIAL”
• LOCATED ON THE TOP RACKS
• FULL Tanks (≥2000psi) are stored in designated racks labeled “FULL”
• LOCATED ON THE BOTTOM RACKS
• EMPTY Tanks (≤ 500psi) are stored in designated racks labeled “EMPTY”
• LOCATED IN THE SOILED UTILITY ROOM
Security and Shuttle Bus Services
Prevention is Key!
Security strategic posts
- Main lobbies
- Emergency Department entrance
Regular Patrols / Video Surveillance throughout each campus
Emergency buttons are located throughout the parking areas. Look for the blue lights.
Telephone numbers: UM UCMC – 443-643-2444, UM HMH – 443-843-5314
(please put the numbers in your phone, but they are also on your emergency badge)
Shuttle Bus makes rounds at UM UCMC throughout the day from Westgate Lot to the front entrance of Ambulatory Care Center and the Main Hospital Entrance weekdays from 5:30 am – 9 pm.
Interim Life Safety Measures (ILSM)
The Joint Commission tells us that when we have known disruptions to usual fire safety features, we must implement Interim Life Safety Measures (ILSM).
Construction activities that interfere with Life Safety, such as those that block hallways, change exit routes or interfere with fire safety systems, are considered such disruptions.
It is important that everyone pay close attention to signage, emails from your supervisor, and any other ILSM communications.
Examples of ILSM Actions
**Disruption:**
Exit paths are temporarily changed
**ILSM:**
Know changes to escape routes (signage posted), make sure they stay clear.
**Disruption:**
Fire detection, suppression or alarm systems are shut down for needed work.
**ILSM:**
Rounds are made every two hours, usually by Security, to look for possible fire safety issues, control the storage of combustibles and ensure emergency exits are unobstructed.
**Disruption:** The end of a hall is blocked, making a temporary dead-end.
**ILSM:**
Pay attention to signage informing occupants of the temporary condition and help remind patients and visitors in that area of that condition.
Personal Protective Equipment (PPE)
- **Gloves** – Use when touching blood, body fluids and non-intact skin
- **Gowns** – Use when contact of clothing/exposed skin with blood/body fluids is anticipated
- **Mask and goggles or a face shield** – Use during activities likely to generate splashes or sprays of blood/body fluids
- PPE must be removed in a manner that prevents self-contamination.
Keep Yourself Safe From Germs
The Occupational Safety and Health Administration states that the following are prohibited in work areas where there is a likelihood of exposure to blood or other potentially infectious materials:
- eating
- drinking
- applying cosmetics or lip balm
- handling contact lenses
Be sure that you are following this directive in clinical areas, patient care areas, desks/counters and medication carts/areas.
IT’S THE LAW and it protects YOU!
What is An Occupational Exposure?
An occupational exposure occurs when blood or other potentially infectious materials come into contact with the skin, eyes or mucous membranes.
In the event that you receive a needle stick, are cut by contaminated glass, or are exposed to blood or a potentially infectious body fluid, report immediately to the Occupational Health Nurse, your department Supervisor, and Volunteer Services at x5355 at HMH and x1725 at UCMC.
A “Report of Occupational Injury or Illness” MUST be filed and designated procedures must be followed as defined in the Exposure Control Plan.
Most exposures do NOT result in HIV infection. The risk of becoming infected with HIV after a needle stick or cut from an HIV positive source is about 1 in 300.
What Should I Do if I’m Exposed?
An occupational exposure is considered a medical emergency. You must contact OCCUPATIONAL HEALTH immediately (443-643-3428) so that evaluations of your exposure can occur and medical treatment (if applicable) can be provided.
– If it is after 4pm Monday – Friday or on a weekend, contact the Administrative Coordinator (AC on call) on your emergency badge
Wash the exposed area with soap and water for 3 minutes and let it bleed freely. If you are splashed in the eyes, mouth or nose, rinse the area thoroughly with water.
Standard Precautions
Required by Occupational Safety & Health Administration (OSHA)
Assumes that every person is potentially infected or colonized with an organism that could be transmitted in the health care setting. Infections acquired this way are called Hospital Acquired Infections (HAIs).
- Wear Personal Protective Equipment (PPE) appropriately
- Use of a safe eating/drinking area
- Be cautious of handling anything sharp such as needles (use red sharps box for disposal)
Rules for Glove Use
• Gloves do not replace hand hygiene
• Hand sanitizer must be applied or hand washing completed:
• Before gloves are put on
• Immediately after gloves are removed
• Whenever gloves are changed
• Gloves are only to be used for a single task
• Gloves must always be changed between tasks
• Example: gloves are to be changed and hand hygiene performed after sanitizing a bed and before sanitizing a wheelchair
• Gloves are to be removed and hand hygiene performed before exiting a patient room
• If a glove is damaged in use, both gloves must be removed and hand hygiene performed before putting on new gloves
• Do not wash gloves
How to Put on and Take off Gloves
Put gloves on last
Select correct type and size
Use hand sanitizer
Insert hands into gloves
If wearing gown, extend gloves over gown cuffs
To remove gloves:
Grasp outside edge near wrist
Peel away from hand, turning glove inside-out
Hold in opposite gloved hand
Slide ungloved finger under the wrist of the remaining glove
Peel off from inside, creating a bag for both gloves
Discard and use hand sanitizer
Cough Etiquette/Respiratory Hygiene
• Applies to everyone!
• Do not work or visit when sick!!
• Cover mouth and nose when coughing or sneezing using either a tissue, an elbow, or by donning a mask to contain secretions, followed by hand hygiene
Transmission-based Precautions
Also known as “Isolation”
Based on known or suspected pathogen(s) harbored by patients
The following types of precautions are used at UM UCH:
– Contact
– Enhanced Contact
– Droplet and Contact
– Airborne
– Enhanced Droplet
– And combinations of the above
Observe signs on patient room doors and follow instructions!
Isolation Precautions
Please Read Every Sign on a Patient’s Room!
Volunteers should NOT go into isolation rooms!
Please alert an EVS team member to address any fluid spills you see, especially blood. They are the experts on proper waste clean up. Call extension UCMC x3919 or HMH x6131.
- Blood or body fluids should be addressed promptly.
- Red bags are used for infectious waste clean up.
- Linens are collected in blue bags and double-bagged when heavily contaminated.
USING PROPER BODY MECHANICS AND MOVING TECHNIQUES CAN KEEP YOUR BACK HEALTHY AND HELP PREVENT INJURIES.
PRACTICE HEALTHY BODY MECHANICS:
Using good posture when you stand, sit and walk helps maintain the natural “S” curve of your back.
**SITTING** - Keep your feet rested on the floor with hips and knees bent at a 90 degree angle.
**REACHING** - Keep feet shoulder width apart, get close to the item you are reaching for, and DO NOT TWIST at the waist to reach the object – MOVE your entire body through the reach.
**LIFTING** - Size up the load before lifting, keep your back straight and lift by bending and straightening at your knees and hips. Keep the load close to your body. If possible get help, or use a cart or a lifting device when moving an object. When lifting a patient, get help to avoid patient and self injury.
**AVOID INJURY** – Volunteers should never be asked to lift a patient.
• If standing or sitting for prolonged periods of time change position and/or shift weight every 10-15 minutes.
• It is better to push something than to pull it.
• Stress and poor diet can contribute to back problems; eat healthy and get some exercise.
• Keep your work environment free of hazards and clutter.
• Help each other to limit injury to you, your co-workers and, of course, the patients we serve!
Rapid Response Team (RRT)
- Team arrives within 5 minutes of being called
- Assess and recommend treatment
- The Rapid Response Team responds to ALL areas on campus including parking lots
The Rapid Response Team is made up of nurses, respiratory therapists, and doctors from the ICU or ED who come to assist during a Rapid Response.
ANYONE can call a Rapid Response
Dial 3333
Patients and Families can activate RRT
Patients and families DO NOT need permission from the care team to activate a Rapid Response.
Dial 3339
Rapid Response Checklist
- Does the patient need directions to the ED (arrived at an incorrect entrance)? → Escort the patient to the ED, walking or by wheelchair, or call the ED to see if someone can come get the patient
- Does the patient need a wheelchair because they cannot walk? → Escort the patient to the ED in a wheelchair if you can, or call the ED and ask if someone can come get the patient
- Is the patient bleeding large amounts (puddles on the floor)? → Call x3333 Rapid Response
- Is the patient having difficulty breathing and unable to complete sentences? → Call x3333 Rapid Response
- Is the patient having active chest pain and appearing uncomfortable? → Call x3333 Rapid Response
- Any patient with signs of a STROKE (slurred speech, weakness on one side of the body, drooping of smile, etc.)? → Call x3333 Rapid Response
Code Yellow - a code that is called to alert team members to start preparing for normal operations to be impaired due to a pending emergency or an internal or external disaster, such as:
- Mass casualty/patient surge
- Power outage/generator failure
- Blizzard/inclement weather
- Malware attack/computer virus
- Anything that seriously impacts normal hospital operations
EOP (Emergency Operations Plan) – An EOP provides the structure and processes the organization utilizes to respond to, and initially recover from, an event. It follows the four phases of Mitigation, Preparedness, Response, and Recovery.
Fun Fact: The initial COVID Response was the longest Code Yellow EVER recorded.
Code Pink – Dial 3333 landline (443-643-3333)
Code PINK is an actual or attempted infant or child abduction.
Suspicious Behavior:
• Person(s) is/are physically carrying an infant instead of using a bassinet.
• Person(s) is/are attempting to leave the facility with an infant on foot, rather than by wheelchair.
• Person(s) is/are carrying large packages (i.e. gym bag), particularly if they are "cradling" or "talking" to it.
Please respond immediately to the nearest exit or hallway. BE ALERT for any suspicious person(s) carrying any package – not just an infant or child!
Notify Security Services IMMEDIATELY (UCMC x3401, HMH x5022) if you observe any such behavior. If the person is attempting to leave the building, try to prevent them from leaving if it is safe to do so. Ask for ID, do not follow them outside the building, and note as many physical details as possible.
Never Delay in Reporting a Fire or Seeing Smoke!
SEE FIRE --- INITIATE Code Red
SEE SMOKE --- INITIATE Code Red
SMELL SMOKE?
• Attempt to locate the origin of the smell.
• If you investigate and think the smoke is from a fire, call 3333 and activate the pull alarm.
• Notify the department supervisor if you cannot locate the source of the smell or do not think it is related to a fire.
By knowing what to do and responding effectively, you enhance our Fire Protection Plan and provide a safe environment for everyone.
CODE Red – Alert to Fire, Smoke or Excessive Heat
Fire Emergency Response
R Rescue
A Alarm
C Contain
E Extinguish
Fire Extinguisher Use
**P**ull the pin
- Pull hard enough to break the seal
- Do this before you approach the fire
**A**im at the base of the fire,
- If the fire is in a container, aim into the container.
- Aim into openings of electrical equipment.
**S**queeze the handle to discharge
- First discharge 6’ to 10’ from the fire.
- Squeeze on and off, as needed.
**S**weep side to side
- As needed to get at all of the fire.
TYPES OF FIRE EXTINGUISHERS
TYPES OF FIRES
CLASS A - Wood, paper, cloth, trash, plastics
CLASS B - Oil, gas, grease, flammable liquids
CLASS C - Electrical, energized electrical equipment
Most fire extinguishers will have a label telling you what kind of fire the extinguisher is for.
Dry Chemical Extinguisher (ABC)
Carbon Dioxide Extinguisher (BC)
HOW TO RESPOND
WHEN LAW ENFORCEMENT ARRIVES ON THE SCENE
1. HOW YOU SHOULD REACT WHEN LAW ENFORCEMENT ARRIVES:
- Remain calm and follow officers' instructions
- Immediately raise hands and spread fingers
- Keep hands visible at all times
- Avoid making quick movements toward officers such as attempting to hold on to them for safety
- Avoid pointing and/or yelling
- Do not stop to ask officers for help or directions
- When evacuating, just proceed in the direction from which officers are entering the premises
2. INFORMATION YOU SHOULD PROVIDE TO LAW ENFORCEMENT OR 911 OPERATOR IF KNOWN:
- Location of the active shooter
- Number of shooters, if more than one
- Physical description of shooter(s)
- Number and type of weapons held by the shooter(s)
- Number of potential victims at the location
The BOMB THREAT PLAN advises Team Members of the steps to take in the event of a bomb threat. As a review, these are the steps you would take if you receive a BOMB Threat over the telephone:
1. Try to keep the caller on the phone as long as possible, and
2. Ask questions to gather information, such as "Where exactly is the bomb located?";
3. Write down as much information as you can remember about the caller as well as specific information regarding the bomb;
4. Dial, or have a co-worker dial, 3333 immediately to report the situation.
While it is unlikely that volunteers would handle chemicals, this safety information is helpful for any chemicals that might be in your household.
**OSHA Hazard Communication**
GHS is the Globally Harmonized System of Classification and Labelling of Chemicals. Prior to 2015, the country where a chemical was manufactured decided whether the chemical was hazardous, what PPE was required, and the first aid measures in case of exposure. OSHA adopted GHS so that chemical information would be the same worldwide.
A label must include the following information:
- Product identifier
- Pictogram(s)
- Signal Word
- Hazard Statement
- Precautionary Statement(s)
- Name, address, and telephone number of the manufacturer
The product identifier on the label should match that used on the safety data sheets that are located on the units where the chemicals are being used.
Control / Minimize Exposure to Chemicals (Even at Home)
✓ Know the risks in the department you are assigned. At home, understand what products should not be used together.
✓ Ask your supervisor if you don’t know.
✓ Keep your work area clean.
✓ Practice safe work habits.
✓ Use Personal Protective Equipment (PPE), if needed. Use gloves for liquids and masks for aerosols. Protect your clothes using a gown.
✓ Don’t eat, drink, or apply cosmetics around hazardous products.
✓ YOU need to know what to do for a spill of any chemical used in your department (or home).
✓ Each department with hazardous materials is responsible to keep spill kits readily accessible and fully stocked.
This final section reviews privacy and confidentiality, which are critical for anyone assisting in the health care space. HIPAA stands for the Health Insurance Portability and Accountability Act, established in 1996. We also provide some tips on patient and visitor interaction. Hospitals are stressful places: you meet people who are worried, upset, sick, and/or in pain, and they are often not at their best. It is important that you be patient and kind. Help as much as you can, and know the team members in your work area who can take it to the next level if you are unable to help.
The Joint Commission is a principal oversight group for hospitals in the US, and every few years hospitals get a thorough review from the Joint Commission. You may wonder about the value of what you bring to a hospital unit in your volunteer work. You provide a critical service in helping us stay compliant with a myriad of safety considerations, whether you are checking for expired goods, ensuring proper storage of items, or keeping important supplies within arm's reach of a health care provider.
You will be asked to sign a confidentiality agreement as part of our commitment to volunteering. It is included at the end of this handbook in full. You can keep this copy just so you can remind yourself what you have agreed to.
Patient Choice:
• At the time of admission, patients are provided with information about HIPAA - the Notice of Privacy Practices.
• Patients may choose to be CONFIDENTIAL - these patients will not be listed in the Hospital Patient Directory and we MUST keep their presence in the hospital CONFIDENTIAL.
• Patients electing to be confidential will have their name replaced with asterisks **** in the Patient Directory. **Do not offer/suggest that a patient may be here but is "confidential"**.
• Most patients admitted to UM UCH are not listed as confidential. Only those patients that wish to be confidential are listed as such.
• HIPAA covers: all printed, electronic and spoken information regarding a patient’s medical record.
Protected Health Information or PHI.
• PHI is any information, whether spoken, electronic or written, that relates to the past, present, or future physical or mental health, or condition of an individual, as well as the provision or payment related to that health care.
• PHI is health information created or received by a covered entity*, regardless of form, that could be used directly or indirectly to identify the individual.
• Covered entities include hospitals, care providers, designated family members, third party payers, such as insurance companies, and anyone who processes health information.
Safeguards protect the privacy and confidentiality of our patients
- Ensure that information is kept out of public view/access.
- Maintain the confidentiality of your computer access codes - log off computers when you are no longer able to secure the computer information and NEVER share passwords.
- Routine audits of electronic medical record access are done to ensure that patient privacy is protected.
- Team Members that do not maintain patient confidentiality and/or do not adhere to UM UCH HIPAA policies and procedures are subject to the disciplinary process and possible termination.
Privacy/Confidential: Security in Your Work Area
• If using a computer as part of your job, always log out when leaving a computer workstation unattended, and at end of shift.
• Use unique passwords and change them frequently. Never share a password.
• Secure your handbag in a locker, if provided, or other space that’s not left unattended.
• When using your badge to access areas, do not let people without an ID badge or sticker slip in behind you.
Patient Privacy Actions:
- Don’t take photographs in the hospital that could potentially capture a patient or their data.
- Don’t access a patient’s medical record without a patient care related reason.
- Don’t include patient related information via text or social media.
- Don’t send patient information without it being encrypted and secure.
- Don’t discuss patient information in public places where others can hear or with anyone outside the patient care team (the care team could include, Security, Billing, Case Management, Nursing and Physicians).
- Never access family or friends’ records even if they say it’s okay—Medical Records can assist with releasing records to you or the Patient Portal.
- Always be on the lookout for ways we can protect PHI; if you see something, report it!
Tips for Good Interaction with Guests, Visitors and Patients
MAINTAIN PRIVACY AND CONFIDENTIALITY: Knock as we enter a patient’s room; protect personal information by being careful of what we say and where we say it.
TAKE THE INITIATIVE: Find someone who can help if the customer’s need is not part of our regular job.
TREAT VISITORS AND ADULT PATIENTS AS ADULTS: Use words and voice tone that convey respect and consideration.
LISTEN AND ACT: Respond to complaints without blaming others or making excuses. Direct customers with concerns or complaints to the Guest Services Department.
SPEAK QUIETLY: Remember that noise annoys and shows a lack of concern and consideration for others.
APPLY TELEPHONE SKILLS: Remember that the Hospital’s reputation is “on the line” when we are on the phone; sound pleasant and be helpful; actively listen by repeating back to them what you think they said.
LOOK THE PART: Build confidence in the customer’s perception of our ability through appropriate dress and demeanor.
Tips for Good Interaction with Patients in the Hospital
HANDLE WITH CARE: Imagine that we are on the receiving end of a message or an action and give it the care and time we would want.
BREAK THE ICE: Make eye contact, smile, say hello, introduce ourselves, call the person by name or use ma’am or sir, and extend a few words of concern.
NOTICE WHEN SOMEONE LOOKS CONFUSED: Stop and lend a hand.
MAINTAIN DIGNITY: Give choices whenever possible; close curtains to assure privacy; treat the customer as if he/she were our family member or our friend.
TAKE TIME FOR COURTESY AND CONSIDERATION: Use kind words and polite gestures that make them feel special.
KEEP PEOPLE INFORMED: Explain what we are doing and what they can expect; reduce their anxiety by communicating what is happening.
ANTICIPATE NEEDS: Act on their behalf without waiting to be asked such as, “Would you like water?”
RESPOND QUICKLY: Remember that time passes very slowly for those who are worried or upset; keep in mind that delays are frustrating for those who need assistance or information.
Volunteers Can Understand Both Sides!
| Clinicians and hospital staff | Patients and family |
|------------------------------|---------------------|
| Know how the hospital works and how to get things done | Are strangers in this environment |
| Know who hospital staff are and what they do | Do not understand the system or culture |
| Are busy and under a lot of stress | Know about their body and life situation better than hospital staff |
| Want to provide high-quality and safe care | Do not know who different staff are and what they do |
| | May want family or friends to support them |
| | Are often in pain or uncomfortable, vulnerable, or afraid |
| | Are worried and want to do what they can for the patient (family members) |
| | Are aware that hospital staff are busy and may not want to bother them |
| | Trust hospital staff to provide safe and high-quality care |
The Joint Commission standards deal with quality of care issues and the safety of the environment in which the care is provided.
When an individual has concerns about patient care and safety in the health care facility that the facility has not addressed, he or she is encouraged to talk to the nurse manager on the unit. If not resolved, contact the PATIENT ADVOCATE:
UM Upper Chesapeake Medical Center – 443-643-2400
UM Harford Memorial Hospital – 443-843-5618
All Volunteers must agree to the following confidentiality considerations:
I understand that, as part of my job, I will learn information about University of Maryland patients, team members, and/or business. I understand that all protected health information and some team member and business information is considered confidential in nature and I have an obligation to protect this information from inappropriate disclosure. In addition, I must comply with the UM UCH Disclosure of Protected Health Information and Minimum Necessary Use or Disclosure of Protected Health Information policies.
THEREFORE, I agree to the following:
• I accept personal responsibility to protect confidential information from inappropriate disclosure without regard to the method by which it was accessed, even if it was obtained inadvertently.
• I understand that this information may concern, but is not limited to, patients, team members, operations, medical staff and business practices. I will not seek protected health information unless I have a need to know the information in order to perform my assigned job functions and am directed to do so by my supervisor.
• If I am unsure of the confidential nature of any information, I will contact my supervisor or the Privacy Officer for clarification.
• I will protect the privacy and confidentiality of all UM Upper Chesapeake Health patients during and after my employment/volunteer affiliation. This includes but is not limited to electronic, social media, written, and verbal forms of communication.
Protecting the privacy and confidentiality applies to any individual who I come into contact with whether an acquaintance, friend, colleague, neighbor, or relative of mine. I understand that Upper Chesapeake Health may routinely monitor and audit access to protected health information for appropriateness of access.
• I will maintain the confidentiality of any unique information system Password/PIN(s) that I may be assigned.
• I will not share my unique Password/PIN(s) with any other person(s).
• I will contact the Privacy Officer immediately if I suspect that knowledge of my unique Password/PIN(s) has been gained by someone else. I understand that the purpose of this notification is to protect confidentiality by having my unique Password/PIN(s) changed.
Confidentiality Agreement Between Volunteers and UM UCH
• I understand that I am responsible for all activity logged under my Password/PIN.
• I will sign off the computer when I leave the terminal/PC, and I understand that I must log off before another user may use the equipment.
• I understand that any breach of confidentiality may result in irreparable harm to both the patient and UM Upper Chesapeake Health. I will use the E-mail system in ways consistent with UM UCH policy.
• I understand that if I breach confidentiality, UM Upper Chesapeake Health may initiate disciplinary action up to and including immediate termination of employment/volunteer affiliation.
_________________________ _________________________ _______________________
Signature of Volunteer Date Print Name
I have chosen to volunteer my services for the University of Maryland Medical System Corporation or one of its member hospitals during the COVID-19 pandemic (collectively “UMMS”). This Release and Waiver of Liability form must be agreed to and signed, as a condition of my ability to provide volunteer services.
I acknowledge that UMMS has put in place preventative measures to limit the spread of COVID-19, however, UMMS cannot guarantee I will not become infected with COVID-19. Additionally, I acknowledge that COVID-19 is highly contagious through person to person contact. I acknowledge that my service to UMMS is completely voluntary, and I assume full responsibility for my own welfare and safety while providing volunteer services on behalf of UMMS.
I acknowledge it is my responsibility to consult a physician prior to, and regarding my volunteer services at UMMS. I represent and warrant that I am in proper physical health and that I have no medical condition which would put me at an increased risk of serious, potentially fatal, complications, from COVID-19. I understand that UMMS relies on my representation of health adequate to volunteer for the organization.
I acknowledge that, while providing services to UMMS, I may be exposed to COVID-19, which may result in infection, serious illness, or death. I understand that the long-term effects of a COVID-19 infection/illness are not fully understood and that, should I become infected with COVID-19, I may experience long-term effects, some of which may be serious. I understand that, should I become exposed to or infected with COVID-19, I may unintentionally expose others to the virus, including family, friends and acquaintances. I am fully aware of and accept the potential risks and hazards of agreeing to volunteer at UMMS during the COVID-19 pandemic. I voluntarily and knowingly assume full responsibility for any and all risks associated with COVID-19 which I might incur as a result of volunteering. If I experience any dizziness, unusual pain, fever, cough, difficulty breathing or shortness of breath, chills, muscle pain, sore throat, new loss of taste or smell, gastrointestinal symptoms, or any other discomfort while volunteering, I agree to stop serving as a volunteer until my doctor has evaluated my condition and confirmed that I do not have COVID-19.
I acknowledge that the risk of becoming exposed to or infected by COVID-19 may result from the actions, omissions, or negligence of myself and others, including, but not limited to, employees of UMMS, other volunteers, and other individuals present in UMMS.
In further consideration of being permitted to volunteer, I for myself, my heirs, executors, administrators, agents, and other personal representatives, voluntarily, expressly, irrevocably and unconditionally waive and release forever any and all manner of suits, actions, causes of action, damages and claims, known and unknown, pertaining to my actual or potential exposure to or infection with COVID-19, that I may have against UMMS and its parents, subsidiaries, and affiliates and any of their respective present and former officers, directors, employees, owners, shareholders, agents, attorneys, and assigns, arising from or in connection with my participation in the Volunteer Program. Without limiting the generality of the foregoing in any way, I specifically understand that I am releasing and holding harmless UMMS and its member hospitals and related health care entities' respective directors, officers, employees, agents and assigns from financial liability for any economic harm, injury, bodily harm, emotional harm, or illness should I contract or be exposed to COVID-19 as a result of my participation in the Volunteer Program. The laws of the State of Maryland shall apply to this document.
I HAVE READ THE ABOVE RELEASE AND WAIVER OF LIABILITY AND UNDERSTAND ITS CONTENTS. I VOLUNTARILY SIGN THIS DOCUMENT WITH THE INTENT TO BE LEGALLY BOUND BY THE TERMS AND CONDITIONS STATED ABOVE.
Printed Name ___________________________________ Email ________________________________
Address ______________________________________________________________________________________
Home Phone___________________________________ Work Phone ___________________________________
Signature______________________________________ Date____________________
POLICY BRIEF
MOUNTAIN OBSERVATIONS: MONITORING, DATA, AND INFORMATION FOR SCIENCE, POLICY, AND SOCIETY
Observations play a key role in tracking mountain global change and its impacts, understanding the various processes and feedback mechanisms involved, and delivering more reliable projections of the future to society. This Policy Brief provides an overview of the current state of multi-disciplinary mountain observations. It represents a contribution of the Global Network on Observations and Information in Mountain Environments (GEO Mountains) to the observance of the International Year of Sustainable Mountain Development 2022.
Installation of mass balance stakes on Rikha Samba Glacier, Nepal (Photo: Jakob Steiner)
**Cover images:**
3D digital terrain representation, Rayshader. [https://www.rayshader.com/](https://www.rayshader.com/)
Snow depth at 500 m spatial resolution over the European Alps on 29 January 2018. Lievens et al. (2022). [https://doi.org/10.5194/tc-16-159-2022](https://doi.org/10.5194/tc-16-159-2022)
Daily mean streamflow in the Peyto Glacier Research Basin, Canada, over two historical periods. Pradhananga et al. (2021). [https://doi.org/10.5194/essd-13-2875-202](https://doi.org/10.5194/essd-13-2875-202)
Expansion of high-mountain vegetation in the Himalaya between 1993 and 2017. NASA Earth Observatory. [https://earthobservatory.nasa.gov/images/149312/everest-area-plant-life-spreads](https://earthobservatory.nasa.gov/images/149312/everest-area-plant-life-spreads)
Projected gridded population count data for the year 2030 across the city of Santiago, Chile, and surrounding mountains. European Commission Joint Research Centre. [https://gnrl.jrc.ec.europa.eu/download.php?ds=pop](https://gnrl.jrc.ec.europa.eu/download.php?ds=pop)
South Col automatic weather station (7945 m), Everest. Khadka et al. (2021). [https://doi.org/10.3002/wea.1931](https://doi.org/10.3002/wea.1931)
Delineating Mountains and Characterising their Topography
- High elevations and rugged topography are, among others, two common defining features of mountain terrain, and affect most processes occurring in mountain social-ecological systems.
- The extent of mountain terrain is usually mapped by applying empirical criteria to digital terrain data.
- Three alternative spatial delineations, each representing different global mountain extents, have been generated [1] and can be downloaded from the Global Mountain Explorer [2].
- The resolution and accuracy of global digital terrain data have increased considerably; the latest products, such as the 30 m-resolution FABDEM [3], will likely benefit many mountain applications in the coming years.
- A hierarchical dataset of named mountain range polygons has also recently been released by the Global Mountain Biodiversity Assessment (GMBA) [4].
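To make the delineation idea above concrete, the sketch below applies two illustrative empirical criteria (an elevation threshold and a local-relief threshold) to a toy elevation grid. The thresholds and the function name `delineate_mountains` are our own illustrative choices, not the criteria of any of the published delineations [1]:

```python
import numpy as np

def delineate_mountains(dem, elev_min=1000.0, relief_min=200.0):
    """Flag cells as 'mountainous' using two simple empirical criteria:
    elevation >= elev_min (m), OR local relief (max minus min elevation
    within a 3x3 neighbourhood) >= relief_min (m). Thresholds are
    illustrative only; published delineations define their own."""
    rows, cols = dem.shape
    relief = np.zeros_like(dem)
    for i in range(rows):
        for j in range(cols):
            # 3x3 moving window; slicing clips automatically at edges.
            win = dem[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            relief[i, j] = win.max() - win.min()
    return (dem >= elev_min) | (relief >= relief_min)

# Toy DEM (metres): a low plain with one steep peak at (1, 1).
dem = np.array([[200., 210., 220., 215., 205.],
                [210., 1500., 230., 220., 210.],
                [220., 230., 240., 225., 215.]])
mask = delineate_mountains(dem)
print(mask.sum())  # number of cells classified as mountainous
```

In practice such criteria are applied to global digital terrain products rather than toy arrays, and the three delineations referenced above each combine elevation, slope, and relief differently.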
In situ observations and measurements are crucial for tracking mountain climate and biodiversity change, downscaling and bias-correcting climate model outputs, calibrating remote sensing retrieval algorithms, and informing both process-based and data-driven climate impact models (e.g., cryospheric and hydrological models).
The remoteness and inhospitality of many mountain settings frequently pose practical challenges to in situ measurement activities. Also, certain key variables, such as precipitation, are difficult to measure accurately in mountains [5], and steep topographic gradients can limit the spatial representativeness of measurements.
Deficiencies in the global coverage of freely available in situ climatological time-series records from operational stations have been identified, including with respect to space (e.g., Fig. 1), time, and elevation, as well as in relation to other relevant factors, such as the hydrological importance of individual mountain ranges to humanity [6].
Besides measurements made at operational sites by national and other authorities (e.g., the SNOTEL network in the United States [7]), extensive in situ monitoring in mountains is undertaken by the scientific community at sites established primarily for research purposes. For example, in the fields of ecology and biodiversity, research-focused network initiatives such as GLORIA [8] and MIREN [9] have established standard protocols which in turn have facilitated the collection and collation of datasets with research impact.
In hydrology, meanwhile, observations are often focused on experimental catchments and are increasingly being openly published for reuse by others [e.g., 10].
Given the multi-disciplinary and multi-institutional nature of in situ mountain monitoring, it has traditionally been difficult to obtain a clear overview of who is measuring what, where, when, how, and why across a given region.
With this in mind, and in response to key founding objectives of the Mountain Research Initiative (MRI) [11], the GEO Mountains In Situ Inventory [12] was developed. The inventory collates data from many institutions and databases, including the World Meteorological Organization (WMO)'s OSCAR/Surface [13], DEIMS-SDR (eLTER/iLTER) [14], the Global Runoff Data Center (GRDC) [15], and the Global Historical Climatology Network-Daily inventory [16], amongst many others. Version 2.0 (released in October 2022) contains a total of over 51,000 records, some of which correspond to multiple monitoring sites (since local networks are often represented by a single entry).
Where known, direct web links and/or contact information are provided in the inventory to facilitate access to the corresponding data, and the research and practitioner communities are encouraged to add additional sites or improve the information available for existing sites.
While specific metadata fields (e.g., variables measured, temporal coverage, instrumentation deployed, and protocols followed) must still be further populated, the considerable number of sites represented in the inventory challenges the common perception of mountains as sparsely observed regions to some extent (Fig. 2), although accessing the corresponding data from many sites often remains challenging.
Figure 1. Mean spatial density of GHCNd stations in mountainous terrain providing daily precipitation data by GMBA mountain polygon, irrespective of record length. Source: Thornton et al. [6].
Figure 2. Locations of sites represented in the GEO Mountains In Situ Inventory, v2.0 [12].
The quantity and quality of satellite remote sensing data available over mountainous terrain have risen dramatically over recent years and decades, with the MODIS and Landsat missions now providing lengthy records that have been heavily exploited for applications like snow cover mapping and trend analysis (e.g. [17], [18]).
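As an illustration of the snow cover mapping mentioned above, optical snow classification commonly relies on the Normalized Difference Snow Index (NDSI), computed from green and shortwave-infrared (SWIR) reflectances. The sketch below is a minimal stand-alone version with synthetic reflectances; the 0.4 threshold is a commonly used default, and operational MODIS/Landsat products apply further screening (cloud masks, vegetation tests, etc.):

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Classify snow from optical imagery via the Normalized Difference
    Snow Index, NDSI = (green - SWIR) / (green + SWIR). Snow is bright
    in the green band but dark in SWIR, so it yields high NDSI values.
    The 0.4 threshold is a widely used default, not a universal rule."""
    ndsi = (green - swir) / (green + swir)
    return ndsi > threshold

# Synthetic per-pixel reflectances: snow, bare ground, bright cloud-like.
green = np.array([0.8, 0.3, 0.7])
swir = np.array([0.1, 0.25, 0.5])
print(ndsi_snow_mask(green, swir))
```

Only the first pixel exceeds the threshold: its NDSI is about 0.78, versus roughly 0.09 and 0.17 for the other two.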
Increasingly, to reduce the technical and computational burden on users, remotely sensed data are provided in pre-processed or even “analysis ready” formats, including via data cubes (e.g., the Swiss Data Cube; [19]).
However, the application of remotely sensed data can be difficult in mountains; for example, optical methods are affected by clouds and shadows, and weather radar measurements are affected by topographic blocking and reflection [20].
Additionally, some important variables cannot currently be derived remotely, including Snow Water Equivalent (SWE) estimates [21], high-resolution soil moisture patterns, and vertical ground temperature profiles.
Unmanned Aerial Vehicles (UAVs; commonly known as drones) and satellite constellations operated by private companies [e.g., 22] can provide higher resolution and/or frequency data than other sources, although data acquisition may be costly.
**BOX 1: “MOUNTAIN OBSERVATORIES”**
Recognising the need to characterise mountain social-ecological systems in a holistic or integrated manner, the Mountain Research Initiative’s Mountain Observatories Working Group [48] proposed the development of a global network of long-term environmental and socio-economic monitoring sites, which they referred to as “Mountain Observatories” (MOs). More specifically, these prospective MOs are defined “as sites, networks of sites, or data-rich regions where multidisciplinary, integrated observations of biophysical and human environments are conducted over a lengthy period of time in consistent ways, according to established protocols using both in situ and remote observations” [49]. Currently, with the help of the GEO Mountains inventories [12, 45], work is underway to identify sites that already meet these criteria (e.g., Sonnblick Observatory [50]), or else have the potential to do so. Thereafter, these sites will initially be grouped into a series of regional networks, the first of which is proposed to be the Central Asian Mountain Observatory Network (CAMON).
Gridded climate datasets [e.g., 23,24], which are typically available for several variables, are generated by interpolating in situ measurements using sophisticated techniques. Therefore, they are spatially and temporally complete over a given domain, although the coverage of the underlying stations strongly influences their uncertainty.
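The interpolation step underlying such gridded products can be illustrated with a deliberately simple method. The sketch below implements inverse-distance weighting (IDW) for a single grid point; the station coordinates and temperatures are hypothetical, and operational products use far more sophisticated techniques (e.g., accounting for elevation and anisotropy), but the example shows why the density of the underlying stations controls the reliability of the gridded value.

```python
import math

def idw(stations, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from a list of
    (station_x, station_y, value) tuples. Nearby stations dominate."""
    num, den = 0.0, 0.0
    for sx, sy, v in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # query point coincides with a station
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical mountain stations: (x, y, temperature in deg C)
stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 6.0), (0.0, 1.0, 8.0)]
print(round(idw(stations, 0.5, 0.5), 2))  # equidistant stations -> 8.0
```

With sparse or unevenly distributed stations, the same formula must extrapolate over long distances, which is exactly where gridded-product uncertainty grows in mountains.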
Climate reanalysis products are generated by running physics-based, coupled climate models into which in situ and remote sensing observations are continually fed (using data assimilation techniques). As such, reanalysis products also provide multi-variate historical data that are spatially and temporally complete over a given domain. The ERA5 product [25] resolves the atmosphere in 137 vertical levels and provides hourly data on a 30 kilometre grid, while ERA5-Land [26] provides hourly data for land surface variables from 1950 to present on a nine kilometre grid.
A TOMST TMS4 in situ sensor in the tundra vegetation near Abisko, northern Sweden, part of a SoilTemp long-term microclimate monitoring network (Photo: Jonas Lembrechts)
These products often need to be downscaled and/or bias-corrected prior to use in mountain applications, and inconsistencies between different products can be considerable [e.g. 27].
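A minimal form of the bias correction mentioned above is the additive “delta” method, in which the mean bias of a product relative to station observations over a common reference period is removed. The numbers below are hypothetical, and real workflows typically correct per month or per quantile rather than with a single constant shift.

```python
def mean(xs):
    return sum(xs) / len(xs)

def bias_correct_additive(model_series, obs_ref, model_ref):
    """Shift a model/reanalysis series by the mean bias diagnosed over a
    common reference period (an approach suitable for e.g. temperature)."""
    bias = mean(model_ref) - mean(obs_ref)
    return [v - bias for v in model_series]

# Hypothetical reference-period data at a mountain station (deg C)
obs_ref   = [1.0, 2.0, 3.0]   # in situ observations
model_ref = [3.0, 4.0, 5.0]   # reanalysis over the same period (warm bias of 2)
new_data  = [4.0, 5.0, 6.0]   # reanalysis values to be corrected
print(bias_correct_additive(new_data, obs_ref, model_ref))  # [2.0, 3.0, 4.0]
```

For precipitation, a multiplicative scaling is usually preferred to avoid negative values, and more advanced methods (e.g., quantile mapping) correct the full distribution.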
Global Climate Models (GCMs, e.g., CMIP6 [28]) and Regional Climate Models (RCMs, e.g., CORDEX [29]) provide future projections under various plausible greenhouse gas emission and land use change scenarios, in addition to historical reconstructions (typically from 1850 to present). Climate models also enable the mechanisms involved to be explored and disentangled, and attribution studies (which seek to quantify the respective contributions of natural and anthropogenic forcing to observed trends) to be conducted.
However, due to their coarse resolutions, GCMs require empirical “compensations” (parameterisations) to represent important smaller-scale processes such as convection and surface snow processes, and their representation of topography is heavily smoothed [30]. These factors contribute to uncertainties and biases in GCM simulations, especially in mountains.
Even RCMs provide data at far coarser spatial resolutions than the characteristic scales of key mountain processes and change impacts, often requiring additional downscaling and/or bias correction [31].
Moreover, for specific mountain ranges, it is currently unclear to what extent climate model ensemble members should be considered equally plausible, or whether some should be favoured over others (cf. [32]).
**BOX 2: INTEGRATING COMPLEMENTARY DATA SOURCES FOR CLIMATE IMPACT PROJECTIONS**
Both process-based and data-driven (e.g., Machine Learning) algorithms offer excellent possibilities to combine in situ and remotely sensed data for mountain applications, and thereby exploit their complementary characteristics. Such models can fill spatio-temporal gaps in historical observations and provide one of the primary means by which local scale, decision-relevant predictions (see Box 4) of possible climate change impacts can be generated under various plausible scenarios (see e.g., [52] for glaciers). The outputs of such models thus also represent an increasingly important form of mountain data and/or information.
Citizen Science
- Citizen Science (CS), whereby the public contribute to science by collecting or analysing data, has great potential to fill key spatio-temporal gaps in mountain observations and more generally increase the quantity of data available.
- Examples of CS projects include Mountain Rain or Snow [33] and GlacierMap [34]; various activities are also organised by CREA Mont-Blanc [35].
- In the Community Snow Observations (CSO) [36] project, snow depth observations made by participants are assimilated into numerical models to improve estimates of Snow Water Equivalent (SWE) across large and sparsely instrumented mountain regions in North America. The value added by the citizen observations to the model predictions is quantified, enabling the observers themselves to appreciate the value of, and be credited for, their contributions.
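The assimilation of a citizen observation into a model estimate can be sketched schematically with a scalar Kalman-style update, in which the model prior and the observation are weighted by their respective uncertainties. The numbers below are hypothetical and this is not CSO's actual assimilation scheme, which operates on spatially distributed model states.

```python
def kalman_update(prior, prior_var, obs, obs_var):
    """Combine a model prior with one observation, weighting each by the
    inverse of its variance (scalar Kalman / optimal-interpolation update)."""
    gain = prior_var / (prior_var + obs_var)
    post = prior + gain * (obs - prior)
    post_var = (1.0 - gain) * prior_var
    return post, post_var

# Hypothetical: model predicts 120 cm snow depth (variance 400);
# a citizen scientist measures 90 cm (variance 100).
post, post_var = kalman_update(120.0, 400.0, 90.0, 100.0)
print(round(post, 1), round(post_var, 1))  # 96.0 80.0
```

The update pulls the estimate towards the (here, more certain) observation and reduces the posterior variance, which is precisely the "value added" that projects such as CSO quantify for their contributors.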
**BOX 3: TOWARDS A DEFINITION OF ESSENTIAL / SHARED MOUNTAIN VARIABLES**
Given limited resources for monitoring and observation, scientific and policy-related applications alike could benefit greatly from efforts to identify – in a multi-disciplinary way – variables whose observation or derivation should be prioritised to provide a globally inter-comparable body of fundamental evidence on global change in mountains. Such a set of mountain-specific variables has already been proposed for aspects related to climate change and its impacts on physical components of mountain systems [51], and similar work is ongoing for variables related to biodiversity and society and economy. If corresponding minimum observational requirements for each of these variables can be defined, and the associated data collated, a global “State of the Mountains” report could be produced, as envisaged in GEO Mountains’ founding proposal of 2015. Such a report could significantly elevate the theme of mountains within global policy agendas.
Socio-Economic Data
- Integrating socio-economic with bio-physical data has been widely recognised as necessary and important [11]; however, this has often proven difficult in practice.
- Many socio-economic datasets (e.g., census results) are provided in spatially aggregated formats. Political boundaries that often span both mountainous and non-mountainous terrain are typically used for this purpose, which can make disaggregation to more granular and relevant spatial units within mountains technically challenging.
- Thanks largely to remote sensing, the availability of spatially distributed layers corresponding to some socio-economic variables is improving [37], and these data can be applied to answer policy-relevant questions, such as the assessment of human population and urbanisation dynamics in mountains [38].
- As for other components of mountain systems (see Box 3), a subset of highly informative or relevant socio-economic variables should be identified, specified, and collected. The attributes that such datasets must have (e.g., frequency, spatial resolution, etc.) to be useful for general applications must also be specified.
Regional, National, and Thematic Data Portals
- Given the geopolitically transboundary nature of many important mountain ranges, successful efforts have been made to collate and share mountain data and information on a regional level (e.g., ICIMOD’s Regional Database System (RDS) in the Hindu Kush Himalaya [39], and the Caucasus GeoNode [40]).
- The national data portals of some countries with substantial mountainous areas (e.g., Switzerland [41], Canada [42], and South Africa [43]) also provide much relevant data and information.
- The GEO Mountains General Inventory [44] provides a list of (and links to) various other datasets and data portals, including thematic portals, that could contribute to mountain applications. Where these resources extend beyond mountains, they should be spatially filtered using a mountain delineation. Prospective data users should always evaluate the suitability of a given data resource for their intended application(s) (see Box 4).
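Spatially filtering a broader data resource using a mountain delineation reduces, at its simplest, to a point-in-polygon test against the delineation's polygons. The sketch below uses the standard even-odd ray-casting algorithm with a hypothetical rectangular "mountain range" and made-up station records; real delineations (e.g., GMBA polygons) would be loaded from a shapefile or GeoJSON and handled with a geospatial library.

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the edge straddle the horizontal line through (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical mountain-range delineation (lon, lat) and station records
mountain = [(0, 0), (4, 0), (4, 3), (0, 3)]
stations = [("A", 1.0, 1.0), ("B", 5.0, 1.0), ("C", 3.5, 2.5)]
in_range = [name for name, lon, lat in stations
            if point_in_polygon(lon, lat, mountain)]
print(in_range)  # ['A', 'C']
```

Station B falls outside the delineation and is excluded, leaving only the records relevant to the mountain application.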
Derived Indicators
- In contrast to scientists who often seek to obtain and use raw or lightly processed datasets, policy and other decision makers generally require derived and/or distilled information.
- Such information is commonly presented in the form of statistics, indices, or indicators, some of which are computed specifically to respond to metrics used in global policy framework requirements.
- For example, Fig. 3 [45] provides a clear visual summary of the expected impacts of climate change on temperature, precipitation, and snow across four regions of Chile.
- Other examples include a regional socio-ecological indicator platform for the Andes developed by CONDESAN [46].
Figure 3. A summary of projected climate change impacts on mean temperature, mean annual precipitation, and median peak snow water equivalent and timing across four major regions of Chile. Source: Bambach et al. [45].
Indigenous communities have deep connections with their landscapes and extensive environmental knowledge.
Because this knowledge is traditionally shared via oral stories and can be sacred, in the absence of appropriate protocols it has historically been challenging, or inappropriate, to obtain it and relate it to western scientific processes.
Novel approaches involving appropriate protocols and culturally sensitive knowledge co-production practices are now being developed and applied to combine western and Indigenous knowledge forms, as exemplified for instance by the work of the Canadian Mountain Assessment [47].
**BOX 4: THE IMPORTANCE OF SCALE AND UNCERTAINTY QUANTIFICATION**
Given the topographic and geological complexity of most mountain environments, phenomena generally vary considerably over short distances. Many phenomena are also highly dynamic. As such, it is imperative that the spatio-temporal scales of mountain data and information are appropriate for the uses to which they are put. In addition, the various challenges associated with making mountain observations mean that data uncertainties, inaccuracies, and biases are often more substantial than elsewhere. To help users appropriately apply and interpret their products, data providers should seek to provide “quality flags”, well-described caveats, uncertainty estimates, and other guidance wherever possible.
Although mountain observation and monitoring remain challenging, a considerable amount of in situ monitoring infrastructure is in place globally, and remotely sensed data volumes are increasing rapidly. However, data availability and accessibility vary considerably according to region and discipline, and major gaps remain – especially with respect to in situ data.
More extensive data coverage and information content analyses should be conducted as a basis for substantiating and optimising investments in establishing new, and maintaining existing, mountain monitoring initiatives.
Optimal data coverage may not necessarily be uniform across all regions or disciplines. For example, in exceptionally ecologically or hydrologically important regions, monitoring of these aspects should be enhanced relative to elsewhere. Monitoring should similarly be comparatively enhanced in mountain regions which play a major role in the broader Earth System, and/or where projected warming is expected to strongly increase natural risks to societies, among other priorities.
Investments and capacity sharing activities are required not only to install and maintain monitoring infrastructure, but to support the entire data lifecycle, which also encompasses data transmission, quality control, standardisation, storage, and exchange / publication.
The potential benefits of feeding real-time streams of observational data from research-oriented sites in mountains into operational services related to weather and flood forecasting, for example, should be explored because such sites may often fill spatial gaps in operational monitoring networks.
To support more globally consistent and inter-comparable assessments of global change in mountain systems, observation campaigns should focus on agreed priority variables (“Essential or Shared Mountain Variables”); at dedicated sites (“Mountain Observatories”), these observations (and those corresponding to other variables) can be undertaken in detail. Both approaches can help maximise information content relative to cost.
The entire mountain observation community should work towards increased standardisation and interoperability in terms of both variables observed and means of data sharing and access, ideally converging to a common machine-readable metadata standard that is appropriate for both point time-series and gridded data. In this way, it may be possible to develop a single global mountain database from which data can be arbitrarily queried, retrieved, and/or processed.
In particular, greater interdisciplinary collaboration between the biophysical sciences, the social sciences, and the humanities regarding data and data integration methodologies is required to improve our collective understanding of, and ability to predict, future changes and their impacts in complex mountain social-ecological systems.
Improvements in monitoring, data, and information – along with adequate funding and other resources to sustain, scale, and coordinate these efforts – will help close mountain knowledge gaps identified in the IPCC’s Sixth Assessment cycle reports [53, 54], and may furthermore enable the production of a global “State of the Mountains” report.
The integration of multiple datasets with the latest process-based models and machine learning algorithms, along with purposeful science-policy-practice dialogues and iterative exchanges to define relevant applications, have the potential to revolutionise the translation of mountain observations into knowledge and subsequent action.
1. Sayre et al. (2018). A new high-resolution map of world mountains and an online tool for visualizing and comparing characterizations of global mountain distributions. Mountain Research and Development, 38(3), 240-249. doi: 10.1659/MRD-JOURNAL-D-17-00107
2. USGS. Global Mountain Explorer. https://rmgsc.cr.usgs.gov/gme/. Last Accessed: 29/10/2022.
3. Hawker et al. (2022). A 30 m global map of elevation with forests and buildings removed. Environmental Research Letters, 17(2), 024016. doi: 10.1088/1748-9326/ac4d4f
4. Snethlage et al. (2022). A hierarchical inventory of the world’s mountains for global comparative mountain science. Scientific Data, 9(1), 1-14. doi: 10.1038/s41597-022-01256-y
5. Kochendorfer et al. (2017). The quantification and correction of wind-induced precipitation measurement errors. Hydrology and Earth System Sciences, 21(4), 1973-1989. doi: 10.5194/hess-21-1973-2017
6. Thornton et al. (2022). Coverage of in situ climatological observations in the world’s mountains. Frontiers in Climate, 4. doi: 10.3389/fclim.2022.814181
7. Natural Resources Conservation Service (NRCS) of the United States. SNOTEL. https://www.nrcs.usda.gov/wps/portal/wcc/home/quicklinks/imap. Last Accessed: 29/10/2022.
8. Global Observation Research Initiative in Alpine Environments (GLORIA). https://www.gloria.ac.at/home. Last Accessed: 28/10/2022.
9. Mountain Invasion Research Network (MIREN). https://www.mountaininvasions.org/. Last Accessed: 28/10/2022.
10. Earth System Science Data (ESSD). Special Issue: Hydrometeorological data from mountain and alpine research catchments. https://essd.copernicus.org/articles/special_issue871.html. Last Accessed: 29/10/2022.
11. Becker and Burgmann (2002). Global Change and Mountain Regions: The Mountain Research Initiative. (IGBP/GTOS/IHDP). [https://mountainresearchinitiative.org/images/About_MRI/Our_History/Global_Change_and_Mountain_Regions.pdf](https://mountainresearchinitiative.org/images/About_MRI/Our_History/Global_Change_and_Mountain_Regions.pdf)
12. GEO Mountains. Inventory of In Situ Observational Infrastructure, v2.0. https://www.geomountains.org/resources/resources-surveys/inventory-of-in-situ-observational-infrastructure. doi: 10.6084/m9.figshare.14899845.v2. Last Accessed: 29/10/2022.
13. World Meteorological Organization (WMO). OSCAR/Surface. https://oscar.wmo.int/surface/#/. Last Accessed: 29/10/2022.
14. Dynamic Ecological Information Management System – Site and Dataset Registry (DEIMS-SDR). https://deims.org/. Last Accessed: 29/10/2022.
15. Global Runoff Data Centre (GRDC). https://www.bafg.de/GRDC/EN/Home/homepage_node.html. Last Accessed: 29/10/2022.
16. National Centers for Environmental Information. Global Historical Climatology Network daily (GHCNd). https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily. Last Accessed: 29/10/2022.
17. Gascoin et al. (2019). Theia Snow collection: High-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data. Earth System Science Data, 11(2), 493-514. doi: 10.5194/essd-11-493-2019
18. Notarnicola (2022). Overall negative trends for snow cover extent and duration in global mountain regions over 1982–2020. Scientific Reports, 12(1), 1-16. doi: 10.1038/s41598-022-16743-w
19. Swiss Data Cube (SDC). [https://www.swissdatacube.org/](https://www.swissdatacube.org/). Last Accessed 28/10/2022.
20. Germann et al. (2022). Weather Radar in Complex Orography. Remote Sensing, 14(3), 503. doi: 10.3390/rs14030503
21. Luojus et al. (2021). GlobSnow v3.0 Northern Hemisphere snow water equivalent dataset. Scientific Data, 8(1), 1-16. doi: 10.1038/s41597-021-00939-2
22. Planet Labs. [https://www.planet.com/](https://www.planet.com/). Last Accessed: 31/10/2022.
23. European Climate Assessment & Dataset, E-OBS gridded dataset. https://www.ecad.eu/download/ensembles/download.php. Last Accessed: 02/12/2022.
24. Harris et al. (2020). Version 4 of the CRU TS monthly high-resolution gridded multivariate climate dataset. Scientific Data, 7(1), 1-18. doi: 10.1038/s41597-020-0453-3
25. European Centre for Medium-Range Weather Forecasts (ECMWF). ERA5. https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5. Last Accessed: 02/12/2022.
26. European Centre for Medium-Range Weather Forecasts (ECMWF). ERA5-Land. https://www.ecmwf.int/en/era5-land. Last Accessed: 02/12/2022.
27. Zandler et al. (2019). Evaluation needs and temporal performance differences of gridded precipitation products in peripheral mountain regions. Scientific Reports, 9(1), 1-15. doi: 10.1038/s41598-019-51666-z
28. World Climate Research Programme (WCRP), Coupled Model Intercomparison Project Phase 6 (CMIP6). [https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6](https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6). Last Accessed: 29/10/2022.
29. World Climate Research Programme (WCRP). Coordinated Regional Climate Downscaling Experiment (CORDEX). https://cordex.org/. Last Accessed: 29/10/2022.
30. Baldwin et al. (2022). Outsize influence of Central American orography on global climate. AGU Advances, 2(2), e2020AV000343. doi: 10.1029/2020AV000343
31. International Centre for Integrated Mountain Development (ICIMOD). Climate change scenarios. https://www.icimod.org/initiative/climate-change-scenarios/. Last Accessed: 29/10/2022.
32. Hausfather et al. (2022). Climate simulations: recognize the ‘hot model’ problem. Nature 605, 26-29. doi: 10.1038/d41586-022-01192-2
33. Mountain Rain or Snow. https://rainorsnow.app/surveys. Last Accessed: 29/10/2022.
34. GlacierMap. https://peru.glaciers.org/. Last Accessed: 29/10/2022
35. CREA Mont-Blanc. Citizen Science. https://creamontblanc.org/en/citizen-science/. Last Accessed: 29/10/2022.
36. Community Snow Observations (CSO). https://communitysnowobs.org/. Last Accessed 28/10/2022
37. European Commission. GHSL – Global Human Settlement Layer. https://ghsl.jrc.ec.europa.eu/datasets.php. Last Accessed: 28/10/2022.
38. Thornton et al. (2022). Human populations in the world’s mountains: Spatio-temporal patterns and potential controls. PLOS ONE, 17(7), e0271466. doi: 10.1371/journal.pone.0271466
39. International Centre for Integrated Mountain Development (ICIMOD), Regional Database System. https://rds.icimod.org/. Last Accessed: 28/10/2022.
40. Sustainable Caucasus, Caucasus GeoNode. https://sustainable-caucasus.unepgrid.ch/. Last Accessed: 30/10/2022.
41. Maps of Switzerland. https://map.geo.admin.ch/. Last Accessed: 28/10/2022.
42. Government of Canada. Historical Data. https://climate.weather.gc.ca/historical_data/search_historic_data_e.html. Last Accessed: 28/10/2022.
43. South African Environmental Observation Network (SAEON). Data Portal. https://catalogue.saeon.ac.za/. Last Accessed: 28/10/2022.
44. GEO Mountains. General Inventory, v1.0. https://www.geomountains.org/resources/resources-surveys/general-inventory. doi: 10.6084/m9.figshare.19322573.v3
45. Bambach et al. (2021). Projecting climate change in South America using variable-resolution Community Earth System Model: An application to Chile. International Journal of Climatology, 42(4), 2514-2542. doi: 10.1002/joc.7379
46. Consortium for Sustainable Development of the Andean Ecoregion (CONDESAN). Plataforma de indicadores socioambientales en la Región Andina. https://indicadores-andinos.condesan.org/. Last Accessed: 29/10/2022.
47. Canadian Mountain Network (CMN). Canadian Mountain Assessment. https://www.canadianmountainnetwork.ca/research/canadian-mountain-assessment-group/canadian-mountain-assessment. Last Accessed: 29/10/2022.
48. Mountain Research Initiative. Mountain Observatories. https://mountainresearchinitiative.org/activities/community-led-activities/working-groups/2097-mountain-observatories. Last Accessed: 30/10/2022
49. Shahgedanova et al. (2021). Mountain observatories: Status and prospects for enhancing and connecting a global community. Mountain Research and Development, 41(2), A1. doi: 10.1659/MRD-JOURNAL-D-20-00054.1
50. ZAMG, Sonnblick Observatorium. https://www.sonnblick.net/en/. Last Accessed: 31/10/2022
51. Thornton et al. (2021). Toward a definition of Essential Mountain Climate Variables. One Earth, 4(6), 805-827. doi: 10.1016/j.oneear.2021.05.005
52. Marzeion et al. (2020). Partitioning the uncertainty of ensemble projections of global glacier mass change. Earth's Future, 8(7), e2019EF001470. doi: 10.1029/2019EF001470
53. Hock et al. (2019). High Mountain Areas. In: IPCC Special Report on the Ocean and Cryosphere in a Changing Climate [Pörtner et al. (Eds.)], https://www.ipcc.ch/site/assets/uploads/sites/3/2022/03/04_SROCC_Ch02_FINAL.pdf
54. Adler et al. (2022). Cross-Chapter Paper 5: Mountains. In: Climate Change 2022: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Pörtner et al. (Eds.)]. https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_CCP5.pdf
Peat coring at Maua (flower) Swamp at around 4,000 m on the southern slopes of Kilimanjaro. Mustaphi et al. (2020). https://doi.org/10.1017/cwu.2020.76
Please send enquiries to:
GEO Mountains
email@example.com
Mountain Partnership Secretariat
firstname.lastname@example.org
The views expressed in this Policy Brief are those of the Authors (J.M. Thornton, E. Palazzi, and C. Adler).
The Authors accept no responsibility for the accuracy or completeness of the contents and shall not be liable for any loss or damage that may be occasioned, directly or indirectly, through the use of, or reliance on, the contents of this publication.
Suggested citation: GEO Mountains (2022). Mountain Observations: Monitoring, Data, and Information for Science, Policy, and Society. Policy Brief: International Year of Sustainable Mountain Development 2022.
Copyright © GEO Mountains, November 2022. |
Multiple Systems Estimation for Modern Slavery: Robustness of List Omission and Combination
Serveh Sharifi Far\(^1\), Ruth King\(^1\), Sheila Bird\(^2\), Antony Overstall\(^3\), Hannah Worthington\(^4\), and Nicholas Jewell\(^5\)
Abstract
Performing censuses on stigmatized or vulnerable populations is challenging; however, for such populations partial enumeration is often possible using different lists or sources. If the sources overlap, then multiple systems estimation (MSE) methods can be applied to obtain an estimate of the total population. These are typically expressed via a log-linear model which permits positive/negative dependencies between lists. This paper considers issues that arise in the application of MSE to modern slavery, where there is little to no overlap of individuals across lists. We investigate the robustness of MSE in terms of the importance of each list and the impact of combining lists on the estimation process. We undertake a simulation study and consider real national modern slavery data from the UK and Romania.
\(^1\)The University of Edinburgh, UK
\(^2\)Cambridge University, Cambridgeshire, UK
\(^3\)University of Southampton, Hampshire, UK
\(^4\)University of St Andrews, UK
\(^5\)London School of Hygiene & Tropical Medicine, UK
Corresponding Author:
Serveh Sharifi Far, School of Mathematics, The University of Edinburgh, JCMB, Peter Guthrie Tait Road, Edinburgh EH9 3FD, UK.
Email: email@example.com
Keywords
combining sources, estimate stability, generalized linear models, list omission.
Introduction
Modern forms of slavery persist in the 21st century despite the legislative successes of 19th century reformers in having predominantly abolished traditional slavery. Documenting and quantifying the prevalence of modern slavery is a challenging task for many reasons, not least due to the hidden nature of individuals who would be classed in this category and how victims of modern slavery are defined. Further, the nature of modern slavery means that international boundaries may be crossed, with many modern slavery victims also victims of illegal trafficking (Cruyff et al., 2017; van Dijk et al., 2017 explain the context to human trafficking). However, the problem is significantly wider than the exploitation of illegal immigrants—for example, 16% of the UK’s identified potential victims of modern slavery are its own citizens. The own-citizen percentage was higher still at 32% for the 2,121 potential victims in 2017 who were children (Home Office, 2018). Other major countries-of-origin for UK-identified victims include Albania and Vietnam, but these two, together with the UK itself, may have a different representation within the totality of victims (non-identified as well as identified) of modern slavery in the UK. Hence, policy initiatives for the prevention of human trafficking that have been directed at Albania and Vietnam might need re-orientation once the UK’s unidentified victims are estimated according to their origin.
In the UK, all police forces report identified victims of modern slavery to the National Crime Agency (NCA). Support, ranging in duration from 7 to 13 weeks, is available for “probable-cause” victims unless or until their final-status is determined otherwise. Overlaps between the list held by UK’s NCA and those of other service providers arise because of the support on offer to probable-cause victims, or because these services may have referred identified potential victims to NCA for appraisal of their eligibility for support, or because police action could rescue further victims. It is this overlap of individuals observed by the different sources that permits the use of multiple systems estimation (MSE) for estimating the difficult-to-obtain total prevalence, and associated measures, of the problem within society. See Bird and King (2018) for a review of multiple systems estimation methods and their application to different populations; Jewell et al. (2013) for an application to estimating nonmilitary deaths in conflict; and Silverman (2020), and references therein, for discussion of their application to modern slavery.
Complexities can occur for modern slavery data because the term covers a range of different types of modern slavery, including, for example, domestic/physical labour and forced prostitution. The characteristics of the type of victimization typically vary by gender (e.g., physical vs. domestic labour) and age-group (child vs. adult female prostitution), and are also likely to determine how many other victims belong to the same cluster as the listed victim: for example, many adult males engaged in physical labour may be co-located and controlled by a gang-master; a female may be held as a solo domestic slave; or a clutch of sex workers may travel between premises in different towns and include children in their number. Professionals in different capacities may report suspect activity to the authorities. For example, doctors who are made aware that a child is at risk of prostitution, or that victims of human trafficking are held at a specific location, may (or may be required to) inform the relevant authorities so that a rescue can be attempted by the police. Further considerations pertain to non-governmental voluntary organizations, including those which might, in less extreme circumstances, be unwilling to cross-refer, leading to minimal overlap between different lists: for example, voluntary organizations giving refuge to escapee women versus males, or to adults versus children.
We focus on the common issue of limited or minimal overlap (where relatively few individuals are observed across the different lists used) within modern slavery applications of MSE. Multiple lists with limited or minimal overlap can occur for numerous reasons, and may affect different subsets of the population; for example, as discussed above, this may be the case for lists held by different non-governmental voluntary organizations. This, in turn, can lead to a number of different issues when applying an MSE approach, including models being unidentifiable with inestimable parameters (Sharifi Far et al., 2019) and potentially unstable estimation of the total population size. Further, demographic or contextual information, such as the type of victimisation that victims of modern slavery are subjected to, or whether they are drug dependent, may be an important determinant of capture propensity on some but not all lists, or of the interaction between different lists. If such information is available, MSE can be extended to incorporate such factors directly (King et al. (2005) demonstrate this for MSE applied to injecting drug users). However, this further reduces the overlaps observed between the different lists, potentially exacerbating the issues above, and introduces a greater number of parameters to estimate. Thus, within this paper, we do not consider such characteristics further, and focus on the standard cross-classification of individuals across the different lists.
Our aim in this paper is to investigate, by simulation and empirically, the impact of minimally overlapping lists on capture-recapture estimation of the number of victims of modern slavery, and methods to mitigate the effects of such overlap on population size estimation.
**Methods**
We consider standard log-linear models for MSE, where we are able to explicitly account for dependencies between lists via associated log-linear interaction terms (Fienberg, 1972). We investigate the effect on population size estimation where there is limited overlap between the lists relating to the two specific methods of (i) list omission and (ii) list combination. In particular, we shall consider an approach where we assess the influence of the lists on the estimation process by removing each list in turn from the analysis; and the impact of combining two lists where there is limited overlap between the lists. We begin by defining the models and associated MSE approach.
**Multiple Systems Estimation**
We begin by describing the general framework for MSE. Let $K$ denote the total number of lists available in the dataset; we label the individual lists $k = 1, \ldots, K$ (with a minimum of $K = 2$ lists). We construct an incomplete $2^K$ contingency table in which each cell records the number of individuals observed by the given combination of lists. The table is incomplete because we do not observe the number of individuals missed by all $K$ lists, and hence taking the total population size to be the total number of observed individuals would underestimate it. Mathematically, each cell is indexed by $\mathbf{k} \in \{0,1\}^K$, where a 1/0 in position $k$ indicates that list $k$ did/did not observe the individual, respectively. For example, when $K = 4$ the cell $\mathbf{k} = \{0,1,1,0\}$ corresponds to being observed by lists 2 and 3 but not lists 1 and 4. The cell $\mathbf{k} = \{0\}^K$ corresponds to not being observed by any of the lists.
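As a concrete illustration (with hypothetical capture histories; the function and variable names are ours, not from the paper), the incomplete $2^K$ contingency table can be built by cross-classifying each individual's 0/1 capture history; the all-zero cell is necessarily absent:

```python
from itertools import product

def contingency_table(histories, K):
    """Cross-classify 0/1 capture histories of length K into the
    2^K - 1 observable cells; the all-zero cell is unobservable."""
    table = {k: 0 for k in product((0, 1), repeat=K) if any(k)}
    for h in histories:
        table[tuple(h)] += 1
    return table

# K = 4: e.g. (0, 1, 1, 0) = seen by lists 2 and 3 only
tab = contingency_table([(0, 1, 1, 0), (1, 0, 0, 0), (0, 1, 1, 0)], K=4)
print(tab[(0, 1, 1, 0)])  # 2
```

The dictionary has $2^4 - 1 = 15$ keys; the missing key `(0, 0, 0, 0)` is exactly the cell whose count MSE sets out to estimate.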
Let $n_{\mathbf{k}}$ denote the number of individuals in cell $\mathbf{k} \in \{0,1\}^K$ of the contingency table, and let $\mu_{\mathbf{k}}$ denote the mean count for cell $\mathbf{k}$. We specify the model as a generalized linear model, with Poisson error and log-link function, such that,
$$n_{\mathbf{k}} \mid \mu_{\mathbf{k}} \overset{\text{ind}}{\sim} \text{Poisson}(\mu_{\mathbf{k}}), \quad \text{for } \mathbf{k} \in \{0,1\}^K. \tag{1}$$
Letting $\mu$ denote the column vector of the mean cell counts, $\mu_k$, we can write,
$$\log \mu = X\theta,$$
where $\theta$ denotes the column vector of log-linear parameters and $X$ is the associated design matrix describing the relationship between the (log) of the expected cell counts and the parameters. In general, $\theta$ contains an intercept term (associated with the mean cell count), main effect terms for each list (associated with the propensity of being observed by a given list) and interaction terms (associated with dependencies between the different lists). Due to the incompleteness of the contingency table, we cannot estimate the $K$-way interaction for hierarchical log-linear models.
This modelling structure permits the estimation of the total population size as follows: the log-linear parameters, $\theta$, are estimated from the observed cell counts; given these estimates, the maximum likelihood estimate (MLE) of the unobserved cell, and its associated uncertainty, are obtained via the model specified in equation (1). The uncertainty is described via a 95% confidence interval (CI), using the standard asymptotic normality assumption and an estimated standard error calculated via the Hessian matrix evaluated at the MLE of the parameters. However, we note that the estimate of the total population size (and 95% CI) is, in general, dependent on the interactions present within the specified model. This typically leads to a two-step process: (i) identify the “best” model in terms of the interactions present in the model; then (ii) obtain an estimate of the total population size given the specified model.
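The two-step logic has a closed form in the simplest case of $K = 2$ lists with no interaction term: the model has three parameters and three observable cells, so the fit reproduces the observed counts exactly, and the unobserved-cell estimate reduces to the classical Petersen form $\hat{\mu}_{00} = n_{10}\,n_{01} / n_{11}$. A minimal sketch with hypothetical counts:

```python
# Hypothetical two-list counts: seen by list 1 only, list 2 only, and both
n10, n01, n11 = 54, 463, 15

# Under the no-interaction model, log mu = t0 + t1*k1 + t2*k2, the three
# observed cells are fitted exactly, and mu00 = exp(t0) = mu10*mu01/mu11.
mu00_hat = n10 * n01 / n11
N_hat = n10 + n01 + n11 + mu00_hat
print(round(N_hat, 1))  # 2198.8
```

The small overlap count $n_{11}$ sits in the denominator, which already hints at why minimally overlapping lists produce unstable estimates.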
To discriminate between competing models and conduct the model selection step, it is conventional to use Akaike’s information criterion, AIC (Akaike, 1974), where,
$$AIC = -2l(\hat{\theta}; n) + 2p,$$
such that $l(\hat{\theta}; n)$ denotes the log-likelihood of the model evaluated at the MLEs of the parameters, denoted $\hat{\theta}$, and $p$ denotes the number of parameters in the model, that is, $p = |\theta|$. The likelihood in this case simply corresponds to a product over independent Poisson terms. The AIC is interpreted as a trade-off between the fit of the model to the data and the complexity of the model. The model with the smallest AIC statistic is deemed the “best” of the models considered; in this respect, the AIC assesses the relative performance of the competing models. See, for example, Coumans et al. (2017); Silverman (2014); Van der Heijden et al. (2012) for the use of the AIC statistic within the MSE context for modern slavery and other related populations; and Davison (2003) for discussion of alternative model selection tools.
In practice, it may not be feasible to fit every possible model (including/excluding interaction terms) to the data: if the dataset features many sources, the number of possible models becomes prohibitive, and so a model search algorithm is typically implemented, for example, adding/removing interaction terms in a systematic manner until no improvement in the model is detected. In this paper, we use a model selection procedure based on the AIC statistic and estimate the total population size from the single “best” model, in order to investigate the issues of combining and omitting lists without the additional confounding of model-averaging issues. In particular, we are interested in the influence of each individual list on the total population estimate.
**List Influence**
The pattern of the observed data, in terms of the number of individuals observed in the cross-classification across different lists is the underpinning principle permitting the estimation of the total population size via MSE. In general, situations can arise whereby, for example, there is a dominant list where a substantial proportion of individuals are observed by this single source (Cormack et al., 2000); there is substantial dependence between lists (either positive or negative: see, for example, Jones et al. (2014)); or limited overlap across lists leading to sparse contingency tables, that is, tables with a large number of zero counts (Chan et al., 2020; Sharifi Far, 2017). We focus on this last case of minimal overlap between the different lists. Issues encountered in this scenario include model fitting complexity, including for example, model identifiability and parameter redundancy (Chan et al., 2020; Fienberg & Rinaldo, 2012; Sharifi Far et al., 2019; Silverman, 2020; Vincent et al., forthcoming).
To investigate the influence of the different lists on the statistical analysis, and focussing in particular on the estimation of the total population size, we consider both a (i) “leave-one-out” approach and (ii) combining lists approach.
**Leave-one-out approach.** The leave-one-out approach involves cycling through the lists, removing each in turn, constructing the reduced incomplete contingency table from the remaining sources, and then conducting the statistical analysis to obtain the total population size estimate as described above. In particular, we obtain the MLE of the total population size for the model deemed optimal via the AIC statistic, together with an associated 95% CI. When there are $K$ lists, this means conducting $K$ leave-one-out contingency table analyses. We note that for each leave-one-out analysis, the total number of observed individuals is reduced (assuming that each list observes at least one individual not observed by any other source). The estimates of population size from the $K$ leave-one-out analyses can be compared with each other and with the estimate of the total population size using all $K$ sources. In the simulation study, we can also compare the estimates with the (known) true population size.
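The leave-one-out collapse itself is a simple marginalisation of the contingency table; a sketch, using a dictionary-of-counts representation and hypothetical counts of our own choosing:

```python
def leave_one_out(table, j):
    """Collapse a K-list table (capture-history tuple -> count) by
    omitting list j; individuals seen only by list j drop out."""
    reduced = {}
    for k, cnt in table.items():
        key = k[:j] + k[j + 1:]
        if any(key):                       # discard the all-zero history
            reduced[key] = reduced.get(key, 0) + cnt
    return reduced

tab = {(1, 0, 0): 54, (0, 1, 0): 463, (1, 1, 0): 15,
       (0, 0, 1): 316, (1, 0, 1): 19}
reduced = leave_one_out(tab, 2)  # {(1, 0): 73, (0, 1): 463, (1, 1): 15}
```

Note how the 316 individuals seen only by the omitted third list vanish from the reduced table, which is why each leave-one-out analysis observes fewer individuals.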
**Combining lists approach.** In some cases, we may wish to combine two lists into a single list prior to analysing the contingency table. We focus on the particular case where we wish to do this because of the limited (or even absent) overlap between two (or more) of the sources used within the analysis. The new list then corresponds to individuals observed by source $A$, say, or source $B$ (or both). When no individuals are observed by both of these sources, the interaction between them is not estimable (shown for a saturated model by Sharifi Far (2017)); combining the two sources automatically removes this identifiability issue, as the parameter is no longer present. Further, unlike the leave-one-out approach, this approach does not reduce the number of individuals observed within the revised contingency table; however, the number of lists is reduced by one. Once again, the estimates of total population size can be compared between the original all-list data and the reduced (combined-list) contingency table. For the simulation study, the estimate can also be compared with the (known) true population size from which the data are simulated.
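Correspondingly, combining two lists replaces their two capture indicators by their logical OR, leaving the total number of observed individuals unchanged; a sketch (our own illustrative code, hypothetical counts):

```python
def combine_lists(table, a, b):
    """Merge lists a and b (0-indexed) into one: the combined indicator
    is 1 if either original list observed the individual."""
    a, b = sorted((a, b))
    merged = {}
    for k, cnt in table.items():
        key = list(k)
        key[a] = max(k[a], k[b])   # logical OR of the two indicators
        del key[b]
        key = tuple(key)
        merged[key] = merged.get(key, 0) + cnt
    return merged

tab = {(1, 0, 0): 54, (0, 1, 0): 463, (1, 1, 0): 15, (0, 0, 1): 316}
combined = combine_lists(tab, 0, 1)
# {(1, 0): 532, (0, 1): 316} -- same 848 individuals, one fewer list
```

Unlike `leave_one_out`, the total count is preserved: the 54 + 463 + 15 individuals seen by either of the first two lists all land in the combined list's cell.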
**Case Studies**
We consider two case studies relating to data from the UK and Romania, both with five sources. Both of these cases have minimal overlap between some of the sources. For the Romanian data, one of the lists is dominant and contains the majority of the observations.
**UK Data**
We consider the data presented by Silverman (2014) relating to modern slavery in the UK. The data contain five different sources corresponding to: Local Authority (LA); Non-Government organisations (NG); Police Force and/or National Crime Agency (PF); Government Organisations (GO); and the General Public (GP). For further information, including discussion of combining the police force and National Crime Agency as a single list, see Silverman (2014). The data are presented in Table 1. We note that there is no overlap between the lists LA and GP, that is, no individuals are recorded by both of these sources, and, in general, there is very little overlap between GP and the other remaining lists. Given these data, it can be shown that the interaction between LA and GP (and all higher order interactions) cannot be estimated (Sharifi Far, 2017). In our analyses, due to the sparsity of the contingency table, we restrict the interactions to only two-way interactions between lists. When modelling the five lists, all the two-way interactions, except the LA and GP interaction, are estimable.
Table 1. UK Modern Slavery Data of Non-Zero Contingency Table Cell Entries.
| Lists | Count | Lists | Count |
|-------|-------|-------|-------|
| LA only | 54 | NG × GO | 19 |
| NG only | 463 | NG × GP | 1 |
| PF only | 995 | PF × GO | 76 |
| GO only | 695 | PF × GP | 11 |
| GP only | 316 | GO × GP | 8 |
| LA × NG | 15 | LA × NG × PF | 1 |
| LA × PF | 19 | LA × NG × GO | 1 |
| LA × GO | 3 | NG × PF × GO | 4 |
| NG × PF | 62 | PF × GO × GP | 1 |
Note. The five lists are: LA = local authority; PF = police force and/or National Crime Agency; GO = government organisation; NG = non-government organisation; GP = general public.
**Full data analysis.** We initially analyze the full five-list dataset, in order to assess the robustness of the total population estimate when investigating the two issues of (i) removing each list in turn; and (ii) combining GP with each of the remaining lists in turn. We use a model search algorithm with the AIC statistic to compare competing models, restricting the set of models to those with two-way interactions and omitting the LA × GP interaction. The model identified as optimal has the following six two-way interactions, with associated direction (+ve = positive interaction and −ve = negative interaction): LA × NG (+ve); LA × PF (+ve); NG × GO (−ve); NG × GP (−ve); PF × GP (−ve); GO × GP (−ve). All identified interactions involving GO or GP are negative (so being identified by one of these sources lowers the chance of being observed by the other source in the interaction), whereas interactions involving only the lists LA, NG, and PF are positive. Given this model, the corresponding MLE of the total population size is 11,313 with 95% CI (9,750, 12,876).
**Omitting lists: “Leave-one-out”.** We consider the influence of each list on the estimate of the population size by omitting each list in turn. The estimates and 95% CIs are presented in Table 2, along with the sign of the included interaction terms. The population size estimates are highly variable across the different omitted lists. Identifying structured patterns within the output is non-trivial: omitting lists masks patterns in the cell entries (for example, a previous overlap between two lists becomes an observation in a single list when one of the sources is left out), and different models (and interactions) will be identified given these changes. In all cases where interactions are chosen in both the full five-list and reduced four-list analyses, the direction of the interactions remains consistent (except for the NG × GO interaction, which is positive when PF is removed). In every instance, the reduced dataset includes interactions in common with those identified for the full five-list dataset. When omitting lists LA, NG, and PF (which exhibit positive interactions between them in the full analysis), the reduced datasets lead to a different set of interactions from the full dataset (but with some common interactions). The comparison when removing lists GO and GP is more straightforward: the model identified is simply the reduced model from the five-list analysis, omitting the interaction terms associated with the omitted list. For these latter two cases, the estimate of the population size is similar to the estimate from the five-list analysis. List PF has the largest number of observations; removing this list provides an estimate whose 95% CI does not include the population estimate obtained in the five-list analysis.

Table 2. MLEs and Associated 95% CIs for the Total Population Size for the UK Data, and Corresponding Model Selected in Terms of Interaction Terms Present with Associated Estimated Sign of the Interaction.

| Omitted list | Population estimate | 95% confidence interval | Model |
|--------------|---------------------|-------------------------|-------|
| — | 11,313 | (9,750, 12,876) | LA × NG (+ve); LA × PF (+ve); NG × GO (−ve); NG × GP (−ve); PF × GP (−ve); GO × GP (−ve) |
| LA | 18,945 | (11,740, 26,150) | NG × PF (+ve); NG × GP (−ve); PF × GO (+ve); PF × GP (−ve); GO × GP (−ve) |
| NG | 31,118 | (18,893, 43,343) | LA × PF (+ve); PF × GO (+ve) |
| PF | 32,042 | (13,781, 50,304) | LA × NG (+ve); NG × GO (+ve); NG × GP (−ve) |
| GO | 10,202 | (8,061, 12,343) | LA × NG (+ve); LA × PF (+ve); NG × GP (−ve); PF × GP (−ve) |
| GP | 11,015 | (9,447, 12,583) | LA × NG (+ve); LA × PF (+ve); NG × GO (−ve) |

Note. The first row (denoted by a “—”) gives the results of the complete five-list analysis; the remaining rows are the results of omitting each list in turn.
**Combining lists.** The GP list has very little overlap with the other lists and no overlap with LA. We therefore combine this list with each of the other lists in turn and estimate the total population size from the reduced contingency table. The corresponding MLEs of the population size, 95% CIs and selected interaction terms are given in Table 3. To denote clearly which lists have been combined, we join the list names with a dot; for example, the combination of GP and LA is denoted by “GP.LA”.

Table 3. MLEs and 95% CIs of the Population Size for the UK Data Given the Model Selected, and Corresponding Model Selected in Terms of Interaction Terms Present (Estimated Sign).

| Combined lists | Population estimate | 95% confidence interval | Model |
|----------------|---------------------|-------------------------|-------|
| — | 11,313 | (9,750, 12,876) | LA × NG (+ve); LA × PF (+ve); NG × GO (−ve); NG × GP (−ve); PF × GP (−ve); GO × GP (−ve) |
| GP.LA | 16,071 | (12,661, 19,481) | GP.LA × GO (−ve); NG × PF (+ve); PF × GO (+ve) |
| GP.NG | 12,661 | (10,920, 14,403) | LA × GP.NG (+ve); LA × PF (+ve); GP.NG × GO (−ve) |
| GP.PF | 13,180 | (11,343, 15,017) | LA × NG (+ve); LA × GP.PF (+ve); NG × GO (−ve) |
| GP.GO | 14,394 | (11,862, 16,926) | LA × NG (+ve); LA × PF (+ve); NG × PF (+ve); NG × GP.GO (−ve) |

Note. The first row (denoted by a “—”) gives the results of the complete five-list analysis; the remaining rows are the results of combining list GP with each of the other lists. For the model description we denote the combined lists by the combined “dotted” abbreviations.

The largest deviation from the population size estimate of the five-list dataset is observed when LA and GP are combined. These two lists have no overlap, and their interactions with the other lists are in opposite directions, which appears to have resulted in some interactions cancelling each other out. For instance, in the five-list analysis GP × NG has a negative interaction whilst LA × NG has a positive interaction; once combined, GP.LA has no interaction with NG. This has further impact on the remaining interactions between the non-combined lists, with clear changes in the selected interaction terms. For the combinations of GP with NG and GO the interactions for the combined model appear more predictable: where the uncombined lists displayed interactions, the combined lists share those same interactions. The combination GP.PF lies somewhere between the above cases: the majority of interactions can be anticipated from the original interactions, but there are also some changes in the interactions of the uncombined lists. Overall, compared to the leave-one-out method, there appears to be less variability in the range of estimates.
**Romania Data**
We consider data collected for Romania in 2015. Five lists are included, corresponding to: Police/agency against trafficking in persons and border police (PF); International Organization for Migration (IM); Non-Governmental organisations (NG); Foreign Authorities (FA); and Other (OT). A total of 879 individuals are observed, with the majority of these obtained by list PF (a total of 806 individuals are observed by PF; of these, 758 are identified only by PF). Thus, PF dominates the other lists. IM observes a total of 48 individuals (one individual is unique to IM); NG observes 25 individuals (19 of these are observed by at least one other list); FA observes 72 individuals (all observed by at least one other list); and OT observes 66 individuals (34 of whom are observed only by OT).
**Full data analysis.** We conduct an analysis of the full five-list dataset. We restrict the model search to models including two-way interactions, and use the AIC statistic to determine the interactions present. The model selected as “best” has interactions: \( \text{PF} \times \text{IM} \) (−ve); \( \text{PF} \times \text{NG} \) (−ve); \( \text{PF} \times \text{FA} \) (+ve); \( \text{PF} \times \text{OT} \) (−ve); \( \text{IM} \times \text{FA} \) (+ve); \( \text{NG} \times \text{FA} \) (+ve); \( \text{NG} \times \text{OT} \) (−ve); \( \text{FA} \times \text{OT} \) (+ve). The associated estimate of the population size is 921, with 95% CI (879*, 993); we truncate the lower limit of the 95% CI to the observed number of individuals (indicated by *). We use this estimate as a baseline to investigate the impact of, first, removing each of the lists in turn and, second, combining PF with each of the other lists in turn (PF being chosen since it has the smallest percentage overlap with each of the other lists).
**Omitting lists: “Leave-one-out”.** The population size estimates, 95% CIs and selected model when each list is omitted in turn are given in Table 4. Removing the dominant list PF (for which 86% of the individuals on the list are seen on this list only) leads to a substantial decrease in the estimate of the total population. This is unsurprising given the dominance of this list in observing individuals: this source alone records 74% of all individuals observed, and 68% of all individuals observed are observed only by this list. Omitting the other lists leads to estimates similar to that obtained using all five lists, although we note that removing the OT list leads to a larger and highly imprecise estimate of the population size. In line with the observations from the UK data, there is general agreement across the different omissions in the interactions identified: where an interaction is identified, its sign remains consistent whenever it is detected. On removing lists that have a negative interaction with the dominant list PF (i.e., IM, NG, and OT), the interaction terms identified are typically those identified by the five-list analysis with those featuring the omitted list removed. For list FA, which originally displayed a positive interaction with the dominant list PF and contains no unique individuals, the selection of interactions amongst the remaining lists is somewhat different.

Table 4. Results for the Romanian Data in Terms of the MLEs and Associated 95% CIs for the Total Population Size Given the Model Selected, and Corresponding Model Selected in Terms of Interaction Terms Present with Associated Estimated Sign of the Interaction.

| Omitted lists | Population estimate | 95% confidence interval | Model |
|---------------|---------------------|-------------------------|-------|
| — | 921 | (879*, 993) | PF × IM (−ve); PF × NG (−ve); PF × FA (+ve); PF × OT (−ve); IM × FA (+ve); NG × FA (+ve); NG × OT (−ve); FA × OT (+ve) |
| PF | 258 | (142, 374) | IM × FA (+ve); IM × OT (+ve); NG × FA (+ve) |
| IM | 971 | (742, 1,200) | PF × NG (−ve); PF × FA (+ve); PF × OT (−ve); NG × FA (+ve); NG × OT (−ve); FA × OT (+ve) |
| NG | 923 | (842, 1,005) | PF × IM (−ve); PF × FA (+ve); PF × OT (−ve); IM × FA (+ve); FA × OT (+ve) |
| FA | 1,035 | (895, 1,175) | PF × NG (−ve); PF × OT (−ve); IM × NG (+ve); IM × OT (+ve) |
| OT | 2,915 | (845*, 5,638) | PF × IM (−ve); PF × FA (+ve); IM × FA (+ve); NG × FA (+ve) |

Note. The first row (denoted by a “—”) gives the results of the complete five-list analysis; the remaining rows are the results of omitting each list in turn. When the lower bound of the confidence interval was truncated to the number of observed individuals, it is indicated by *.
**Combining lists.** For the Romanian data, PF has minimal overlap with the other lists: only 48 individuals observed by list PF are observed by another list, corresponding to only 6% of the individuals observed by PF. We investigate the effect of combining PF with each of the other lists. Whilst this approach is similar to that used for the UK data (combining with a minimally overlapping list), here there is a structural difference in that the list also accounts for the majority of observations. The corresponding results are given in Table 5. The estimates obtained in each of the combined-list analyses are reasonably consistent, with substantially overlapping 95% CIs (compared with the estimate using all five lists). The largest discrepancy arises when combining list PF with list FA. This is potentially due to the complex relationship between these two lists: of the eight interactions identified in the five-list analysis, seven feature PF, FA, or both. Once again, the interactions identified (and their signs) remain fairly consistent across analyses. As for the UK data, combining lists leads to less variable estimates of the population size than omitting lists.

Table 5. Results for the Romanian Data in Terms of the MLEs and Associated 95% CIs for the Total Population Size Given the Model Selected, and Corresponding Model Selected in Terms of Interaction Terms Present with Associated Estimated Sign of the Interaction.

| Combined lists | Population estimate | 95% confidence interval | Model |
|----------------|---------------------|-------------------------|-------|
| — | 921 | (879*, 993) | PF × IM (−ve); PF × NG (−ve); PF × FA (+ve); PF × OT (−ve); IM × FA (+ve); NG × FA (+ve); NG × OT (−ve); FA × OT (+ve) |
| PF. IM | 1,087 | (879*, 1,400) | PF. IM × NG (−ve); PF. IM × FA (+ve); PF. IM × OT (−ve); NG × FA (+ve); FA × OT (+ve) |
| PF. NG | 904 | (879*, 1,647) | PF. NG × IM (−ve); PF. NG × FA (+ve); PF. NG × OT (−ve); IM × FA (+ve); FA × OT (+ve) |
| PF. FA | 1,679 | (912, 2,446) | PF. FA × IM (+ve); PF. FA × OT (−ve); IM × OT (+ve); IM × NG (+ve) |
| PF. OT | 1,139 | (879*, 1,585) | PF. OT × NG (−ve); PF. OT × FA (+ve); IM × FA (+ve); NG × FA (+ve) |

Note. The first row (denoted by “—”) gives the results of the complete five-list analysis; the remaining rows are the results of combining list PF with each of the other lists. For the model description we denote the combined lists by the combined “dotted” abbreviations. When the lower bound of the confidence interval was truncated to the number of observed individuals, it is indicated by *.
The case studies suggest that analyses should be conducted with some caution in the presence of minimally overlapping sources. In particular, omitting sources with limited overlap can lead to different behaviours in the estimate of the population size. Alternatively, combining a list with limited overlap with another list appears to provide less variable estimates. Thus, how we deal with such sources can have a significant impact on the population size estimate, and some sensitivity analyses should be conducted. To investigate the impact further where the observed contingency tables are more “controlled”, we conduct a simulation study, motivated by the larger UK dataset.
**Simulation Study**
The simulation study is motivated by the UK dataset with five sources, which represents a common structure among victims-of-slavery sources, in particular the presence of two sources with no individuals observed in common (sources LA and GP). We use the model fitted to the full five-list data (so that there are six interaction terms with non-zero effects) as the generating model within the simulation study, and use the same list names for simplicity. We set the true population size equal to 11,313 and generate 500 datasets from the given (conditional multinomial) model; only the cell count corresponding to cell $\mathbf{k} = \{0, 0, 0, 0, 0\}$ is unknown. For each simulated dataset, we repeat the model search algorithm to identify the model deemed optimal using the AIC statistic, and estimate the associated population size and 95% CI. We then remove each list in turn and repeat the analysis, before combining list GP (which has the smallest expected overlap) with each of the other lists and again repeating the model-fitting process. Finally, to assess the impact of the model selection process, we also fit the generating model, or an alternative form of the model when a list is omitted or lists are combined: when omitting a list, the alternative model corresponds to the generating model with all interactions involving the omitted list removed; for combined lists, the alternative model includes all possible two-way interactions (six in total). Note that we only use the simulated datasets for which we do not observe any potential identifiability problems, to avoid possible confounding errors entering the simulation study; thus, 30% of the simulated datasets are used when removing lists, and 55% when combining lists. For further discussion on identifiability, see, for example, Vincent et al. (forthcoming).
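The data-generating step can be sketched as follows, using hypothetical expected cell counts of our own rather than the fitted UK model: the true unobserved-cell mean is appended so that cell probabilities sum to one, each dataset is a multinomial draw of size $N$ over all cells, and the all-zero cell is then discarded before analysis:

```python
import numpy as np

rng = np.random.default_rng(2023)

N_true = 1000
mu_obs = np.array([70., 120., 30., 280., 70., 120., 30.])  # observable cells
mu_unobs = 280.0                                           # all-zero cell

# Cell probabilities for the full (complete) table
p = np.append(mu_obs, mu_unobs)
p /= p.sum()

# One simulated dataset: allocate N_true individuals to cells, then
# drop the final (unobservable) cell count before analysis.
counts = rng.multinomial(N_true, p)
observed = counts[:-1]
```

Repeating this 500 times and re-running the model search on each `observed` vector gives the sampling distributions summarised in the plots below.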
**Omitting Lists: “Leave-one-out”**
For each simulated dataset we calculate the ratio of the population estimate omitting the given source to both the estimated total using all five lists and the true population size (11,313). We plot these ratios against two further statistics: (i) the proportion of the total number of observed individuals that is observed by the omitted source; and (ii) the proportion of overlap for the omitted list (i.e., the proportion of individuals observed by that list who are also observed by at least one other list). These results are plotted in Figure 1, where the left-hand plots, (a) and (c), show the ratio relative to the five-list estimate plotted against (i) and (ii), respectively; and the right-hand plots, (b) and (d), show the ratio relative to the true simulated population size plotted against (i) and (ii), respectively. The black dots show the same quantities for the original UK data.
Figure 1. Ratio of estimated total population size using only four of the lists to the estimate obtained using all five lists plotted against proportion of individuals observed by omitted list (a) or proportion of overlap of omitted list (c); and similar plot for the ratio of estimated total population size against true simulated value plotted against proportion of individuals observed by omitted list (b) or proportion of overlap of omitted list (d). The black dots show the same quantities for the original UK data.
The relationships observed in the plots are similar whether the ratio is taken relative to the true population size or to the estimated population size using all five lists (i.e., the columns in the figure are similar), although the variability appears slightly greater when using the true population size. In general, the greater the proportion of individuals observed by a given list, the greater the variability in the estimate of the population size when that list is omitted. Further, within this simulation study the variability of the estimates appears to depend more on the number of individuals observed by the omitted list than on the proportion of overlap for that list: this is demonstrated by the relatively similar estimates for LA and GP, which observe the smallest numbers of individuals but have very different overlap patterns. Finally, we comment that there does not appear to be any systematic over- or under-estimation of the population size when omitting any particular list. However, we note that underestimates have a lower bound (the total number of individuals observed by the sources), whereas overestimates have no such bound, and thus overestimates may be larger in magnitude.
To investigate further the performance of the estimates, we consider the 95% CIs of the estimated population sizes and compare these with the true simulated population size. When considering all five lists, 69% of the 95% CIs contained the true value of the parameter. This is less than the nominal 95% level and perhaps indicates further potential issues (e.g., relating to model selection; see below for further discussion). However, we are primarily concerned with the impact of omitting each list, and thus we use this 69% as the baseline when we subsequently omit each list. The coverage probabilities correspond to 69%, 63%, 41%, 58%, and 63% when removing GP, GO, PF, NG, and LA, respectively. Further, the median length of the 95% CIs for the models with five lists is 3,341; after removing GP, GO, PF, NG, and LA the median lengths are 3,451, 5,076, 12,058, 4,277, and 3,402, respectively. Thus, omitting the list GP leads to performance very similar to that of the full five lists (in terms of coverage and precision of the estimate), suggesting that the additional information this list provides is minimal. Omitting lists GO, LA, and NG leads to a relatively similar reduction in performance in terms of reduced coverage probabilities and precision. However, omitting list PF leads to a significant decrease in performance; this list is also the one that observes the greatest number of individuals.
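The coverage and precision summaries above can be reproduced from a set of simulated confidence intervals with a few lines of code. The sketch below is illustrative only (the helper name `ci_performance` and the interval values are invented, not the study's actual simulation output):

```python
import numpy as np

def ci_performance(lower, upper, true_n):
    """Coverage probability and median length of a set of 95% CIs.

    lower, upper: arrays of CI bounds, one pair per simulated dataset.
    true_n: the true simulated population size.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    covered = (lower <= true_n) & (true_n <= upper)  # does each CI contain the truth?
    coverage = covered.mean()                        # proportion of CIs covering true_n
    median_length = float(np.median(upper - lower))  # summary of precision
    return coverage, median_length

# Illustrative (made-up) intervals around a true size of 11,313:
cov, med = ci_performance([10000, 9500, 11500], [12000, 11000, 14000], 11313)
```

Comparing such coverage/length pairs across the five-list and four-list analyses is exactly the comparison reported in the text.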
Finally, we consider the impact of the model selection process by instead fitting the generating model, or the alternative form of this model, directly. For the generating and alternative models applied to the reduced four-list data, the coverage probabilities were significantly higher: 97% when using the five lists, and 97%, 93%, 100%, 96%, and 91% when removing GP, GO, PF, NG, and LA, respectively. Further, the median lengths of the 95% CIs increased to 14,483 with five lists, and to 15,049, 22,105, 55,400, 31,362, and 20,781 after removing GP, GO, PF, NG, and LA, respectively. Thus model selection has a significant impact on the performance of the MSE approach; we return to this issue in the discussion section.
**Combining Lists**
For each simulated dataset, the list GP is combined with each of the other four lists in turn and the associated total population size is estimated. Figure 2 provides the corresponding plots of the ratio of the estimated population size
Figure 2. Ratio of the estimated population size using four lists, with GP combined with each source in turn, to (i) the estimated population size using all five sources (left-hand side) and (ii) the true population size used to simulate the data (right-hand side), plotted against the percentage overlap of the list GP with the list it is combined with. The black dots show the same quantities for the original UK data.
using the combined lists to the estimated total using all five lists (in the left-hand plots) and to the true population size used to simulate the data (in the right-hand plots), plotted against the percentage of overlap of the source GP with the list it is combined with. The black dots show the same quantities for the original UK data. As for the above case of omitting lists, there is greater variability in the ratio of the estimated population size to the true value than in the ratio to the estimated value using all five lists. Interestingly, within this simulation study there appears to be a clear and consistent overestimate of the total population size when we combine the GP list with the LA list, for which no overlap was observed in the real UK data. However, combining the list GP with the other lists (GO, PF, and NG) appears to provide less biased estimates of the total population size and a reduced level of variability in the ratios. There also appears to be a slight decrease in the variability of the estimated ratio as the proportion of overlap of the combined lists increases.
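Operationally, combining two lists replaces their two capture indicators with a single indicator recording whether an individual was observed by either list. A minimal sketch, assuming capture histories are stored as 0/1 tuples mapped to cell counts (the toy three-list counts are invented for illustration):

```python
from collections import Counter

def combine_lists(histories, i, j):
    """Combine lists i and j of each capture history via logical OR.

    histories: Counter mapping 0/1 tuples (one entry per list) to cell counts.
    Returns a new Counter over histories with one fewer list.
    """
    combined = Counter()
    for hist, count in histories.items():
        merged = max(hist[i], hist[j])  # observed by either of the two lists
        reduced = tuple(h for k, h in enumerate(hist) if k not in (i, j))
        combined[reduced + (merged,)] += count  # combined indicator appended last
    return combined

# Toy three-list example: counts for each observable capture pattern.
obs = Counter({(1, 0, 0): 40, (0, 1, 0): 25, (1, 1, 0): 10, (0, 0, 1): 5, (1, 0, 1): 2})
merged = combine_lists(obs, 0, 1)  # combine the first two lists
```

Note that the total number of observed individuals is unchanged by the operation; only the cross-classification is coarsened.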
For the set of retained datasets, the 95% CIs for the estimated population size using all five lists include the true value of the population size in 67% of the simulated datasets, with a median length of 3,281. After combining GP with LA, NG, PF, and GO, this coverage probability is reduced to 23%, 51%, 33%, and 54%, respectively, with median 95% CI lengths of 7,480, 4,033, 3,814, and 4,058, respectively. Thus combining GP with each of the other sources leads to substantially worse performance in terms of coverage probabilities, particularly for LA and PF. With regard to LA (which has very small overlap across the simulations), combining the GP list with the LA list not only leads to poor estimation of the total population size (i.e., a general overestimate and substantially reduced confidence interval performance) but also to relatively large uncertainty in the estimate. Finally, to provide some insight into the impact of model selection within the analyses, we again consider the generating and associated alternative models. In these cases the coverage probabilities are significantly increased: to 90% (for the generating model) when using the five lists, and to 92%, 97%, 94%, and 93% after combining GP with LA, NG, PF, and GO, respectively, for the reduced model. The corresponding median lengths of the 95% CIs also increase: to 13,310 when using all five lists, and to 24,055, 15,702, 15,868, and 15,183 after combining GP with LA, NG, PF, and GO, respectively. This observation is similar to that for the case of omitting lists, but without as large an increase in the size and variability of the lengths of the CIs.
The simulation studies suggest that the population size estimates can be sensitive to a number of different factors, including the number of sources included in the analysis and how a single source is defined (i.e., whether sources are combined). In general, assuming that we fit the generating model, or the alternative version of this model when we omit or combine a list, the corresponding population size estimates appear to be reasonable, with generally good coverage probabilities. However, when the associated model search algorithm is added (using the AIC statistic as the criterion) the performance drops significantly, and the procedure also appears to overestimate the precision of the resulting estimates.
**Discussion**
Collecting data from the different (and potentially diverse) sources and collating the different lists requires resources, which may be limited, for example, in terms of person time or money. Thus, understanding the importance of different lists can have a direct impact on future data collection and the allocation of resources. Questions may particularly be raised in relation to sources that, for example, observe only a relatively small number of individuals, or that have minimal overlap with other sources, since MSE relies on overlap between sources in order to estimate the total population size. This latter situation is very common in modern slavery applications; in this paper we considered the robustness of MSE in the case of small overlap between sources.
To address minimal overlap between sources, two approaches can be adopted: remove a source, or combine a source with another. The latter step may be taken prior to any analysis being conducted, not only where there is minimal overlap but also in the opposite case where the overlap is substantial, as with the UK data where the police force data were combined with the National Crime Agency data (Silverman, 2014). The analyses conducted within this paper suggest a note of caution with regard to the application of MSE to modern slavery data. In particular, changes to the lists (omitting a list or combining two lists) can have a significant impact on the total population size estimate, although combining lists appeared to have a lesser effect than simply omitting a list. Overall, the model selection algorithm implemented, and in particular the use of the AIC statistic commonly used within MSE approaches (see Coumans et al., 2017; Silverman, 2014; Van der Heijden et al., 2012), had a significant effect on the performance of the MSE. It is possible that several competing models may be regarded as fitting the observed data equally well and yet have very different estimates for the population size. These observations lead us to make the following minimum recommendations when implementing an MSE approach:
1. Fit multiple models to the data to investigate the sensitivity of the estimates to the different models—this would particularly include “similar” (i.e., neighbouring) models;
2. Investigate the robustness of the estimate by omitting each source in turn and repeating the analysis;
3. Combine pairs of sources together and again investigate the robustness of the parameter estimates; and
4. Conduct a simulation study to gain an understanding of the performance of the analyses (e.g., using the MLEs of the fitted model as the generating model, as for the simulation study conducted within this paper based on the UK data).
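Recommendation 1 can be supported by a small helper that ranks candidate models by AIC (Akaike, 1974). The sketch below is an illustrative assumption, not the paper's code: it takes the parameter count and maximized log-likelihood of each already-fitted log-linear model (invented values shown) and flags how close the competitors are.

```python
def aic_table(models):
    """Rank candidate log-linear models by AIC = 2k - 2*loglik.

    models: dict mapping model name -> (n_params, maximized_loglik).
    Returns a list of (name, aic, delta_aic) sorted from best to worst;
    models with delta_aic < 2 are conventionally treated as competitive.
    """
    aics = {name: 2 * k - 2 * ll for name, (k, ll) in models.items()}
    best = min(aics.values())
    ranked = sorted(aics.items(), key=lambda kv: kv[1])
    return [(name, aic, aic - best) for name, aic in ranked]

# Invented parameter counts and log-likelihoods for two candidate models:
ranked = aic_table({"independence": (5, -120.0), "pairwise": (8, -115.0)})
```

Inspecting all models with a small delta_aic, rather than only the minimizer, is one way to implement the "neighbouring models" check in recommendation 1.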
The above recommendations aim to provide a greater understanding of the particular dataset and analysis. If similar estimates are obtained under the different scenarios, there is some reassurance that the approach is robust. However, deviations may indicate some particularly interesting aspect of the data. For example, in our case omitting PF from the Romanian data led to a significant decrease in the population estimate; on inspection this was most likely due to the large number of (unique) individuals observed by this source. This in turn may be investigated further, for example to understand why so many individuals are observed only by PF.
The simulation study suggests that the use of the AIC statistic as the model selection criterion may not be optimal, leading to poor coverage of the true population size and over-confidence in the estimate. Alternative criteria exist, such as the Bayesian information criterion (BIC; Schwarz, 1978) and the focused information criterion (FIC; Claeskens & Hjort, 2003). These different criteria could also be investigated within the exploratory analyses and added to the list of recommendations above. Further, with regard to model selection, an additional approach to consider is model averaging, which removes the reliance on a single model. A weighted average over the set of plausible models can be calculated, so that the population size estimate incorporates both parameter *and* model uncertainty. See, for example, Buckland et al. (1997) in the classical framework and Hoeting et al. (1999), King and Brooks (2001), and Madigan and York (1997) in the Bayesian framework. If the set of plausible models all provide similar estimates of the total population size, then so too will the model-averaged estimate; however, if the estimates differ between models, the model-averaged approach will provide a weighted point estimate but will typically have an associated, significantly larger, uncertainty interval to convey this additional uncertainty. In this latter circumstance it is useful to provide not only the single model-averaged estimate but also the set of most likely models and their associated estimates.
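In the classical framework, one common way to implement such model averaging is via Akaike weights (Buckland et al., 1997). A minimal sketch, with invented AIC values and per-model population estimates purely for illustration:

```python
import math

def akaike_weights(aics):
    """Akaike weights w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]  # relative likelihoods of models
    total = sum(rel)
    return [r / total for r in rel]

def model_averaged_estimate(estimates, aics):
    """AIC-weighted average of per-model population size estimates."""
    return sum(w * n for w, n in zip(akaike_weights(aics), estimates))

# Two hypothetical models: AICs 100 and 102, population estimates 10,000 and 12,000.
w = akaike_weights([100.0, 102.0])
est = model_averaged_estimate([10000.0, 12000.0], [100.0, 102.0])
```

The point estimate lands between the per-model estimates, weighted towards the better-supported model; the associated model-averaged uncertainty interval (not sketched here) would additionally reflect between-model disagreement.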
Another issue that we have not considered within this data analysis, but which may arise, relates to cross-referrals, where one or more lists may refer individuals to other agencies but not vice versa, leading to asymmetry. For example, cross-referrals by another list to the police may almost always be made when a child is, or has been, at risk. Cross-referral is also more likely when there is the prospect that an intelligence-led police raid could lead to the rescue of a clutch of other victims of modern slavery (Bird, 2019). Reports on MSE estimation of modern slavery for the United Nations Office on Drugs and Crime, such as those for Serbia and Ireland (23 Romanian men exploited in a waste recycling plant), mention the context of annual counts of rescued victims being inflated by a particularly successful police operation. More generally, we acknowledge that MSE needs to evolve to take into account the underlying networks by which victims came to be listed. For example, a rescued victim may provide information leading to the rescue of other individuals, so that individuals are not independent of each other.
Hence, in addition to list-membership and (selective) cross-referrals, consideration may also need to be given to the size and context of the rescued victim-network that selective cross-referral gives rise to. The current presentation of the data, simply in terms of the presence of individuals on different lists, discards the temporal information, so that it is not possible to take into account (or estimate) referrals between lists, or possible relationships between identifications. Worthington et al. (2019) discuss similarities with ecological capture-recapture data where such temporal information is available, which could provide insight and motivation for extended MSE models if such temporal information becomes available for modern slavery data. The challenges of modern slavery motivate further developments of MSE to incorporate the above complexities of the different processes acting on, and between, the different lists used to identify victims.
**Declaration of Conflicting Interests**
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
**Funding**
The author(s) received no financial support for the research, authorship, and/or publication of this article.
**ORCID iDs**
Serveh Sharifi Far [ID](https://orcid.org/0000-0001-8403-6286)
Sheila Bird [ID](https://orcid.org/0000-0001-6378-8382)
Hannah Worthington [ID](https://orcid.org/0000-0001-5452-3032)
**References**
Akaike, H. (1974). A new look at the statistical model identification. *IEEE Transactions on Automatic Control*, 19(6), 716–723.
Buckland, S. T., Burnham, K. P., & Augustin, N. H. (1997). Model selection: An integral part of inference. *Biometrics*, 53(2), 603–618.
Bird, S. M., & King, R. (2018). Multiple systems estimation (or capture-recapture estimation) to inform public policy. *Annual Review of Statistics and its Application*, 5(1), 95–118.
Bird, S. M. (2019). Public health perspective on UK-identified victims of modern slavery. Submitted to *Crime & Delinquency*.
Chan, L., Silverman, B., & Vincent, K. (2020). Multiple systems estimation for sparse capture data: Inferential challenges when there are non-overlapping lists. *Journal of the American Statistical Association*. Advance online publication. https://doi.org/10.1080/01621459.2019.1708748.
Claeskens, G., & Hjort, N. L. (2003). The focused information criterion (with discussion). *Journal of the American Statistical Association, 98*, 879–899.
Cormack, R. M., Chang, Y.-F., & Smith, G. S. (2000). Estimating deaths from industrial injury by capture-recapture: A cautionary tale. *International Journal of Epidemiology, 29*(6), 1053–1059.
Cruyff, M., van Dijk, J., & van der Heijden, P. G. M. (2017). The challenge of counting victims of human trafficking: Not on the record: A multiple systems estimation of the numbers of human trafficking victims in the Netherlands in 2010-2015 by year, age, gender, and type of exploitation. *Chance, 30*(3), 41–49.
Coumans, A. M., Cruyff, M., Van der Heijden, P. G. M., Wolf, J., & Schmeets, H. (2017). Estimating homelessness in the Netherlands using a capture-recapture approach. *Social Indicators Research, 130*(1), 189–212.
Davison, A. C. (2003). *Statistical models*. Cambridge University Press.
Fienberg, S. E. (1972). The multiple recapture census for closed populations and incomplete 2^k contingency tables. *Biometrika, 59*(3), 591–603.
Fienberg, S. E., & Rinaldo, A. (2012). Maximum likelihood estimation in log-linear models. *Annals of Statistics, 40*(2), 996–1023.
Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. (1999). Bayesian model averaging: A tutorial. *Statistical Science, 14*, 382–401.
Home Office. (2018). *2018 UK annual report on modern slavery*. https://www.gov.uk/government/publications/2018-uk-annual-report-on-modern-slavery
Jewell, N. P., Spagat, M., & Jewell, B. L. (2013). Multiple systems estimation and casualty counts: Assumptions, interpretations and challenges. In T. Seybolt, J. Aronson, & B. Fischoff (Eds.), *Counting civilian casualties: An introduction to recording and estimating nonmilitary deaths in conflict* (pp. 185–211). Oxford University Press.
Jones, H. E., Hickman, M., Welton, N. J., De Angelis, D., Harris, R. J., & Ades, A. E. (2014). Recapture or precapture? Fallibility of standard capture-recapture methods in the presence of referrals between sources. *American Journal of Epidemiology, 179*(11), 1383–1393.
King, R., Bird, S. M., Brooks, S. P., Hutchinson, S. J., & Hay, G. (2005). Prior information in behavioural capture-recapture methods: Demographic influences on drug injectors’ propensity to be listed in data sources and their drugs-related mortality. *American Journal of Epidemiology, 162*(7), 694–703.
King, R., & Brooks, S. P. (2001). On the Bayesian analysis of population size. *Biometrika, 86*(3), 615–633.
Madigan, D., & York, J. C. (1997). Bayesian methods for estimation of the size of a closed population. *Biometrika, 84*, 19–31.
Schwarz, G. E. (1978). Estimating the dimension of a model. *Annals of Statistics, 6*(2), 461–464.
Sharifi Far, S. (2017). *Parameter redundancy in log-linear models* (PhD thesis). University of St Andrews.
Sharifi Far, S., Papathomas, M., & King, R. (2019). Parameter redundancy and the existence of maximum likelihood estimates in log-linear models. *Statistica Sinica*. Advance online publication. https://doi.org/10.5705/ss.202018.0100.
Silverman, B. (2014). *Modern slavery: An application of multiple systems estimation*. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/386841/Modern_Slavery_an_application_of_MSE_revised.pdf
Silverman, B. (2020). Model fitting in multiple systems analysis for the quantification of Modern Slavery: Classical and Bayesian approaches. *Journal of the Royal Statistical Society: Series A*, 183(3), 691–736.
Van der Heijden, P. G. M., Whittaker, J., Cruyff, M., Bakker, B., & Van der Vliet, R. (2012). People born in the Middle East but residing in the Netherlands: Invariant population size estimates and the role of active and passive covariates. *Annals of Applied Statistics*, 6(3), 831–852.
van Dijk, J. J., Cruyff, M., van der Heijden, P. G. M., & Kragten-Heerdink, S. L. J. (2017). Monitoring Target 16.2 of the United Nations sustainable development goals; a multiple systems estimation of the numbers of presumed human trafficking victims in the Netherlands in 2010-2015 by year, age, gender, form of exploitation and nationality. United Nations Office on Drugs and Crime. *Research Brief*. https://tinyurl.com/y9mpkach
Vincent, K., Sharifi Far, S., & Papathomas, M. (forthcoming). Common methodological challenges encountered with multiple systems estimation studies. Submitted to *Crime & Delinquency*.
Worthington, H., McCrea, R. M., King, R., & Vincent, K. (2019). How ideas from ecological capture-recapture models may improve multiple systems estimation analyses. Submitted to *Crime & Delinquency*.
**Author Biographies**
**Serveh Sharifi Far** is a university teacher in Statistics in the School of Mathematics at the University of Edinburgh. Her research interests include parameter redundancy, multiple systems estimation, and analysis of categorical data.
**Ruth King** is the Thomas Bayes Chair of Statistics in the School of Mathematics at the University of Edinburgh. Her research interests include statistical modelling, capture-recapture data, multiple systems estimation and missing data, applied to problems in ecology, epidemiology and healthcare.
**Sheila Bird** is an honorary professor at College of Medicine and Veterinary Medicine, University of Edinburgh and visiting scientist at MRC Biostatistics Unit, University of Cambridge, CB2 0SR. She is a biostatistician, formerly Programme Leader at MRC Biostatistics Unit, Cambridge.
**Antony Overstall** is an associate professor of Statistics in the Southampton Statistical Sciences Research Institute at the University of Southampton. His research interests are statistical modelling and computation.
**Hannah Worthington** is a lecturer in Statistics in the School of Mathematics and Statistics at the University of St Andrews. Her research interests include hidden Markov models applied to problems in ecology, capture-recapture data, incorporating individual heterogeneity and multi-state modelling.
**Nicholas Jewell** is chair of Biostatistics and Epidemiology at the London School of Hygiene and Tropical Medicine, after a long career as professor of Biostatistics and Statistics at the University of California, Berkeley. His research interests include statistical issues associated with infectious diseases and epidemiology, and counting challenges in human rights arenas.
Anti-inflammatory Properties of an Active Sesquiterpene Lactone and its Structure-Activity Relationship
Yang Hu¹, Fei Zhang¹, Chaofeng Zhang*¹ and Mian Zhang¹
¹State Key Laboratory of Natural Medicines, Research Department of Pharmacognosy, China Pharmaceutical University, Longmian Road 639, Nanjing 211198, PR China
²Jiangsu Simcere Pharmaceutical Group Ltd., Xuanwu Avenue No. 699-18, Nanjing 210042, PR China
Abstract
A sesquiterpenoid, 2α-hydroxyl-3β-angeloylcinnamolide (HAC), was isolated from the Chinese medicinal herb *Polygonum jucundum* Lindex. (Polygonaceae) and shows anti-inflammatory activity in vivo. In the present study, we investigated the anti-inflammatory effects of HAC on lipopolysaccharide (LPS)-induced murine RAW264.7 cells. We found that HAC dose-dependently decreased NO over-production, with an IC₅₀ value of 17.88 μM, but showed only weak inhibition of TNF-α release (IC₅₀ 98.66 μM). Meanwhile, eight novel derivatives modified at the C-2 position of HAC were synthesized to further explore the structure-activity relationships (SARs) underlying its anti-inflammatory effects. Compound PJH-1, an acetyl ester of HAC, showed stronger inhibition of the over-production of NO and TNF-α (IC₅₀ 7.31 and 3.38 μM, respectively). Furthermore, we demonstrated that HAC and PJH-1 attenuate the mitogen-activated protein kinase (MAPK) signaling pathways by blocking the phosphorylation of ERK, p38, and JNK/MAPK. We also found that PJH-1 is more stable than HAC in cell culture medium; these findings are useful for further in vitro studies of the molecular mechanism of HAC. In conclusion, our studies enhance the understanding of the anti-inflammatory activities of HAC and lead to the discovery of novel derivatives as potential anti-inflammatory agents.
Keywords: 2α-hydroxyl-3β-angeloylcinnamolide (HAC); Anti-inflammation effects; iNOS expression; Mitogen-activated protein kinases (MAPK); Chemical-structural modification.
Introduction
Inflammation is the first response of a tissue to injury and can be classified as acute or chronic. Chronic inflammation is persistent and causes progressive damage to the body [1]. Macrophages play a key role in the specific and non-specific immune responses during the inflammation process; after macrophages are activated by LPS, large amounts of cytokines and inflammatory mediators are released [2-4]. Among the many pro-inflammatory mediators, NO is a key one in inflammatory reactions [5]. It is a free radical produced from L-arginine by nitric oxide synthases (NOS) and is known to regulate various physiological functions in many tissues [6]; however, excessive NO has been implicated in various pathological processes. The inhibition of NO over-production has therefore been suggested as an important therapeutic approach for the treatment of inflammation [7,8]. Expression of iNOS in macrophages is regulated mainly at the level of transcription factor induction through the mitogen-activated protein kinases (MAPKs).
The aerial parts of *Polygonum jucundum* Lindex. (Polygonaceae) are used as a traditional Chinese herb for inhibiting inflammation, lowering serum cholesterol levels, and treating rheumatism [9-11]. In our previous study, a drimane-type sesquiterpenoid, 2α-hydroxyl-3β-angeloylcinnamolide (HAC), from *P. jucundum* was identified with anti-inflammatory effects by oral administration at doses of 50-200 mg/kg in mice, and a sensitive and rapid LC-MS method was developed to study its pharmacokinetics and distribution in rats [12-14]. To date, a number of natural sesquiterpenoids have been shown to significantly inhibit pro-inflammatory mediator production [15-17], and drimane-type sesquiterpenoids in particular have been identified with anti-inflammatory properties [18]. Therefore, in this study, we investigated the effects of HAC on the release of LPS-induced pro-inflammatory mediators and explored the molecular mechanism in terms of inflammatory signaling pathways. Meanwhile, eight novel derivatives modified at the C-2 position of HAC were synthesized (Scheme 1) to further explore the structure-activity relationships (SARs) of HAC in LPS-activated RAW264.7 cells. These studies also lead to a better understanding of the structure-activity relationship of the sesquiterpene lactone family and the discovery of novel derivatives as potential anti-inflammatory agents.
Materials and Method
Cell culture
The RAW 264.7 cell line was obtained from the Cell Bank of the Chinese Academy of Sciences, Shanghai, China. The cells were cultured in DMEM supplemented with 10% fetal bovine serum, 100 units/mL of penicillin, 100 μg/mL of streptomycin, 2 mM L-glutamine, and 1 mM nonessential amino acids, and incubated at 37°C in a humidified atmosphere containing 5% CO₂.
Anti-inflammatory effect on RAW264.7 cells
Cytotoxicity assay: Cell viability was assessed by the MTT staining method [23]. All samples were first dissolved in DMSO and then diluted with DMEM; the final concentration of DMSO in the tested samples was less than 0.1%. Briefly, cells at 1×10⁵ cells/mL were seeded into 96-well microplates and treated with the various samples at 100 μM for 24 h. The culture medium was removed and 100 μL/well of a 5 mg/mL solution of MTT in PBS buffer (10 mM Na₂HPO₄, 1 mM KH₂PO₄, 137 mM NaCl and 2.7 mM KCl, pH 7.4) was added to the cells, which were then incubated at 37°C for 4 h. The supernatant was removed and the colored metabolite was dissolved in DMSO (100 μL/well). Absorbance was measured at 570 nm with a microplate reader.
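The viability readout implied by the MTT assay is a ratio of blank-corrected absorbances relative to untreated control wells. A minimal sketch; the helper name and the OD570 readings are hypothetical, not data from this study:

```python
import numpy as np

def percent_viability(od_treated, od_control, od_blank=0.0):
    """Cell viability (%) from MTT absorbance at 570 nm,
    relative to the mean of untreated control wells after blank subtraction."""
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.mean(np.asarray(od_control, dtype=float) - od_blank)
    return 100.0 * treated / control

# Hypothetical OD570 readings for two treated wells vs. two control wells:
viab = percent_viability([0.60, 0.45], od_control=[0.75, 0.77], od_blank=0.05)
```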
*Corresponding author: Chaofeng Zhang, State Key Laboratory of Natural Medicines, Research Department of Pharmacognosy, China Pharmaceutical University, Longmian Road 639, Nanjing 211198, PR China; Tel/Fax: (86)-25-86185140; E-mail: email@example.com
Received July 01, 2015; Accepted August 11, 2015; Published August 14, 2015
Citation: Hu Y, Zhang F, Zhang C, Zhang M (2015) Anti-inflammatory Properties of an Active Sesquiterpene Lactone and its Structure-Activity Relationship. Med chem 5: 354-360. doi: 10.4172/2161-0444.1000286
Copyright: © 2015 Hu Y, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Nitrite and TNF-α measurement: Nitrite was measured by adding 50 μL of the Griess reagent to 50 μL of medium for 5 min [20]. The optical density at 540 nm (OD540) was measured with a microplate reader (Epoch, Bio-Tek, USA). Concentrations were calculated by comparison with the OD540 of a standard solution of sodium nitrite prepared in culture medium. The levels of TNF-α in the RAW264.7 cell culture medium were measured with ELISA assay kits according to the manufacturer’s instructions [21].
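The comparison against a sodium nitrite standard amounts to inverting a linear standard curve fitted to the OD540 readings. A sketch under stated assumptions: the function name and the standard-curve values are invented for illustration, not taken from the paper.

```python
import numpy as np

def nitrite_from_od(od_samples, od_standards, conc_standards):
    """Estimate nitrite concentration from OD540 via a linear fit
    (OD = slope * conc + intercept) to a sodium nitrite standard curve."""
    slope, intercept = np.polyfit(conc_standards, od_standards, 1)
    od = np.asarray(od_samples, dtype=float)
    return (od - intercept) / slope  # invert the fitted line

# Hypothetical standard curve (0, 50, 100 µM vs. OD540) and one sample reading:
conc = nitrite_from_od([0.30], od_standards=[0.05, 0.25, 0.45],
                       conc_standards=[0, 50, 100])
```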
Western blot analysis: After the indicated treatment, the cells were harvested and lysed immediately by sonication in cold PBS containing 1% phenylmethanesulfonyl fluoride (PMSF). The lysate was centrifuged at 12,000 rpm for 5 min, the supernatant was collected, and the total protein concentration was determined with a Bradford protein assay kit. After being dissolved in SDS/PAGE loading buffer and boiled for 3 min at 100°C, 25 μg of protein was resolved by SDS/PAGE and then electrotransferred onto a nitrocellulose membrane. The membrane was washed with Tris-buffered saline with Tween (TBST). Non-specific sites on the membrane were blocked by incubating the membrane in blocking solution containing 5% non-fat dry milk in TBST for 60 min. The membrane was washed and incubated in the respective diluted primary antibody at 4°C overnight. The membrane was then washed and incubated in HRP-conjugated secondary antibody solution for 1 h. The washed membrane was finally reacted with an enhanced chemiluminescence reagent (ECL, Beyotime) and exposed to Kodak scientific film to detect the immunoblots [22].
Structural determination of HAC and its derivatives
General (Chemical): Optical rotations were determined with a JASCO P-1030 polarimeter. Silica gel (Qingdao Haiyang Chemical Co. Ltd., Qingdao, China) was used for column chromatography. 1H and 13C NMR spectra: Bruker ACF-300 and 500 NMR spectrometers, chemical shifts δ in ppm relative to SiMe₄ as an internal standard (= 0 ppm), coupling constants J in Hz. HPLC: Agilent 1260 high performance liquid chromatograph.
Structural modification of HAC: PJH-1, PJH-2 and PJH-3 were prepared by condensation of HAC with the appropriate acid anhydride. Treatment of HAC under Jones oxidation conditions afforded PJH-5 in 78% yield; subsequent condensation of PJH-5 with NH₂OH gave PJH-7 as a sole product in low yield. Treatment of HAC with benzoyl chloride, sulfamoyl chloride, and methanesulfonic anhydride afforded PJH-4, PJH-6, and PJH-8, respectively. Their structures were determined by NMR spectral analysis.
2α-acetoxy-3β-angeloylcinnamolide (PJH-1, C₂₃H₃₀O₆): To a solution of HAC (200 mg, 0.574 mmol) in dichloromethane (20 mL) were added DMAP (60 mg) and acetic anhydride (0.54 mL, 1.7 mmol). The reaction mixture was stirred at room temperature for 3 h (TLC monitoring). The crude product was chromatographed on a silica gel column (petroleum ether:ethyl acetate = 1:1) to afford PJH-1 (83% yield) as white needle crystals. [α]D²⁰ = −0.73° (c 0.3, CH₂Cl₂). ¹H-NMR (CDCl₃, 500 MHz): δ 2.02 (1H, dd, J=4.5, 12.5 Hz, H-1β), 1.46 (1H, t, J=12.1, 12.1 Hz, H-1α), 5.16 (1H, m, H-2), 4.94 (1H, d, J=9.0 Hz, H-3), 1.62 (1H, q, J=5.5 Hz, H-5), 2.47 (1H, m, H-6α), 2.25 (1H, m, H-6β), 6.90 (1H, m, H-7), 2.90 (1H, m, H-9), 4.39 (1H, t, J=9.0 Hz, H-11α), 4.05 (1H, t, J=9.0 Hz, H-11β), 1.26 (3H, s, H-13), 1.08 (3H, s, H-14), 0.97 (3H,
66.7 (C-11), 16.5 (C-13), 27.7 (C-14), 13.6 (C-15); angelica acyl: 127.5 (C-2'), 139.1 (C-3'), 20.6 (2'-CH$_3$), 15.9 (3'-CH$_3$).
**2α-methylsulfonyl-3β-angeloylcinnamolide (PJH-8, C$_{14}$H$_{16}$O$_5$S):**
To a solution of HAC (150 mg) in 50 mL dichloromethane were added DMAP (63 mg) and methanesulfonic anhydride (82.5 mg). The reaction mixture was stirred at room temperature for 18 h (TLC monitoring). The crude product was chromatographed on a silica gel column (petroleum ether:ethyl acetate = 4:1) to afford PJH-8 (71% yield) as white needle crystals. $[\alpha]_D^{20} = -9.2^\circ$ (c 0.4, CH$_2$Cl$_2$). $^1$H-NMR (CDCl$_3$, 400 MHz): δ 6.91 (1H, H-4), 6.19 (1H, H-15), 4.97 (1H, H-7), 4.90 (1H, H-8), 4.44 (1H, H-1e), 4.07 (1H, H-18), 2.94 (1H, H-9b), 2.94 (3H, H-17), 2.48 (1H, H-5a), 2.27 (1H, H-5b), 2.22 (1H, H-9a), 2.05 (3H, H-16), 1.96 (3H, H-14a), 1.74 (1H, H-5a), 1.66 (1H, H-9b), 1.10 (3H, H-10), 1.00 (3H, H-11), 0.96 (3H, H-12); $^{13}$C NMR (CDCl$_3$, 100 MHz): δ 169.28 (C-3), 166.75 (C-13), 140.30 (C-15), 135.59 (C-4), 127.17 (C-14), 126.75 (C-3a), 78.15 (C-7), 76.44 (C-8), 66.64 (C-1), 50.45 (C-9b), 48.51 (C-5a), 44.09 (C-9), 39.75 (C-6), 38.60 (C-17), 35.37 (C-9a), 28.09 (C-12), 24.62 (C-5), 20.60 (C-14a), 17.22 (C-11), 15.97 (C-16), 14.30 (C-10).
**Determination of the absolute structure of HAC:** Crystals of HAC were obtained by slow evaporation of the solvent (methanol:water = 2:1) at room temperature (20°C). The X-ray structural data were collected with CAD4 EXPRESS (Enraf-Nonius, 1994) at 293.0 ± 0.1 K using graphite-monochromatized Mo Kα radiation ($\lambda=0.71073$ Å). Cell refinement was performed with CAD4 EXPRESS and data reduction with XCAD4 (Harms & Wocadlo, 1995); the structure was solved with SHELXS-97 (Sheldrick, 1990) and refined with SHELXL-97 (Sheldrick, 1997). Molecular graphics were produced with DIAMOND and MERCURY.
HAC crystallized in the monoclinic space group P2₁ with unit cell parameters $a=6.8640$ (14) Å, $b=25.676$ (5) Å, $c=11.190$ (2) Å, $\beta=106.14$ (3)°, $V=1894.4$ (7) Å$^3$, $Z=2$, $D_x=1.222$ g/cm$^3$, $T=293$ K, $\lambda$ (Mo Kα) = 0.71073 Å; the final $R1=0.1116$, $wR2=0.1735$ ($w=1/\sigma(F^2)$), and $S=1.003$ for observed reflections with $I > 2\sigma(I)$. The supplementary crystallographic data for HAC (deposition number CCDC 906633) can be obtained free of charge via www.ccdc.cam.ac.uk/data_request/cif or from the Cambridge Crystallographic Data Centre, 12 Union Road, Cambridge CB2 1EZ, UK; fax: +44 1223 336033.
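The reported cell volume can be cross-checked against the cell parameters, since for a monoclinic lattice $V = abc\sin\beta$; a minimal sketch:

```python
import math

# Monoclinic cell parameters reported for HAC
a, b, c = 6.8640, 25.676, 11.190   # axis lengths in Å
beta = math.radians(106.14)        # monoclinic angle β, degrees -> radians

V = a * b * c * math.sin(beta)     # V = abc·sin(β) for a monoclinic cell
print(round(V, 1))                 # close to the reported 1894.4 Å³
```

The computed value agrees with the reported $V=1894.4$ Å$^3$ to within the stated uncertainty.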
**Stability of HAC and its derivatives in the culture medium:** Each test compound was exposed to DMEM (final concentration 100 μM; final volume 200 μL) in 96-well microplates for 24 h; 100 μL of supernatant was then withdrawn, centrifuged, and freeze-dried, and the residue was dissolved in 1 mL of methanol for chromatography. As a standard reference, each compound was dissolved in 100 μL DMSO and diluted with 1 mL of methanol. HPLC analyses were performed on an Agilent 1260 HPLC system with an Agilent C$_{18}$ column (4.6 mm × 250 mm, 5 μm). The column temperature was maintained at 30°C and the detection wavelength was set to 222 nm. The mobile phase consisted of MeOH (A) and water (B) at a flow rate of 1.0 mL/min, with a linear gradient from 50% A to 90% A in 25 min. Each sample was analyzed in triplicate.
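The mobile-phase program above is a single linear ramp, so the methanol fraction at any time point follows directly; a small helper (illustrative only, not from the original method) makes this explicit:

```python
def percent_A(t_min, t0=0.0, t1=25.0, a0=50.0, a1=90.0):
    """Linear gradient: a0% MeOH (A) at t0 rising to a1% at t1, held outside that window."""
    if t_min <= t0:
        return a0
    if t_min >= t1:
        return a1
    return a0 + (a1 - a0) * (t_min - t0) / (t1 - t0)

# e.g. halfway through the 25 min ramp the eluent is 70% MeOH
print(percent_A(12.5))
```

Such a helper is convenient when relating retention times (for example, the new peak at t_R = 12.059 min mentioned later) to the eluent composition at elution.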
**Statistical analysis**
All values were obtained from measurements performed in triplicate. For determination of IC$_{50}$ values, log concentrations and linear response data were analyzed by non-linear curve fitting using the Prism software package (GraphPad Software Inc.).
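For readers without Prism, the IC$_{50}$ estimate can be reproduced in outline: model the response against log concentration and read off the half-maximal point. The four-parameter logistic and the interpolation helper below are illustrative assumptions, not the authors' exact fitting procedure:

```python
import math

def logistic(conc, ic50, slope=1.0, top=100.0, bottom=0.0):
    """Four-parameter logistic dose-response (response falls as conc rises)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

def estimate_ic50(concs, responses, half=50.0):
    """Interpolate log10(conc) where the response crosses the half-maximal level."""
    pts = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(pts, pts[1:]):
        if (r1 - half) * (r2 - half) <= 0:   # crossing lies between these two points
            f = (r1 - half) / (r1 - r2)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    return None

concs = [1, 3, 10, 30, 100]                        # μM, matching the assay range
resp = [logistic(c, ic50=17.68) for c in concs]    # synthetic data at the reported IC50
est = estimate_ic50(concs, resp)
print(round(est, 1))                               # recovers ≈ 17.7 μM
```

A full non-linear least-squares fit (as Prism performs) would additionally estimate the slope, top, and bottom parameters with confidence intervals; the interpolation above only recovers the crossing point.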
**Results**
**Cytotoxicity of HAC and its inhibitions on LPS-induced pro-inflammatory mediators**
The potential cytotoxicity of HAC was evaluated by the MTT assay after incubating cells for 24 h in the absence of LPS. Cell viability was not affected by HAC at the indicated concentrations (1–100 μM, Figure 1A); thus, HAC did not display significant cytotoxicity against RAW264.7 cells.
To determine the effects of HAC on pro-inflammatory mediators in RAW264.7 cells, the concentrations of NO and TNF-α in the cell supernatants were examined. HAC suppressed LPS-induced NO production with an IC$_{50}$ of 17.68 ± 2.99 μM, whereas its inhibitory effect on TNF-α release (IC$_{50}$ 98.66 ± 13.55 μM) was weaker. Thus, HAC inhibited the LPS-induced overproduction of NO and TNF-α in a dose-dependent manner compared with the LPS group (Figure 1B–C). In the present paper, IC$_{50}$ values were determined for compounds whose NO or TNF-α inhibition rate exceeded 50%.
**Chemical-structural modification of HAC at C-2.**
To evaluate the importance of the hydroxyl group at C-2 of HAC, eight novel compounds (PJH-1 to PJH-8) were synthesized by modifying this position (Scheme 1).
**Absolute configuration of HAC**
The relative stereostructure of HAC was determined in our previous paper [13]. In the present work, a crystal of HAC was obtained by slow evaporation of the solvent and analyzed by X-ray diffraction (Figure 2).
**Cytotoxicity of HAC derivatives and their inhibitions on LPS-induced pro-inflammatory mediators**
All derivatives (PJH-1–PJH-8, 1–100 μM) were tested for inhibitory activity against NO production in LPS-induced macrophages and for cytotoxicity. The model group served as a negative control and the indomethacin (100 μM) group as a positive control; NO concentration was assessed with the Griess reagent, and TNF-α concentration was measured by ELISA. The data are presented as means ± S.D. of three independent experiments. Compounds PJH-1 and PJH-6 showed the strongest inhibition of NO production, with IC\textsubscript{50} values of 7.31 and 9.28 μM, respectively, while compounds PJH-1 and PJH-7 showed the better inhibition of TNF-α levels. Therefore, the effect of PJH-1, the derivative acetylated at the C-2 hydroxyl group of HAC, on macrophages may arise mainly from suppression of the TNF-α/NO pathways in a dose-dependent manner (Figure 3A–C).
**Influence of HAC and PJH-1 on iNOS protein and MAPKs signaling pathways in LPS-induced RAW264.7 by western blotting assay**
To address whether the inhibition of NO production was associated with decreased levels of iNOS, the effects of HAC and its most active derivative, PJH-1, on LPS-induced iNOS expression were investigated by Western blot analysis. iNOS expression was strongly induced by LPS, and HAC and PJH-1 (1–100 μM) inhibited this induction in a dose-dependent manner (Figure 4). These results are consistent with their inhibitory effects on NO production and indicate that HAC and its derivatives suppress LPS-induced iNOS expression at the transcriptional level.
The MAPK pathways are known to be important for the expression of iNOS and COX-2; MAP kinases therefore act as specific targets for inflammatory responses. To test whether the inhibition of inflammation by HAC is regulated through the MAP kinase pathways, we examined the effect of HAC and PJH-1 on LPS-induced phosphorylation of ERK, p38, and JNK in RAW264.7 cells using Western blot analysis. As shown in Figure 5, HAC and PJH-1 attenuated the LPS-stimulated phosphorylation of ERK, p38, and JNK in a concentration-dependent manner. These results suggest that the MAPK pathways are involved in the suppression of LPS-mediated inflammatory mediator expression by HAC.
**Stability of HAC and derivatives in cell medium by HPLC method**
As shown above, HAC and its derivative PJH-1 inhibited NO and TNF-α production in LPS-activated RAW264.7 macrophages without significant cytotoxicity. Since the stability of a compound in cell medium is important for its in vitro activity evaluation, HAC and PJH-1 were assayed after incubation in cell medium for 24 h. HAC was unstable in the medium and decomposed, giving rise to a new peak (t\textsubscript{R} = 12.059 min) in the HPLC chromatogram, whereas the concentration of the active derivative PJH-1 was unchanged (Figure 6).


Discussion
Sesquiterpenoids are a large group of secondary metabolites of many medicinal plants and exhibit a variety of biological activities. Macrophages play a key role in the specific and non-specific immune responses during inflammation; large amounts of inflammatory mediators such as nitric oxide (NO), prostanoids, and pro-inflammatory cytokines are released in LPS-activated macrophages [23].
Until now, some sesquiterpenoids have been evaluated for anti-inflammatory effects, focusing on the activation of NF-κB or the inhibition of iNOS-dependent NO synthesis [24], but only a few investigations of the structure–activity relationships of drimane-type sesquiterpenoids have been performed. Drimane sesquiterpenes are frequently occurring plant metabolites that exhibit a variety of biological activities [25]. In our previous paper, a drimane sesquiterpene lactone, 2α-hydroxyl-3β-angeloylcinnamolide (HAC), from the Chinese folk medicinal herb *Polygonum jucundum* Lindx, was reported as a new anti-inflammatory remedy in the xylene-induced ear edema and acetic acid-induced vascular permeability mouse inflammation models [13]; HAC can be converted in vivo into another new drimane sesquiterpenoid, 2α,3β-dihydroxylcinnamolide [14].
In the present study, we found that pretreatment with HAC (1–100 μM) significantly inhibited NO production in LPS-induced RAW264.7 cells, with an IC$_{50}$ of 17.68 μM. NO is a free radical produced from L-arginine by nitric oxide synthases (NOS), and high levels of NO can cause inflammatory damage to target tissue during an infection [26,27]; inhibition of NO release may therefore be effective in treating inflammatory disease [28], and regulating NO release via inhibition of iNOS helps alleviate inflammatory destruction. As we presumed, HAC significantly decreased LPS-induced iNOS expression in a dose-dependent manner. TNF-α and IL-1 are regulated by NF-κB but are at the same time potent activators of NF-κB themselves, and a series of structurally different sesquiterpenoids has been evaluated for inhibition of inflammatory cytokine production and correlation with the NF-κB pathway; however, HAC showed only a weak effect on TNF-α production in LPS-induced RAW264.7 cells.
Sesquiterpene lactones with α,β-unsaturated carbonyl moieties react with cysteine thiol groups in Michael-type additions and have been identified as inhibitors of the NF-κB signaling pathway. The α,β-unsaturated carbonyl moiety in HAC partly accounts for its anti-inflammatory activity, but the importance of the hydroxyl group in these effects was unclear. Therefore, a series of closely related compounds, PJH-1 to PJH-8, was obtained by chemical procedures and investigated for inhibition of NO and TNF-α production in LPS-induced RAW264.7 cells. NO production was significantly inhibited in a dose-dependent manner, with an IC$_{50}$ value of 11.8 μM for PJH-1. In the TNF-α assay, compound PJH-1 also significantly inhibited TNF-α production, with an IC$_{50}$ value of 3.38 μM. In addition, compound PJH-6, a sulfated derivative of HAC with the sulfate group attached to the C-2 hydroxyl, also significantly inhibited NO production in a dose-dependent manner (IC$_{50}$ 9.28 μM) but showed no effect on TNF-α production in LPS-induced RAW264.7 cells.
These results suggest that the anti-inflammatory effects of HAC were dramatically improved by acetylation at the C-2 position (compound PJH-1). Expression of iNOS in macrophages is regulated mainly at the level of transcription-factor induction through the mitogen-activated protein kinases (MAPKs). The MAPKs important in macrophages include p38, c-Jun N-terminal kinase (JNK), and extracellular signal-regulated kinase (ERK). This process activates transcription factors such as NF-κB, which in turn up-regulates the production of mediators such as TNF-α and NO. Accordingly, we investigated the effect of HAC and PJH-1 on LPS-induced ERK, p38, and JNK activation: both caused a dose-dependent inhibition of the phosphorylation of ERK, p38, and JNK and, interestingly, significantly suppressed the activation of all three MAPKs. Taken together, our results provide evidence that HAC and PJH-1 suppressed LPS-induced iNOS expression by blocking activation of the MAPK signaling cascade. In addition, there is a high possibility that sesquiterpenoids form adducts in Dulbecco's Modified Eagle's Medium (DMEM) containing 10% FBS before their application to the cells. Hence, pre-incubation of HAC and PJH-1 (200 μM each) in 10% FBS-DMEM was examined for 24 h: a novel peak (tR = 12.059 min) was detected in the HPLC chromatogram of HAC, whereas PJH-1 showed no change in the culture medium.
**Conclusion**
In conclusion, we took HAC as the lead compound and designed eight new compounds, whose structures were confirmed by spectroscopic methods. Meanwhile, the absolute stereostructure of HAC was determined by X-ray crystallographic analysis. The inhibitory effects of these compounds on LPS-induced NO and TNF-α production in RAW264.7 cells were examined; the effects of HAC and PJH-1 may be attributed to down-regulation of the MAPK pathways, as shown by Western blot assay, and HPLC showed that PJH-1 was more stable in cell culture medium than HAC.
These results indicate that purposeful structural modification, together with the stability of the compounds in the medium, should be considered when exploring convincing structure–activity relationships of sesquiterpene lactones. Compound PJH-1 can therefore be selected as a candidate for in vitro pharmacological studies of the anti-inflammatory mechanism of HAC, and HAC derivatives might constitute a novel class of anti-inflammatory agents that warrant further study.
**Acknowledgements**
Yang Hu undertook the synthesis and pharmacology experiments, and Fei Zhang contributed the X-ray structure analysis. Chao-feng Zhang, the corresponding author, undertook the design of this project and the QSAR analysis. We would like to thank the National Natural Science Foundation (30700060) and the National New Drug Innovation Major Project of China (2011ZX09307-002-02) for financial support of this research.
**References**
1. Libby P (2007) Inflammatory mechanisms: the molecular basis of inflammation and disease. Nutr Rev 65: S140-146.
2. Yi PF, Bi WY, Shen HQ, Wei Q, Zhang LY, et al. (2013) Inhibitory effects of sulfated 20(S)-ginsenoside Rb2 on the release of pro-inflammatory mediators in LPS-induced RAW 264.7 cells. Eur J Pharmacol 712: 60-66.
3. Zhang X, Song Y, Xiong H, Ci X, Li H, et al. (2009) Inhibitory effects of ivermectin on nitric oxide and prostaglandin E2 production in LPS-stimulated RAW 264.7 macrophages. Int Immunopharmacol 9: 354-359.
4. Zhong LM, Zong Y, Sun L (2012) Resveratrol inhibits inflammatory responses via the mammalian target of rapamycin signaling pathway in cultured LPS-stimulated microglial cells. Plos One 7: e32195.
5. Goldring MB, Berenbaum F (2004) The regulation of chondrocyte function by proinflammatory mediators: prostaglandins and nitric oxide. Clin Orthop Relat Res : S37-46.
6. Palmer RMJ, Ashton DS, Moncada S (1988) Vascular endothelial cells synthesize nitric oxide from L-arginine. Nature 333: 664-666.
7. De Marino S, Borbone N, Zollo F, Ianaro A, Di Meglio P, et al. (2005) New sesquiterpene lactones from Laurus nobilis leaves as inhibitors of nitric oxide production. Planta Med 71: 706-710.
8. Li W, Huang X, Yang XW (2012) New sesquiterpenoids from the dried flower buds of Tussilago farfara and their inhibition on NO production in LPS-induced RAW264.7 cells. Fitoterapia 83: 318-322.
9. Chung MJ, Cheng SS, Lin CY, Chang ST (2012) Profiling of volatile compounds of Phyllostachys pubescens shoots in Taiwan. Food Chem 134: 1732-1737.
10. Zaugg J, Eickmeier E, Rueda DC, Haring S, Hamburger M (2011) HPLC-based activity profiling of Angelica pubescens roots for new positive GABAA receptor modulators in Xenopus oocytes. Fitoterapia 82: 434-440.
11. Hasan A, Ahmed I, Jay M, Voirin B (1995) Flavonoid glycosides and an araquinoquine from Rumex chalepensis. Phytochemistry 39: 1211-1213.
12. Lin Y, Zhang C, Zhang M (2009) [Chemical constituents in herbs of Polygonum jucundum]. Zhongguo Zhong Yao Za Shi 34: 1690-1691.
13. Zhang CF, Hu Y, Lin Y, Huang F, Zhang M (2012) Anti-inflammatory activities of ethyl acetate extract of Polygonum jucundum and its phytochemical study. Journal of Medicinal Plants Research 6: 1505-1511.
14. Zhang F, Gong XS, Xiao BM, Zhang CF, Wang ZT (2013) Pharmacokinetics and tissue distribution of a bioactive sesquiterpenoid from Polygonum jucundum following oral and intravenous administrations to rats. J Pharmaceut Biomed 83: 135-140.
15. Wong HR, Menezes IY (1999) Sesquiterpene lactones inhibit inducible nitric oxide synthase gene expression in cultured rat aortic smooth muscle cells. Biochem Biophys Res Commun 262: 375-380.
16. Hehner SP, Heinrich M, Bork PM, Vogt M, Ratter F, et al. (1998) Sesquiterpene lactones specifically inhibit activation of NF-kappaB by preventing the degradation of I kappa B-alpha and I kappa B-beta. J Biol Chem 273: 1288-1297.
17. Tamura R, Chen Y, Shinozaki M, Arao K, Wang L, et al. (2012) Eudesmane-type sesquiterpene lactones inhibit multiple steps in the NF-κB signaling pathway induced by inflammatory cytokines. Bioorg Med Chem Lett 22: 207-211.
18. Sultana R, Hossain R, Adhikari A, Ali Z, Yousuf S, et al. (2011) Drimane-type sesquiterpenes from Polygonum hydropiper. Planta Med 77: 1848-1851.
19. Wang D, Tang W, Yang GM (2010) Anti-inflammatory, Antioxidant and Cytotoxic Activities of Flavonoids from Oxytropis falcate Bunge. Chin J Nat Medicines 8: 461-465.
20. Fan H, Qi D, Yang M, Fang H, Liu K, et al. (2013) In vitro and in vivo anti-inflammatory effects of 4-methoxy-5-hydroxycanthin-6-one, a natural alkaloid from Picrasma quassioloides. Phytochemistry 20: 319-323.
21. Tseng CH, Cheng CM, Tzeng CC, Peng SI, Yang CL, et al. (2013) Synthesis and anti-inflammatory evaluations of 1β-lapachone derivatives. Bioorg Med Chem 21: 523-531.
22. Abdelwahab SI, Hassan LE, Sirat HM, Yagi SM, Koko WS, et al. (2011) Anti-inflammatory activities of cucurbitacin E isolated from Citrullus lanatus var. citroides: role of reactive nitrogen species and cyclooxygenase enzyme inhibition. Fitoterapia 82: 1190-1197.
23. Lee JK, Sayers BC, Chun KS, Lao HC, Shipley-Phillips JK, et al. (2012) Multi-walled carbon nanotubes induce COX-2 and iNOS expression via MAP kinase-dependent and -independent mechanisms in mouse RAW264.7 macrophages. Part Fibre Toxicol 9: 14.
24. Koch B, Jensen LE, Nybroe O (2001) A panel of Tn7-based vectors for insertion of the gfp marker gene or for delivery of cloned DNA into Gram-negative bacteria at a neutral chromosomal site. J Microbiol Methods 45: 187-195.
25. Jansen R, Gerstein M (2004) Analyzing protein function on a genomic scale: the importance of gold-standard positives and negatives for network prediction. Curr Opin Microbiol 7: 535-543.
26. Bogdan C (2001) Nitric oxide and the immune response. Nat Immunol 2: 907-916.
27. Miljkovic D, Trajkovic V (2004) Inducible nitric oxide synthase activation by interleukin-17. Cytokine Growth Factor Rev 15: 21-32.
28. Lee HJ, Kim NY, Jang MK, Son HJ, Kim KM, et al. (1999) A sesquiterpene, dehydrocostus lactone, inhibits the expression of inducible nitric oxide synthase and TNF-alpha in LPS-activated macrophages. Planta Med 65: 104-108. |
CHRISTMAS GOSPEL
BY CHRIS LASS
SONGBOOK
GO, TELL IT ON THE MOUNTAIN
Swing Intro
♩ = 140 F C/F Bb/F F Bbm6/F
Refrain
5 F Bb/F C/F F Bb/F Gm/F F Bb/F
Go tell it on the mountain over the hills and everywhere...
9 F Bb/F C/F F Bb/F
Go tell it on the mountain that Jesus Christ is born.
Interlude
12 F C/F Bb/F F Bbm6/F
Refrain 2
16 F A7(b13) Dm7 Gm7 Bb/C F F/A Bb Gm/C
Go tell it on the mountain over the hills and everywhere.
20 F A7(b13) Dm7 Gm7 F/C C7 F C/F Bb/F F
Go tell it on the mountain that Jesus Christ is born.
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
MARY'S BOY CHILD
Interlude
Long time ago in Bethlehem, so the Holy Bible says—
Mary's boy child, Jesus Christ, was born on Christmas Day.
Refrain 1.
Hark now hear the angels sing, a king was born today. And man will live forever more, because of Christmas Day.
Interlude
Refrain 2
Hark now hear the angels sing, a king was born today. And
man will live for ever more, because of Christmas Day.
© 1957 by Bourne Inc. / EMI Music Publishing
**DECK THE HALLS**
Words and melody: traditional, from Wales
*Interlude*
Deck the halls with boughs of holly, Fa la la la la, la la la la.
'Tis the season to be jolly, Fa la la la la, la la la la.
Sing we joyous all together, Fa la la la la, la la la la.
Heedless of the wind and weather, Fa la la la la, la la la la.
Deck the halls with boughs of holly, Fa la la la la, la la la la.
'Tis the season to be jolly, Fa la la la la, la la la la.
Sing we joyous all together, Fa la la la la, la la la la.
Heedless of the wind and weather, Fa la la la la, la la la la.
last time rit.
Order:
Intro
Go Tell It refrain - unison -
Go Tell It refrain 2 - in parts -
Interlude -
Mary's Boy Child - unison -
Refrain - unison -
Verse - unison -
Refrain 2 - in parts -
Interlude -
Deck the Halls - unison -
Deck the Halls 2, modulation - unison -
Gospel workshops and more sheet music at www.GospelCoach.de / www.chrislass.com
ANGELS WE HAVE HEARD ON HIGH
Words and melody: traditional, from France
English words: James Chadwick (1862)
Arrangement: Chris Lass
♩ = 120
Dm7 F(add9)/A Bb C#º7 Dm7 F(add9)/A G7/B C#º7
Verse 1
1. Angels we have heard on high, sweetly singing o'er the plains,
and the mountains in reply, echoing their joyous strains.
Refrain
Glo-ri-a,
in ex-cel-sis de-o. de-o, de-
Ode to Joy
28 Dm7 F(add9)/A Bb C#º7 Dm7 F(add9)/A Bb Bb/C F
O___________ de - o___________
Verse 2 / 3
32 Dm7 F(add9)/A Bb C#º7 Dm7 F(add9)/A G7/B C#º7
2. Shepherds, why this jubilee? Why your joyous strains prolong?
3. Come to Bethlehem and see him whose birth the angels sing;
36 Dm7 F(add9)/A Bb C#º7 Dm7 F(add9)/A Bb Bb/C F
What the gladsome tidings be which inspire your heav'nly song?
Come, adore on bended knee, Christ the Lord, the new-born King.
Refrain 2
40 D7 D/C Gm/Bb C'(sus4)/Bb C/Bb F/A Bb G7/B C'(sus4) C C/Bb
Glo-ri-a,
44 F/A Gm/Bb Bb:maj7/C Bb/D F/C Bb/C C |1. F/C Bb/C C |2. F/C Bb/C F/C
in ex-cel-sis de-o. de-o, deO, deo, deo,
O, deo, deo,
D.S. al Coda
Gloria,
in excelsis deo.
Vamp
F(add9)/A Bbmaj7 Gm7 C F(add9)/A Bbmaj7 Bbmaj7 F/A Gm7 C
O, de-o, de-
De-o, de-o,
De-o, de-o,
De-o, de-o,
F#(add9)/A# Bmaj7 G#m7 C# F#(add9)/A# Bmaj7 Bmaj7 F#/A# G#m7 C#
O, de-o, de-
de-o, de-o,
de-o, de-o,
de-o, de-o,
Order:
Intro
Verse 1 - unison -
Refrain - unison -
Verse 2 - unison -
Refrain 2 - in parts -
Verse 3 - unison -
Refrain 3 - in parts -
Vamp - canon -
Original title: Les anges dans nos campagnes
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
HAVE YOU HEARD
Intro
Verse 1
Christmas time, presents and snow,
children awaiting their miracle.
Holidays with time for friends, the family at the Christmas tree.
PreChorus
Do they know, have they heard what this Christmas brings?
Refrain 1
There’s a
child born for you and me, bringing hope that can set us free. There’s a king born in Bethlehem, have you heard?
2. Refrain 2
There’s a child born for you and me, bringing hope that can set us free. There’s a
king born in Bethlehem,
have you heard? His present is peace, his message is love, his grace flows abundantly.
Order:
Intro
Verse 1 - unison -
PreChorus - unison -
Refrain 1 - unison -
Interlude -
Verse 2 - unison -
PreChorus - unison -
Refrain 2 - in parts -
Bridge - in parts -
Refrain 3 - in parts -
MARY, DID YOU KNOW
Words and melody: Mark Lowry & Buddy Greene
Arrangement: Chris Lass
Intro
Verse 1
1. Mary, did you know that your Baby Boy will one day walk on water? Mary, did you know that your Baby Boy will save our sons and daughters? Did you know that your Baby Boy has come to make you new.
This child that you've delivered will soon deliver you.
Mary, did you know?
Mary, did you know?
Mary, did you know that your Baby Boy will give sight to a blind man?
Mary, did you know that your Baby Boy will calm the storm with His hand?
Did you know that your Baby Boy has walked where angels trod?
And when you kiss your Baby, you've kissed the face of God.
Mary, did you know?
Bridge
The blind will see, the deaf will hear, the dead will live again.
The lame will leap, the dumb will speak the praises of the Lamb.
Verse 3
Mary, did you know that your Baby Boy is Lord of all creation? Mary, did you know that your Baby Boy will one day rule the nations? Did you know that your Baby Boy is heaven's perfect Lamb?
And the sleeping child you're holding is the great I Am.
Order:
Intro
Verse 1 - tenor -
Verse 2 - alto -
Bridge - in parts -
Verse 3 - soprano -
Ending - in parts -
© 1991 Rufus Music
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
© 1991 Word Music
For Germany, Austria, Switzerland: Small Stone Media Germany, Cologne
GOD REST YE MERRY, GENTLEMEN
Words and melody: traditional, from England (18th century)
Arrangement: Chris Lass
Intro
God
Vers 1
Rest Ye Merry gentlemen let nothing you dismay, remember Christ our Saviour was born on Christmas Day to save us all from Satan's pow'r when we were gone astray.
Refrain 1
Tidings of comfort and joy, comfort and joy. O tidings of comfort and joy.
Interlude
God
Rest Ye Merry gentlemen let nothing you dismay, remember Christ our Saviour was born on Christmas Day to save us all from Satan's pow'r when we were gone astray.
O tidings of comfort and joy, comfort and joy. O tidings of comfort and joy.
To the Lord sing praises all, within this place, in this place
With true love and brotherhood each other now embrace.
This holy tide of Christmas all others doth deface:
O tidings of comfort and joy, comfort and joy. O tidings of comfort and joy.
Bethlehem in Israel this blessed Babe was born and
laid within a manger upon this blessed morn. The which His mother Mary did
Refrain 3
nothing take in scorn. O tidings of comfort and joy, comfort and joy. O
Outro
ti-dings of com-fort and joy.
Order:
Intro -
Verse 1 - unison -
Refrain 1 - unison -
Interlude -
Verse 2 - in parts -
Refrain 2 - in parts -
Interlude -
Bridge -
Interlude -
Verse 3 - in parts -
Refrain 3 - in parts -
Outro -
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
AWAY IN A MANGER
Words: traditional
Melody: William James Kirkpatrick (1895)
Arrangement: Chris Lass
♩ = 60 Verse 1 D Am/D G/D
Away in a manger, no crib for a bed, the
Gm/D D E7/D C#º/D
little Lord Jesus lay down His sweet head. The
D(addº)/F# G(addº) D/A G(addº)/B
stars in the bright sky looked down where He lay, the
A7/C# D/F# A7(sus4) D
little Lord Jesus asleep on the hay.
Interlude
D(sus4) D Ab/Bb Eb
22 Db/Eb Eb Db/Eb Verse 2 Cm
Away in a
manger, no crib for a bed,
the little Lord Jesus lay down His sweet head.
The stars in the bright sky looked down where He lay,
the little Lord Jesus asleep on the hay.
Be near me, Lord Jesus, I ask Thee to stay close by me forever and love me, I pray. Bless all the dear children in Thy tender care and fit us for heaven to live with Thee there, to live with Thee there, to live with Thee there.
Order:
Verse 1 - unison -
Verse 2 - in parts -
Verse 3 - unison -
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
THERE'S A STAR
Intro
C C(sus4) C(sus2) C F(add9)
Verse
There's a star calling, calling to say a child is born in Bethlehem, a child is born today.
PreChorus
Who would have thought
Refrain
this is the child of God. There's a star looking down on where you are. There's a
star shining right into your heart.
He is your peace, keeping you close.
I believe He's our everything.
Interlude
Who would have thought
this is the child of God. There's a
star looking down on where you are. There's a star looking down on where you are. He's our star shining right into your heart. star shining right into your heart.
He is your peace, keeping you close. I believe He is our, He's our He is your peace, keeping you close. He is our, He's our He is your peace, keeping you close. He is our, He's our He is your peace, keeping you close. I believe He is our, He's our
2.
Dm7 C(add9) Bb/D C/E F(add9) C/E
He is our everything._________ I believe_
67 Dm7 Am C/E F(add9) C/E
He is our everything._________ I believe_
71 Dm7 C(add9)
He is our everything._________
Order:
Intro
Verse - unison -
PreChorus - unison -
Refrain - unison -
Interlude -
Verse - unison -
PreChorus - in parts -
Refrain - in parts -
Refrain - in parts -
Outro - in parts -
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
O COME, ALL YE FAITHFUL
Words: traditional
English words: John F. Wade (c. 1743) / Frederick Oakeley (1841)
Melody: John F. Wade (c. 1743) / John Reading (before 1692)
Arrangement: Chris Lass
Intro
E E/D# A/C# E E/D# A/C# B
1. O, come, all ye faithful, joyful and triumphant! O,
2. Sing, choirs of angels, sing in exultation.
O come ye, o, come ye to Bethlehem.
Sing, all ye citizens of heaven above!
Come and behold him born the king of angels: O,
Glory to God In the highest: O,
Refrain
come, let us adore him, o, come, let us adore him, o,
come, let us adore him, Christ the Lord.
2. Refrain 2
highest: O, come, let us adore him, o, come, let us adore him,
o, come, let us adore him, Christ the
Interlude
Lord.
3. Yea, Lord, we greet Thee, born this happy morning;
Jesus, to Thee be all glory.
Word of the Father, now in flesh appearing! O,
Refrain 3
come, let us adore him, o, come, let us adore him, o,
come, let us adore him, Christ the
Gospel Chant: Come, let us adore him, o, come, let us adore him, o, come, let us adore him, Christ the
Order:
Solo verse 1 -
Choir unison: "O Come Let" -
Interlude -
Choir unison: verse 2 -
Choir in parts: "O Come Let" -
Interlude -
Choir unison: verse 3 -
Choir in parts -
Outro -
Original title: Adeste fideles
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
CHRISTMAS MIRACLE
Words, melody and arrangement: Chris Lass
Verse
What child is this understand,
from Bethlehem a king on straw his
star is shining for him who'd
grace is overwhelming.
Refrain 1
Counselor
you reign forever.
manuel
your kingdom come.
Son of man, my oh stay with me
Christmas miracle, What
Refrain 2
Counselor, You reign forever.
manuel Your kingdom come.
Son of man, o stay with me my Christmas miracle.
You are son of man, the great I am. You're the baby that brought peace.
key to life, the sacrifice. You're my
christmas miracle.
Order:
Verse 1 - unison -
Refrain 1 - two-part -
Verse 2 - unison -
Refrain 2 - in parts -
Vamp - in parts -
Refrain 2 - in parts -
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
SILENT NIGHT / STILLE NACHT
Words: Joseph Mohr (1816), adapted by Johann Hinrich Wichern (1844)
English words: John F. Young (1863)
Melody: Franz Xaver Gruber (1818), adapted by Johann Hinrich Wichern (1844)
Intro
Verse 1
Soprano & soloist in unison
Silent night, holy night, all is calm,
all is bright, round yon Virgin Mother and Child,
Holy Infant so tender and mild.
Sleep in heavenly peace, sleep in heavenly peace,
peace.
Verse 3
uh... Stille Nacht, heilige Nacht!
peace.
Alles schläft, einsam wacht nur das traute, hochheilige Paar,
holder Knabe im lockigen Haar. Schlaf in himmlischer Ruh,
Order:
Intro -
Verse 1 - unison -
Verse 2 - in parts - modulation -
Verse 3 - in parts - German -
© (Arrangement) 2016 Chris Lass Publishing
For Germany, Austria, Switzerland: SCM Hänssler, 71087 Holzgerlingen
O HOLY NIGHT
Words: Placide Cappeau (1847)
English words: John S. Dwight (1812-1893)
Melody: Adolphe C. Adam (1803-1856)
Arrangement: Chris Lass
rubato
G Cm6 Gm/Bb Ebmaj7
Am7 C/D G G(add9) G G(add9)
G G/F# C/E G/D C C/B Am7
O Holy Night! The stars are brightly shining, it is the night of our dear Savior's birth.
Long lay the world in sin and error pining.
Till He appeared and the spirit felt its worth.
A thrill of hope the weary world rejoices, for yonder breaks a new and glorious morn.
Fall on your knees! O,
hear the angel voices. O uuh, uuh,
night divine, o uuh, uuh,
night when Christ was born. O uuh.
O Holy Night!
O holy night! The stars are brightly shining,
It is the night of our dear Savior's birth.
O holy night! We hear the angels singing,
Glory to the newborn King!
Long lay the world in sin and error pining, till He appeared and the soul felt its worth.
A thrill of hope, the weary world rejoices, for yonder breaks a new and glorious morn.
Fall on your knees! O, hear the angel voices. O night divine, o night when Christ was born.
night o holy night
o night divine.
Truly He taught us to love one another; His law is love and His gospel is peace.
Chains He shall break, for the slave is our brother, and in His name all oppression shall cease.
Sweet hymns of joy in grateful chorus raise we, with
all our hearts we praise His holy name.
Christ is the Lord! Then ever, ever praise we, His power and glory ever more proclaim!
His power and glory ever more proclaim!
Ablauf:
Intro -
Vers 1 - Solo -
Vers 2 - Chor einstimmig, dann mehrstimmig -
Vers 3 - mehrstimmig -
Originaltitel: Cantique de Noel
© (Satz) 2016 Chris Lass Publishing
Für D, A, CH: SCM Hänssler, 71087 Holzgerlingen
FELIZ NAVIDAD
Intro
♩ = 130
Dmaj7  C#m7  Bm7  D/E  G/A  D#7
Dmaj7  C#m7  Bm7  D/E  A  D/A  Bm/A  A
Vers 1
A/C#
Feliz Navidad, Feliz Navidad, Feliz Navidad, próspero año y felicidad. Feliz Navidad,
Refrain 1
G/A A(add9)/C# D(add9) E C#m7
We wanna wish you a merry Christmas, we wanna wish you a merry Christmas,
we wanna wish you a merry Christmas from the bottom of our heart.
We wanna wish you a merry Christmas, we wanna wish you a merry Christmas,
Oh, oh,
we wanna wish you a merry Christmas from the bottom of our heart.
oh, bottom of our heart.
Interlude
Vers 2
Feliz Navidad, Feliz Navidad, Feliz Navidad, próspero año y felicidad.
Feliz Navidad, Feliz Navidad, Feliz Navidad, próspero año y felicidad.
Refrain 2
We wanna wish you a
merry Christmas we wanna wish you a merry Christmas, we wanna wish you a
Oh, oh,
merry Christmas from the bottom of our heart. We wanna wish you a
oh, bottom of our heart.
Refrain 3
We wanna wish you a merry Christmas, we wanna wish you a merry Christmas,
Feliz Navidad, Feliz Navidad,
we wanna wish you a merry Christmas from the bottom of our heart.
Feliz Navidad, próspero año y felicidad.
We wanna wish you a merry Christmas we wanna wish you a merry Christmas,
Feliz Navidad, Feliz Navidad,
Oh, oh,
Feliz Navidad, Feliz Navidad,
we wanna wish you a merry Christmas from the bottom of our heart.
Feliz Navidad, próspero año y felicidad.
oh, bottom of our heart.
Feliz Navidad, próspero año y felicidad.
Refrain 4
F# E/F# E(add9) F# D#m7
Feliz Navidad, Feliz Navidad,
We wanna wish you a merry Christmas we wanna wish you a merry Christmas,
Oh, oh,
G#7(sus4) G#7 C#m7 F#7 Amaj7/B
Feliz Navidad, próspero año y felicidad.
we wanna wish you a merry Christmas from the bottom of our heart.
Ablauf:
Intro
Vers 1 - einstimmig -
Refrain 1 - einstimmig / mehrstimmig -
Interlude
Vers 2 Modulation - einstimmig / mehrstimmig -
Refrain 2 mit O - mehrstimmig -
Refrain 3 Kanon - mehrstimmig -
Refrain 4 Modulation Kanon - mehrstimmig -
© 1970 J & H Publishing Co.
Mit freundlicher Genehmigung von Chrysalis Music Holdings GmbH
The angel said to them, “Do not be afraid; for behold, I bring you good news of great joy which will be for all the people: for today in the city of David there has been born for you a Saviour who is Christ the Lord.”
Chris Lass
Jeder Tag zählt
Wenn Hoffnung mehr als Leben ist
Dieser ermutigende Dokumentarfilm erzählt die Geschichte des Musikers Chris Lass, der bereits als Teenager schwere Tiefschläge verkraften musste, doch durch sein Vertrauen auf Gott gestärkt aus diesen Krisen hervorgehen konnte. Chris erkennt: Jeder Tag zählt, denn jeder Tag ist ein Geschenk.
Spieldauer: 35 min + Bonusmaterial: 20 min
DVD-Nr. 210.260
Chris Lass
Christmas Gospel
Die CD zum Songbook
Es gibt wohl kaum eine Musikstilistik, die besser zu Weihnachten passt als Gospelmusik.
Auf diesem Album sind die Songs aus diesem Liederbuch zu hören. Die mehrstimmigen Vokalsätze werden stimmig musikalisch umrahmt von einer Band und einem Orchester. Der Sound erinnert dabei an die großen Popaufnahmen säkularer Weihnachtssongs und sorgt für echte Weihnachtsstimmung.
CD-Nr. 097.372
| Seite | Titel |
|-------|--------------------------------------------|
| 6 | Angels We Have Heard On High |
| 22 | Away In A Manger |
| 34 | Christmas Miracle |
| 11 | Christmas Time |
| 34 | Counselor, Immanuel |
| 4 | Deck The Halls |
| 54 | Feliz Navidad |
| 2 | Go, Tell It On The Mountain |
| 18 | God Rest Ye Merry, Gentlemen |
| 11 | Have You Heard |
| 3 | Long Time Ago |
| 14 | Mary, Did You Know |
| 3 | Mary’s Boy Child |
| 30 | O Come, All Ye Faithful |
| 42 | O Holy Night |
| 38 | Silent Night |
| 38 | Stille Nacht |
| 11 | There’s A Child |
| 26 | There’s A Star |
| 54 | We Wanna Wish You A Merry Christmas |
| 34 | What Child Is This |
SCM Hänssler
www.scm-haenssler.de
ISBN 978-3-7751-5752-0
Bestell-Nr. 395.752
Covergestaltung und Illustrationen: Ann-Marie Falk
Notengrafik: Manuel Weber
Druck und Bindung: Druckerei Mack, Schönaich
Die in dieser Veröffentlichung enthaltenen Werke sind urheberrechtlich/wettbewerbsrechtlich geschützt. Jede Vervielfältigung muss bei dem im Copyright aufgeführten Rechteinhaber angefragt werden.
© Copyright 2016 SCM Hänssler, D-71087 Holzgerlingen
Alle Rechte vorbehalten / All rights reserved.
1. Auflage (2016)
AMERICAN SOCIETY FOR JEWISH MUSIC
with the
YIVO Institute for Jewish Research
at the Center for Jewish History
Music in Our Time: 2021
A Virtual Concert
Premiered June 24, 2021 at 8 PM
Music by Josh Ehrlich • Stanislav Fridman • Meira Warshauer and Gerald Cohen
• Performances by Julian Müller and Stanislav Fridman • Jerusalem Lyric Trio
• The Cassatt String Quartet • The Choral Torah Collective
Introduction – Music in Our Time: 2021
Michael Leavitt, President
American Society for Jewish Music
– The Program –
Ha’Ola (Parashat Tzav)
by Josh Ehrlich
The Choral Torah Collective
Sopranos: Cantor Mira Davis, Carla Friend, Leilah Rosen
Altos: Cantor Arielle Green, Shirel Richman, Greta Rosenstock
Tenors: Josh Rosenberg, Cantor Jacob Sandler
Basses: Max Silverstone, Josh Ehrlich
Spiral of Souls
by Stanislav Fridman
Julian Müller, cello
Stanislav Fridman, piano
Yishakeyni (Sweeter Than Wine)
by Meira Warshauer
Jerusalem Lyric Trio
Amalia Ishak, soprano
Wendy EislerKashy, flute
Allan Sternfield, piano
Playing for Our Lives
by Gerald Cohen
The Cassatt String Quartet
Muneko Otani, violin
Jennifer Leshnower, violin
Ah Ling Neu, viola
Elizabeth Anderson, cello
Ashan (Parashat Yitro)
and
Miriam HaN’via (Parashat B’shalach)
by Josh Ehrlich
The Choral Torah Collective
Text and Translations
Ha’Ola (Parashat Tzav – Leviticus 6:2) by Josh Ehrlich
Hi ha'olah al mok'dah al hamizbeach kol halailah ad haboker, v'eish hamizbeach tukad bo.
The Rising: the burnt offering itself shall remain where it is burned, upon the altar all night until morning, while the fire on the altar is kept going on it.
Yishakeyni (Sweeter Than Wine) by Meira Warshauer
(Translation by the composer)
Yishakeyni minshikot pihu
O that He would kiss me with His lips!
ki tovim dodecha miyayin
For Your love is sweeter than wine.
al keyn alamot aheyrucha
Therefore do the maidens love You.
mashkeyni acharacha narutz
Take me with You, let's hurry.
heviani hamelech chadarav
The Lover has brought me into His chambers.
nagila v'nism'cha bach
We will be glad and rejoice in You.
nazkira dodecha miyayin
We will find Your love more fragrant than wine.
meysharim aheyrucha.
Rightly do they love You.
Ashan (Parashat Yitro – Exodus 19:18) by Josh Ehrlich
V'Har Sinai ashan kulo mip'nei asher yarad alav Adonai ba'eish; vaya'al ashano k'eshen hakivshan, vayecherad kol hahar m'od.
And Mount Sinai was all in smoke, for Adonai had come down upon it in fire; the smoke rose like the smoke of a furnace, and the whole mountain trembled violently.
Miriam HaN’via (Parashat B’Shalach – Exodus 15:20) by Josh Ehrlich
Vatikach Miryam han'via et hatof b'yadah, vateitzena chol hanashim achareha b'tupim uvimcholot.
And Miriam the prophetess took the drum in her hand, and all the women went out after her with drums and dances.
Program Notes
Ha’Ola (Parashat Tzav); Ashan (Parashat Yitro); and Miriam HaN’via (Parashat B’shalach) by Josh Ehrlich
Between 2018 and 2019, Josh Ehrlich composed The Choral Torah: 5 Books in 4 Parts – a collection of fifty-four eclectic a cappella compositions, one for each week of the Jewish year, which reanimate our most ancient and venerated text through fresh four-part harmony. He has since founded The Choral Torah Collective, an ensemble of gifted Jewish music educators eager to share this music and the lessons in Biblical and musical literacy built into it. These talented singers are heard in this performance.
Yishakeyni (Sweeter Than Wine) by Meira Warshauer sets the first four verses of Song of Songs, the great love song of the Bible. It invites the listener into a realm of human and Divine love which transcends boundaries through intimate merging. In this realm, all is beauty, with longing and ecstasy the poles of expression. The soprano sings the original text in Hebrew, with a modal melody derived from the traditional cantillation for Song of Songs. She also plays with the sounds and sensuality of the language itself, sometimes using vowels alone, as if disrobing the words of their consonants for greater intimacy. The piano and flute provide a gentle harmonic and melodic landscape for the Song. It is the composer’s hope that by entering the world of Shir HaShirim (Song of Songs), a world known to mystics and lovers from all traditions, we will come closer to making its reality our own. Yishakeyni received the 2004 Miriam Gideon Award from the International Alliance for Women in Music. It was commissioned by Columbia College, Columbia, South Carolina, in honor of Lee Baker and in memory of David Baker.
Playing for Our Lives by Gerald Cohen was commissioned by the Cassatt String Quartet, who gave the premiere of the piece in New York City in February 2012. The Cassatts, in planning a program of music by composers who were interned in the Nazi concentration camp Terezin (Theresienstadt), asked Cohen to compose a piece that would be a contemporary memorial and tribute to the musical life of that place. Terezin, near Prague, was in essence a transit camp, where Jews and some other prisoners were kept until transport to death camps such as Auschwitz. The Nazis allowed a certain amount of art and education to take place at Terezin, both as a way of occupying the prisoners and
because it served their purpose of deceiving the world as to the nature of concentration camps in general. There were a great number of excellent artists of all sorts in the camp, among them many fine performers and several notable composers, and so musical life flourished with a passion in these very strange surroundings. In this piece, the composer used several musical essences of life at Terezin. One is the Yiddish folk song "Beryozkele" (Little birch tree), a poignant song that was arranged there by the composer Viktor Ullmann. The second is a lullaby from Hans Krása's opera Brundibár, one of the most important musical experiences of Terezin: an opera performed more than 50 times at the camp, entirely by children as the singers. Finally, there are excerpts from Verdi's Requiem, a piece that was championed at Terezin by the dynamic conductor Rafael Schächter and was also performed many times. These musical elements and emotions are woven together to create a memorial to the musical and emotional life of the camp. "Beryozkele" and its tender lament dominate the early part of the piece; the middle section is a set of variations on the lullaby from Brundibár, as the music attempts to bring the joy of that piece to the fore; and the final section is dominated by elements of the Requiem, with its passion, anger, and quiet mourning. The Cassatt String Quartet has recorded the piece; it will be released in 2022 by Innova Recordings on an album of Cohen's music for string quartet, featuring the Cassatt Quartet.
The video of Playing for Our Lives shown for this concert was originally produced for a concert of the Library of the Jewish Theological Seminary: Christopher Hickey, videographer/editor; Samantha Chapa, videographer; and Craig Slonczewski, audio engineer.
About the Performers
American-German cellist Julian Müller performs as soloist, chamber musician and orchestral player in the United States and Europe. Hailed as "…haunting and mesmerizing…" by USA Today, Julian appeared as soloist with the Louisville Orchestra, giving world premiere performances of the ballet How They Fade, composed by him and the art-pop band YASSOU on a commission from the Louisville Ballet Company. Julian has been presented on NPR Live with Sergei Babayan. Other chamber music collaborations include performances with Simone Dinnerstein, Matt Haimovitz, Peter Salaff, and members of the Cleveland Orchestra. Julian has made festival appearances at the Aspen Music Festival, the Heifetz International Music Institute, and the Caroga Lake Music Festival, among others. He appears frequently with the Orchestra of St. Luke's, performing in many of New York City's venues. Additionally, he performs with the Montclair Orchestra, has served as principal cellist of the Cleveland Institute of Music Orchestra and the Mannes Orchestra, was a member of the New York String Orchestra Seminar, and was a section cellist with the Berkshire Symphony. Julian holds a Bachelor of Music degree from the Cleveland Institute of Music and a Master of Music and a Professional Studies Diploma from the Mannes School of Music. He is currently pursuing a Doctor of Musical Arts degree at Rutgers University, studying with Jonathan Spitz. Other principal teachers include Timothy Eddy, Georg Faust, Ronald Feldman, and Sharon Robinson.
Stanislav Fridman, pianist (see composer biography below)
The Jerusalem Lyric Trio, Amalia Ishak, soprano, Wendy EislerKashy, flute, and Allan Sternfield, piano, highlights the religious and cultural heritage of the Jewish people. Since 1995, they have toured throughout Western and Eastern Europe, the USA and Canada, South America, Russia, and Israel, and represented Israel in prestigious international music festivals. The Trio has performed and recorded many compositions written especially for them, including Meira Warshauer's Yishakeyni. Amalia Ishak debuted with the Israel Philharmonic in the Ponnelle/Mehta production of Carmen. She appeared in numerous productions of the Israeli Opera and performed at the Rome Opera House. Leonard Bernstein selected her to sing the world premiere of his Arias and Barcarolles in Israel and later in the U.K., describing her as "a singer of outstanding quality; her musicality and versatility are extraordinary." Wendy EislerKashy studied with Julius Baker, coached with Marcel Moyse and Sir James Galway, and received her M.M. from the Manhattan School of Music. In 1975, she was invited by the Jerusalem Symphony Orchestra to be its first flutist. She later formed The Jerusalem Duo with pianist Allan Sternfield, representing Israel in music festivals in Hungary, Germany, the Czech Republic, and Morocco. Allan Sternfield studied at the Peabody Conservatory with Walter Hautzig, and later coached with Leon Fleisher and Wilhelm Kempff. He has concertized in the US, Europe, Israel, South America, and the Far East. In 1976, he moved to Israel and joined the faculty of the Jerusalem Academy of Music and Dance. He has appeared as soloist with orchestras, in chamber and solo concerts, and on the radio.
The Cassatt String Quartet, acclaimed as one of America's outstanding ensembles and based in Manhattan, has performed throughout North America, Europe, and the Far East. The Cassatt's numerous awards include grants from the National Endowment for the Arts, USArtists International, Chamber Music America, CMA/ASCAP, the Mary Flagler Cary Charitable Trust, Meet the Composer, and the Amphion, Copland, Fromm and Alice M. Ditson music foundations. Since 1995, the ensemble has been on the performing artist roster of the New York State Council on the Arts.
With a deep commitment to nurturing young musicians, the Cassatt has offered classes for composers and performers at the American Academy in Rome; the Toho School, Tokyo; the Bowdoin International Music Festival; Columbia, Cornell, Princeton, and Syracuse Universities; and the University of Pennsylvania. The quartet is in residence annually at Maine's Seal Bay Festival of American Contemporary Chamber Music and at Cassatt in the Basin! in Texas. Named for the celebrated impressionist painter Mary Cassatt, the quartet consists of Muneko Otani, violin; Jennifer Leshnower, violin; Ah Ling Neu, viola; and Elizabeth Anderson, cello.
About the Composers
Josh Ehrlich, hailed by Deke Sharon as "dynamic, bold and audacious," is a composer, lyricist, arranger, accompanist, music director and music educator in New York City. With a BA in linguistics from Yale (where he music-directed the *Society of Orpheus and Bacchus*) and an MA in composition from Rutgers, Josh now music-directs and orchestrates for music theater productions, bands, and choirs at Camp Ramah in the Berkshires and The Leffell High School. He also music-directs *Hallelu*, assistant-directs *Kol Ram*, sings regularly with *Pella, Kol Zimra, Simcha Singers* and *Shalom Singers*, accompanies services for *Sh'ar Communities*, and plays keyboards in the '90s rock band *Uncle Jesse*. Josh has written several musical theater orchestrations and a cappella arrangements for ensembles all over the New York area (including *Voices of Gotham* and *Rum and Pirates*) and is thrilled to have made his Off-Broadway composing debut with *The Imbible: Day Drinking*. Between 2018 and 2019 he composed *The Choral Torah: 5 Books in 4 Parts*. Since September 2020, Josh has been studying to become a cantor at the Jewish Theological Seminary of America.
Stanislav Fridman
Ukrainian-Israeli pianist and composer Stanislav Fridman has appeared as a soloist, chamber musician and with orchestra on some of the most prestigious stages, including Carnegie Hall, Alice Tully Hall, New York City Center, Skirball Center, and Spectrum NYC, among many others. He has collaborated with leading music organizations such as the Martha Graham Dance Company, New York Choral Society, International Contemporary Ensemble (ICE), American Society for Jewish Music, and the Mannes Orchestra. Mr. Fridman's works have been performed in prestigious venues across Germany, England, Switzerland, Israel, and the United States. In New York, where he is based, his music has been performed at Opera America, National Sawdust, The Center for Jewish History, The DiMenna Center, Spectrum NYC, SoapBox Gallery, Triskelion Arts, the Salvatore Capezio Theater, All Souls Church, Industria Studios, Arts on Site, The 92nd Street Y, and The New School. Mr. Fridman holds a BM in Piano Performance and Music Composition as well as an MM in Music Composition from Mannes College, The New School for Music.
Meira Warshauer
With a musical palette ranging from traditional Jewish prayer modes to minimalist textures with rich melodic contours, and from jazz-influenced rhythms to imaginative orchestrations of the natural world, composer Meira Warshauer's music has been performed to critical acclaim and heard on radio worldwide. At its core, Meira's music expresses her personal spiritual journey and her love for the earth, and much of her creative output draws on Jewish themes and their universal message. Performers and commissioners of her music include orchestras, choruses, chamber ensembles and soloists from the U.S., Europe, the Middle East and Asia. Her music has been recorded for Navona, Ansonica, Albany, MMC and Kol Meira; see the discography page on her website. A graduate of Harvard, the New England Conservatory, and the University of South Carolina, Dr. Warshauer's music is published by Lauren Keiser Music Publishing, Hildegard Publishing Company, World Music Press/Plank Road Publishing, and Kol Meira Publications. For more information, visit [http://meirawarshawer.com](http://meirawarshawer.com).
Composer Gerald Cohen has been praised by *Gramophone Magazine* for his "linguistic fluidity and melodic gift," creating music that "reveals a very personal modernism that...offers great emotional rewards." His opera *Steal a Pencil for Me*, based on a true concentration camp love story, had its world premiere production by Opera Colorado in January 2018. Recent instrumental compositions include *Voyagers*, a celebration of the 40th anniversary of the launch of the Voyager spacecraft, which had its premiere at New York's Hayden Planetarium; and *Playing for Our Lives*, a tribute to the music and musicians of the WWII Terezin concentration camp near Prague.
Recognition of Cohen's body of work includes the Copland House Borromeo String Quartet Award and Hoff-Barthelson/Copland House commission, the Westchester Prize for New Work, an American Composers Forum Faith Partners residency, the Zamir Choral Foundation's Hallel V'Zimrah award, and the Cantors Assembly's Max Wohlberg Award for distinguished achievement in the field of Jewish composition. He is cantor at Shaarei Tikvah, Scarsdale, NY, and is on the faculties of the H. L. Miller Cantorial School of The Jewish Theological Seminary and of Hebrew Union College. For Cohen's compositions, visit [www.geraldcohenmusic.com](http://www.geraldcohenmusic.com)
The American Society for Jewish Music traces its roots back to the Society for New Jewish Music of St. Petersburg, Russia, founded in 1908. After the Bolshevik Revolution, members of the group published their compositions under the imprint of JUWAL, Publication Society for Jewish Music. Among these members were three composer-musicologists, Joseph Achron, Solomon Rosowsky and Lazar Saminsky, who emigrated to the United States, where, along with Abraham W. Binder and others, they founded Mailamm (Makhon Eretz Yisraeli L'Mada'ey haMusika, 1932-39). From 1939 to 1962, this body was refashioned by A. W. Binder as the Jewish Music Forum, which in turn became the Jewish Liturgical Society of America (1963-74). In 1974, the latter group was reorganized as the ASJM under the direction of Albert Weisser.
The ASJM serves Jewish music professionals and interested lay people by publishing a scholarly journal, *Musica Judaica*, producing concerts, hosting lectures by experts in their fields through its academic arm, The Jewish Music Forum, sponsoring the Cantor Aaron J. Kaplan Composers Competition, and establishing links with Jewish communities, universities and seminaries throughout the world. In addition to the programs presented by the Society, to which the general public is invited, the ASJM encourages seminars, workshops and master classes at which students may benefit from the musical expertise of the Society’s members. For more information see [www.jewishmusic-asjm.org](http://www.jewishmusic-asjm.org) |
We’re excited to bring you an improved service, with stable pricing and the best environmental outcomes.
The new waste contract starts on 1 July. For the vast majority of residents, there will be no change to your collection schedule.
But you will see an improvement in service. Some of the key enhancements we’re introducing are:
- The latest technology to track collections in real-time.
- One point of contact (Council) for all your calls and queries, making the process more efficient.
- Improved performance and fewer missed collections over time, thanks to the real-time tracking data from the new technology installed in every truck.
- A new varied fleet of trucks to accommodate our narrow and tight streets.
- Different options available to suit your lifestyle – size, frequency, payment options.
- Continued access to 4 free bulky waste collections.
- Continued extra collections over Christmas and specialised collection events throughout the year.
And the best news of all? Prices will remain stable for the next 10 years, without compromising the quality services we provide.
Along with these service improvements, our award-winning 3-bin system is being expanded to all stand-alone houses, including rural properties.
Expanding the 3-bin service means rural properties can now access the same service as everyone else, and Council can continue to keep costs down for everyone.
Penrith is a recognised leader in sustainable waste management and Council is committed to actively supporting our community to reduce waste sent to landfill.
In Penrith, 65% of waste is recovered via our green lid bin and yellow lid bin thanks to the home sorting practices of our residents, working together with Council for a more sustainable future for everyone.
Mayor’s Message
Earlier this year we asked our community about their experience as customers of Council and how we can improve. The feedback we received has helped us bring our new Customer Promise to life.
This Promise starts a new journey for us, one where we will continuously improve our relationships with our customers. It will help us remove barriers so that when you contact us you will have a more positive experience. While we know we are not there yet, this is the starting point to making every interaction you have with us so much better and we are committed to achieving this.
Our promise to you is that we will be proactive, keep it simple, build respectful relationships and listen and respond when serving each other and the public.
We always welcome feedback as we look to continually improve our service to the community.
With the new financial year almost upon us, Council is about to commence an extensive program of works for the next 12 months that will see $264.6 million spent on a wide and diverse range of services and programs.
I would like to take this opportunity to thank all those community groups and individuals who contributed to the development of the 2019-20 Operational Plan through the consultation process. Your feedback helps ensure that Council’s priorities reflect your aspirations.
Finally, as I write this message, the winter chill has hit us here in Penrith after a long hot summer and autumn. Take a moment to look at our tips to keep warm while at the same time saving energy this winter season in this issue of our community newsletter.
Cr Ross Fowler OAM
Penrith City Mayor
Family History Month
In celebration of the 200th anniversary of the Emu Plains Convict Farm's establishment and Penrith's early colonial history, a number of events are being held during Family History Month.
History Conference
Where: Penrith City Library Peter Goodfellow Theatre
Cost: $25, bookings via penrith.city/library
Speakers will explore this early colonial period and give insights into researching your family and local history.
Conference speakers include:
Professor Grace Karskens - historian and archaeologist
Lorraine Stacker - Emu Plains Convict Farm historian
Steve Ford - historical land researcher
Family History Fair
Where: Library lower lounge
Cost: Free entry, bookings not required
Meet historical groups from outer Western Sydney and beyond and gain tips and guidance for conducting your own research.
Big Improvements at Local Sporting Venues
The future of the City's strong and proud sporting tradition has been further assured with a number of significant upgrades to sporting venues across the Penrith local government area.
Cook Park Soccer Fields, St Marys: Fields 2 & 3 saw the installation of a new automatic watering system on both fields and reconstruction of the playing surface, with 300 tonnes of recycled organic material incorporated into the existing soil profile, 16,000 square metres of kikuyu turf laid, and 200 tonnes of top dressing applied.
Chapman Gardens, Kingswood: Baseball Field No. 1 received an outfield surface upgrade, an extension of automatic irrigation to the entire outfield, renovation of the existing outfield, and 60 tonnes of top dressing.
Jamison Park, South Penrith: Field No. 5 saw the installation of a new automatic watering system, renovation of the existing outfield, and 80 tonnes of top dressing.
NEW BINS FOR ALL RESIDENTS
On 1 July, the new waste contract will begin. As a part of that new contract, we are progressively replacing the bins for every household in Penrith.
That’s 210,000 bins, which is a huge undertaking and will take time. We’ll be sending each household a letter explaining the process, and will keep you updated as the rollout progresses.
We understand that not everyone's bins are damaged or in immediate need of replacement. However, the average life of a garbage bin is 10 years, and waste contracts also last 10 years. As we're just starting a new contract, now is the perfect opportunity to replace the bins for everyone. We've also changed the bin supplier, so the new bins are of higher quality; they should last longer and have fewer breakages.
Continuing ad-hoc replacement and repair is more expensive than changing all bins at the start of the contract, so replacing all the bins now will save ratepayers money in the long run.
You will get the same bins you have now – new for old. If you get the wrong bins, please contact Council’s Waste Services team on 4732 7615 so we can fix that for you as soon as possible. And if you need more or less capacity, please call us to discuss the options available to help you manage your waste responsibly.
We will collect the old bins as part of the replacement process. Your new bins will be delivered the day before your normal collection day and the old bins will be removed a day or two later. While there may be some change-over issues, we expect service levels to remain consistent with no changes to the current service being provided.
All the old bins will be recycled and turned into things like park bins, seating, pickets, vegetable stakes, fencing etc.
You will receive a letter one or two weeks before your bins are due to be replaced. Replacing 210,000 bins takes a long time and it may be weeks or months before you get your letter.
If your bin is damaged but you can still use it, please hold off until you get your new bins. However, if your bin is unusable, please contact Council’s Waste Services team on 4732 7615 to organise a replacement.
The new bin won't cost you anything. The domestic waste charge (the fees you pay) for 2019-20 is not final yet; it will be adopted by Council in June 2019 as part of the 2019-20 budget process.
Find out more at: penrith.city/NewBins
1 & 2 NOVEMBER 2019
SAVE THE DATE
Real Festival
Pop Up Bar | Art & Light Installations
Markets & Food Vendors | Artists & Entertainers | River Activities | Kids Shows
Find out more REALFESTIVAL.COM.AU
PENRITH CITY COUNCIL #RealFestival
Our last summer here in Penrith was particularly long and hot. And while the winter weather is here and things are much chillier, we haven’t forgotten that extreme heat and we’re still working towards a cooler city for summer. Council is currently working on a number of tree planting projects, with winter being the best time to plant trees and get them established.
Our current projects include:
- Almost 400 trees are being planted on nature strips in the southern section of St Marys, as part of our Living Places St Marys project. This will create more vibrant and nicer streets, with species chosen to create shade coverage as well as colour to the streetscapes.
- Around 330 trees will be planted alongside our sporting fields across the region, creating much needed shade for spectators.
We’re also looking at the best opportunities for future tree planting projects. Residents can help by planting trees on their own properties as well. A deciduous tree on the north or western side of your home can block that harsh summer sun and still give you much needed warmth in winter. Or maybe you’d prefer a native species that provides habitat for local animals. With any tree planting project, we recommend contacting Dial Before You Dig to check for any underground services, and consulting with nursery or tree professionals to select an appropriate species for your yard.
Penrith now has 243 new car parking spaces close to the City Centre with the opening of North Street car park.
Penrith Mayor Ross Fowler OAM, officially opened the car park on 6 June, 2019.
The car park, which provides nine-hour parking, is close to popular businesses at the top end of High Street and is just a short walk to Penrith Local Court, TAFE and Service NSW, as well as Westfield Penrith Plaza and Penrith Station.
A pedestrian ramp links the car park with Lemongrove Bridge and there is a new roundabout at Henry and Doonmore Streets.
The car park has been extensively landscaped to shade parked cars and to help green and cool the City Centre.
Plans for a new multi-deck car park at Soper Place are also progressing, with construction works expected to begin soon. These two new car parks will increase the number of car parking spaces by more than 800 spots.
To find out more about parking in Penrith City Centre and to access an easy-to-use interactive parking map, visit: penrithcity.nsw.gov.au/parking.
Have you heard about the annual Community Assistance Program (CAP) grant? Non-profit organisations and community groups are invited to apply for small grants of up to $1,200 to help kickstart project ideas that will benefit the community.
In its 25th year, CAP grants have a proud history of assisting non-profit organisations and community groups to start successful projects, ranging from purchasing equipment needed for activities to running events that benefit the wider community.
Council understands how challenging it can be for local volunteering and community groups with limited resources, and that a little funding goes a long way towards getting worthwhile ideas off the ground.
Last year, Council contributed $30,000 to the community through CAP, which funded 37 separate projects benefitting children, young people, seniors, people with disability and residents from culturally and linguistically diverse backgrounds.
Do you have a great idea for a project for the community? Apply for CAP today. For more information and to apply, visit: https://www.penrithcity.nsw.gov.au/grants
Applications close 3pm Monday 8 July 2019.
Our promise to you...
We put customers at the heart of everything we do. When we work with you and each other we will...
**BE PROACTIVE**
We will be friendly, professional and show initiative.
**KEEP IT SIMPLE**
We will offer clear, consistent and accurate information and services, which are easy for everyone to access.
**BUILD RESPECTFUL RELATIONSHIPS**
We value relationships and diversity. We will respect your individual situation.
**LISTEN AND RESPOND**
We will listen to you and seek to understand your needs. We will be honest, accountable and follow through, so you know what to expect and when.
PENRITH CITY COUNCIL
penrith.city/OurPromise
Penrith Library has recently undergone a major refurbishment.
Some of the improvements include a new system to make borrowing and returning books quicker and easier than ever before; additional study desks, expanded quiet zones and a new and improved local history research room.
There are also new training spaces for the Library’s wide range of community workshops and classes. These rooms offer greater functionality and are more conducive to clear communication and productive learning and teaching.
Penrith Library is already widely recognised as one of the best in NSW, if not Australia, and our efforts to update and enhance this facility will ensure it continues to meet the diverse needs of our local and growing communities.
If you’re not already a member of the Library, now is the perfect time to join. Membership to the library is free to all residents and gives you access to a wide range of digital services including e-Magazines, ebooks and audiobooks.
Council has 26 childcare facilities catering for 4000 children each year, 18 of which specialise in long day care.
We offer an all-inclusive, competitive daily rate, as well as shorter 6 hour and 9 hour options.
Most importantly, our centres provide a safe, secure and inclusive environment for all children, including children with additional needs. Our staff are highly qualified, and all our centres meet or exceed the standards set by the National Quality Framework.
With facilities from Emu Heights to Oxley Park, there is one close to home or work where you can be sure that your child will receive the best care and nurturing in the Penrith area.
For more information visit penrith.city/daycare or call the Children’s Services Hotline on 4732 7844.
The colder weather is here, and for many residents that means the challenge of staying warm and keeping energy bills under control. Here are some easy hints and tips to help you this winter:
- **Close your curtains** More than a third of the heat in a room can leak out the windows. Blinds and curtains work like insulation and help to keep that heat in, so keep them closed when you can.
- **Only heat the rooms you need** Close the doors to any rooms you’re not spending much time in, like that empty spare bedroom.
- **Stop drafts** Often our front and back doors don’t seal very well and let the warm air out. You can use a door snake, or install draft seals around the door jamb.
- **Have shorter showers** Your hot water system is often one of your biggest energy users in your home. By having shorter showers, you’ll save on your power bill.
- **Change the temperature** For every degree you lower your air conditioner, you can save up to 10% on its energy use. A temperature of 18-20 degrees is recommended for winter.
Enjoy a lazy Sunday afternoon at...
**MUSIC BY THE RIVER**
**FREE EVENT**
SUNDAY 22 SEPTEMBER | 11AM-4PM
Tench Reserve, Tench Avenue, Jamisontown
PENRITH CITY COUNCIL
1300 736 836
penrith.city/events
THE UGLIEST DUCKLING
A NEW LIVE STAGE PRODUCTION BY Q THEATRE
Tchick, tchick! One by one the eggs break open. Except for one. This one is the biggest egg of all.
Whether it be a history lesson or a life lesson, your child is bound to learn something from seeing live theatre.
Exploring resilience, transformation and joy, Q Theatre’s adaptation of Hans Christian Andersen’s The Ugly Duckling, showing at The Joan in July school holidays, is a celebration of difference and perfect for the whole family.
Combining circus, dance, music and physical theatre; there are many things parents and children can expect from The Ugliest Duckling. Finding their way through the world, starting from Spring and ending in Winter, three little ducklings will learn how to swim, fly and make friends, taking inspiration from common milestones that young people experience and drawing on those important moments in a young person’s life.
The Ugliest Duckling is a story that resonates strongly with humankind, embracing resilience, empathy, joy and equality – values that cross the boundaries of age. Full of small tender moments and big pictures, it’s a special little world this show is creating, and we’d love you to come along.
WHY NOT VOLUNTEER?
Quite simply, our communities couldn’t function without volunteers. They make a real difference to their local communities and the people of Penrith City are well known for their community spirit. Yet volunteers often don’t get the recognition they deserve for the time, effort, skills and experience they give to help others.
In recognition of their amazing efforts Council will be holding a free Volunteer Expo on Wednesday 25 September where volunteer organisations will be showcasing their work at the Mondo, located outside of the Penrith Civic Centre. Everyone is invited to come along and learn about volunteering and how to get involved. So if you are interested in volunteering, then save the date for our Volunteer Expo!
For more information about the Expo, please contact Council’s Disability Inclusion Officer on 4732 8081. Make sure you save the date and keep an eye out for more details about this exciting event.
DOWN YOUR WAY
NEW TRAFFIC CALMING DEVICE
A new speed hump has been constructed in York Rd, Penrith to slow down the traffic and to improve safety.
NEW FOOTPATHS
We recently constructed a total of 930m length of shared path in Hickeys Lane, Penrith and Smith St, South Penrith.
RECONSTRUCTED ROADS
We recently reconstructed a total length of 2.0km of road in Lansdowne and Calverts Road, Orchard Hills and Borrowdale Way, Cranebrook as part of the Roads to Recovery and Road Reconstruction Program.
DRAINAGE WORK
We recently installed 130m of kerb and gutter, including a drainage system, to improve drainage in Muscharry Rd, Londonderry and Caddens Rd, Claremont Meadows as part of the annual Kerb and Drainage Construction Program.
NEW BUS SHELTERS
We recently installed four new bus shelters in Second Avenue, Kingswood; Andromeda Dr, Cranebrook; Solander Dr, St Clair and Oxford St, Cambridge Park.
ANNOUNCEMENTS
SECOND SUNDAY OF GREAT LENT
March 8, 2015 - Saint Gregory Palamas
• TROPARION OF ST. GREGORY PALAMAS, Tone Eight:
O light of Orthodoxy, pillar and teacher of the Church, glory of monks and invincible protection of theologians, O Gregory, thou wonderworker, boast of Thessalonika, and preacher of grace, ever pray that our souls be saved.
• KONTAKION OF ST GREGORY, Tone Eight, to the melody To thee the Champion:
With one accord, we praise thee as the sacred and divine vessel of wisdom and clear trumpet of theology, O our righteous Father Gregory of divine speech. As a mind that standeth now before the Original Mind, do thou ever guide aright and lead our mind to Him, that we all may cry: Rejoice, O herald of grace divine.
ACTIVITIES & EVENTS THIS WEEK
• Saturday, March 7: 3:30 PM, Catechism, On Holy Baptism
5 PM, VIGIL
• Sunday, March 8: 9–10 AM, Confession
10 AM, Divine Liturgy
11:45 AM—Church School; Noon—Agape luncheon in Hall
5 PM, Concert by the Yale Russian Chorus
• Tues., March 9: 8 AM, Lenten Matins
• Wed., March 10: 8 AM, Lenten Matins
11 AM, Catechism Revisited
5 PM, Redwood Empire Food Bank Distribution
5 PM, Confessions
6:15 PM, Liturgy of the Presanctified Gifts, Meal and Spiritual Reading
• Thurs., March 11: 8 AM, Lenten Matins
• Friday, March 12: 10:30 AM, Liturgy of the Presanctified Gifts
6 PM, Akathist to the Theotokos (Protection Church)
NEWS AND THANKS:
We congratulate Anastasia and Aaron Brodeur on the birth of their daughter, Thursday, March 5. Many Years. Thom Stewart is back home recuperating from his heart surgery. He welcomes phone calls and visits.
Many, many thanks to the men and women of the Parish Sisterhood who worked so hard hosting the reception which followed the Sunday of Orthodoxy Pan-Orthodox Vespers. Many of our guests commented on the outstanding hospitality shown to them.
• **UPCOMING AND IMPORTANT:**
**Sunday, March 21:** Concert by Nicolas Custer’s Renaissance choral ensemble *Carmina Chromatica*, singing motets and lamentations from the Lenten and Paschal seasons. No charge.
**Tuesday, March 24:** Eve of the Great Feast of the Annunciation. Vigil for the Feast at 6:15 PM. The beautiful hymns of Matins are a joyful reminder, during this season of repentance, of the incarnation, the holiness of the Virgin Mary, and the goal of our life.
**Wednesday, March 25,** Annunciation Vesperal Divine Liturgy at 5 PM. We fast starting at Noon (if not before) in preparation for Holy Communion. After Liturgy we may have fish, wine and oil.
**Tuesday, March 31,** Feast of St. Innocent of Alaska, 6:15 PM Presanctified Liturgy, presided over by Archbishop Benjamin. Please note: this Service is an addition to the monthly calendar.
• **SAINT GREGORY PALAMAS:**
On this Sunday, the second Sunday of Great Lent, our Orthodox Church celebrates the memory of St. Gregory Palamas, the Archbishop of Thessalonika, that great pillar of the Church. St. Gregory Palamas has such an exalted reputation in the Orthodox Tradition that he seems to inhabit another order of reality than your average saint—except for the fact that there are no “average” saints; holiness or godliness is a state of being that entirely transcends measurement or averaging, for “God giveth not the Spirit by measure unto him” (John 3:34). These words of St. John the Baptist recounted in the Gospel of John refer to the Messiah himself, but as our Lord Jesus Christ is “wondrous in his saints,” they speak also of the immeasurable grace revealed in the theological achievement of St Gregory Palamas. In this spirit the liturgical tradition of the Church celebrates the memory of St. Gregory with hymns of praise that soar with exultant abandon to almost hyperbolic heights. For example, at Vespers on Saturday evening we hear: “What hymns of praise shall we sing in honor of the holy bishop? He is the trumpet of theology, the herald of the fire of grace, the honored vessel of the Spirit, the unshaken pillar of the Church, the great joy of the inhabited earth, the river of wisdom, the candlestick of the light, the shining star that makes glorious the whole creation.”
And again: “What words of song shall we weave as a garland, to crown the holy bishop? He is the champion of true devotion and the adversary of ungodliness, the fervent protector of the Faith, the great guide and teacher, the well-tuned harp of the Spirit, the golden tongue, the fountain that flows with waters of healing for the faithful, Gregory the great and marvelous.”
Yet again: “With what words shall we who dwell on earth praise the holy bishop? He is the teacher of the Church, the herald of the light of God, the initiate of the heavenly mysteries of the Trinity, the chief adornment of the monastic life, renowned alike in action and contemplation, the glory of Thessalonika…”
As is very well known, St. Gregory Palamas was a monk on the Holy Mountain of Athos, a direct heir of outstanding hesychastic fathers, who was prepared by his ascetic life of prayer, worship, the cultivation of stillness (hesychia) and by the unsearchable Providence of God to respond to the call to defend the traditional hesychastic practices of Orthodox monasticism leading to the knowledge and experience of God against certain philosophically “cultured despisers” of these practices, such as Barlaam the Calabrian, Gregory Akindynos and Nikiphoras Gregoras. Today we might speak of this unholy trio as “academic” philosophers or theologians. Historically, the controversy that resulted in the Councils of 1341, 1347 and 1351 that were called in Constantinople to sort the matter out is known as The Hesychast Controversy.
In singing the praises of the great Gregory, the liturgical tradition does not exactly give us a history lesson, although it acknowledges the historical basis of St. Gregory’s achievement. There are books aplenty for those who want history. For it is not a mere historical event that the Church wishes to celebrate in liturgically offering St. Gregory for our contemplation on this Second Sunday of Lent. It is rather a profound spiritual possibility, a mystical horizon, a Taboric vision quest, a beckoning path of prayer, an illumination of uncreated light and the transfiguring effect of a Divine encounter to which the Holy Church points us in honoring the achievements of the holy archbishop of Thessalonika.
By assigning the celebration of St. Gregory Palamas to the following Sunday after the celebration of the Sunday of the Triumph of Orthodoxy, the Church intends us to understand that the “faith that sustains the
universe,” the reality of the Incarnation of the God-man, Christ Jesus, the Icon of the Father, is the unshakeable foundation of transfiguring and deifying Light—the fullness of the knowledge of God possible for created human beings.
The writings of St. Gregory Palamas are numerous, and only a small portion of them are available in English, so it may seem difficult if not impossible to grasp in a practical way the full import of his celebrated teaching. But in the fourth volume of the English translation of the *Philokalia*, there is a brief work entitled “The Declaration of the Holy Mountain,” which is also called “The Hagiographic Tome.” It was written by St. Gregory Palamas himself and signed by the leading spiritual authorities on the Holy Mountain at the time. It is essentially a summary of the main themes of St. Gregory’s theological teaching, which the Church celebrates today as her own. There are seven main themes:
1. Uncreated deifying grace is the true teaching of the Church.
2. Deification—union with God—is possible in this life.
3. Nous and heart—the essence of prayer is praying with the mind in the heart.
4. Uncreated light: God’s glory is experienced as uncreated Divine light.
5. Essence and energy: the Divine essence is unknowable; we can know and experience God through his uncreated and equally Divine energies.
6. Deification of the body: the body will share in *theosis* by being transfigured.
7. Experience of the saints: holy men and women of the past and people today have known deification, transfiguration and the uncreated Light.
Each of these themes individually and all of them together add up to a single, stupendous, earth-shattering, soul-shaking and life-transforming meaning: God has become man so that man might become God. Created in His image and after His likeness according to the theanthropic principle (the Logos), every human being is called, invited, nay, invoked by name from before the foundation of the world to know God by experiencing Him, both in this life and the life to come, in spirit, soul and body, as Light, as Grace, as one’s very center of conscious being. This Sunday, therefore, places before us Orthodox the ultimate goal of our faith, not only of Great Lent but the very essence of our life: to live true theology, which is not to know about God, but to know God by experience, to become one with the Holy Trinity, to participate, as St. Peter’s 2nd epistle affirms, in the Divine nature. Our true destiny is so to be crucified with Christ that He may live in us completely, so that knowing and experiencing, through Christ, He Who Is, we may, in Christ, be one who knows even as he is known (1 Cor. 13:12). —*Vincent Rossi*
• **LAST SUNDAY EVENING:**
On the Sunday of Orthodoxy, March 1, 2015, St. Seraphim’s Orthodox Church in Santa Rosa hosted a pan-Orthodox Vespers in honor of the celebration of the Triumph of Orthodoxy. In attendance were 5 bishops, including Metropolitan Gerasimos and Bishop Apostolos of the Greek Archdiocese, Bishop Maxim of the Serbian Archdiocese, our own Archbishop Benjamin of San Francisco and the West, and Bishop Daniel of Santa Rosa. Many clergy and parishioners from other parishes throughout the Bay Area also came to celebrate with us.
The celebration of the Triumph of Orthodoxy has been observed for over a thousand years. Ever since 843 AD when iconoclasm was defeated in the reign of Empress Theodora and Patriarch Methodios, the Orthodox have celebrated the restoration of icon veneration as a definitive victory, not just of icon-veneration, but of the Orthodox faith as a whole, and have commemorated it with a festive procession with icons. In the West, and particularly in America, where various jurisdictions of the Orthodox Church reside in the same geographical area, the tradition has arisen of a pan-Orthodox celebration of Vespers on the evening of the Sunday of Orthodoxy.
The day began auspiciously in bright sunshine with the arrival and greeting of Archbishop Benjamin and Bishop Daniel followed by a splendid hierarchical Divine Liturgy. Then in the evening, hierarchs, clergy and faithful gathered in the nave of St. Seraphim church at 5 PM to listen to a presentation by Fr. Patrick Doolan, our
renowned chief iconographer, on the meaning of icons, on the on-going process of the iconographic decoration of the church building, and on the art and science of fresco painting. Fr. Patrick is an internationally recognized iconographer of the first rank, and after listening to his talk, one could justifiably conclude that he is also a peerless lecturer on all things iconographic. Vladyka Benjamin called it a “flawless presentation.”
Vespers commenced immediately after the talk, and as the service unfolded, one could sense that something extraordinary was taking place. It was not just that the church was full of people and clergy (38 priests), or that five bishops were present, or that choir director Nicolas Custer was doing his usual masterful job of eliciting beautiful sounds from the choir. Everything about the Vespers service was suffused with deep meaning. The words of Psalm 103, so familiar to everyone, resonated with the spirit of truth and the measureless cosmic significance of the Divine Presence in all things; the rhythm of the service itself moved with unusual solemnity and portent, the Lord I Have Cried, the Entrance, the Lamp-Lighting Hymn of Thanksgiving, sung magnificently in Greek by all the clergy, the Prokeimenon, the Litanies, the Aposticha, all were experienced in a new key, in unprecedented depth, in a great liturgical dance of solemnity and joy, which culminated in the procession with icons around the church, singing the Troparion of the Feast, and stopping at each of the cardinal points of the compass, South, East, North and West to sing the small litany. Finally, when Archbishop Gerasimos stood before the congregation and intoned the mighty words of the Synodikon of March 11, 843 AD:
As the prophets beheld, as the Apostles have taught, as the Church has received, as the teachers have dogmatized, as the universe has agreed, as grace has illumined, as truth has revealed, as falsehood has been dispelled, as wisdom has presented, as Christ has triumphed; this we believe, this we declare, this we preach: Christ our true God, and His saints we honor in words, in writings, in thoughts, in sacrifices, in temples, in icons, on the one hand bowing down and worshipping Christ as God and Master, on the other hand honoring the saints as true servants of the Master of all, and offering to them due veneration. This is the faith of the Apostles! This is the faith of the Fathers! This is the faith of the Orthodox! This is the faith which has established the Universe!
…all could hear in those majestic phrases the triumph of Orthodoxy, a triumph of faith and truth and beauty grounded in the absolute and immutable reality of the salvific Love of the Triune God, resounding throughout human history to the very heights of heaven. At the end of the service, Archbishop Benjamin gave an excellent homily, sober and compunctionate, and then invited everyone to the Church Hall for refreshments in honor of the Feast. I am sure that as the evening ended there was in every heart the feeling of inestimable joy in participating in the gift of Orthodoxy. —a parishioner
• PROPERTY IMPROVEMENTS
Concrete for a new sidewalk (between the Hall and the Rectory) was poured this week. The old concrete was too narrow and had many cracks and uneven sections. Next steps: install irrigation in the lawn, plant a tree and grass. Work also began this week, per the Master Plan and 2015 Budget, to build parish offices in the third bay of the storage building. There will be an entryway/secretary office leading to a private office for the rector. The current parish office will become the Parish Library (currently “hidden” in the Sunday School building) and high school classroom.
Appendix 9. Protocols (CIP 0 - CIP 6)
Mining and Metallurgical Company “Norilsk Nickel”
Institute of Criminalistics of the Russian Federal Security Service
State Research Institute for Rare Metals
Complex Procedure
for identification of the Nature and the Source of Origin of Precious Metal Containing Products of Mining and Metallurgical Operations
Moscow, 2006
1. The purpose and the scope of the Complex Procedure
The present Complex Procedure is aimed at identifying the nature and the source of origin of materials produced from ores containing precious metals as well as their mixtures and mixes with other materials.
The Complex Procedure employs a combination of analytical methods to determine the following:
- elemental composition of a substance, including contaminants;
- phase composition of a substance;
- elemental composition (and morphology) of individual microparticles in a substance thus allowing a semi-quantitative determination of the substance in terms of a limited number of microparticle groups where the groups are considered to represent individual phases.
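The last point above treats each group of microparticles as one phase and summarizes the sample semi-quantitatively by group. A minimal sketch of that counting step is shown below; the particle records, the group labels, and the fraction-by-count rule are illustrative assumptions, not part of the actual Complex Procedure.

```python
from collections import Counter

def group_fractions(particles):
    """Semi-quantitative summary: fraction of particles per group,
    where each group is considered to represent an individual phase.
    (Hypothetical data model for illustration only.)"""
    counts = Counter(p["group"] for p in particles)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy particle list such as might come from SEM-EDX classification.
particles = [
    {"group": "sulfide"}, {"group": "sulfide"},
    {"group": "oxide"}, {"group": "metallic PGM"},
]
print(group_fractions(particles))  # sulfide 0.5, oxide 0.25, metallic PGM 0.25
```

A real implementation would also carry each particle's elemental spectrum and morphology, but the group-fraction summary is the quantity compared against the RDB.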
The information obtained by these methods is compared to the corresponding information in the Reference Data Base (hereinafter - RDB) in order to assess the nature and source of origin of an analyzed substance.
The RDB contains systematized information on precious metal-containing products produced at different process lines of metallurgical operations and during different periods of time. The RDB, containing information on 70 types of products, was started in 2003 and continues to be updated. Information on each product produced by Norilsk Nickel is summarized in a corresponding databank and “Product Data Sheet”. Databanks and “Product Data Sheets” are continuously updated as new types of products appear or as additional results of analyses of products become available.
Target materials of this procedure are:
- precious metal- (PGMs, gold and silver) containing products and intermediates of mining and metallurgical operations, withdrawn from illegal circulation; their mixtures and the mixes with other materials;
- microresidues left on the surface of evidence material and other objects that are assumed to have been in contact with the stolen materials (e.g. dust, dirt on the floor, furniture, clothing, tools, packing, car covers and other parts of a car’s interior, etc.), as well as microresidues on bodies, in the hair or under the nails of a crime suspect. Methods for sample collection and handling of microresidues are described in detail in the scientific literature\(^{1,2}\) and are therefore not included in the protocols.
---
\(^{1}\) “Criminalistics”: Textbook. Chief Editor N.P. Yablokov; 3rd edition – M., “Youth” (“Junost”), 2005. P. 257.
\(^{2}\) Khrustalev V.N. “Conceptual Fundamentals of Criminalistic Analysis of Substances, Materials and Products thereof”. Author’s abstract of dissertation/thesis made by J.D., M. – 2004. P.p. 41-43.
Criminalistic examination of such materials pursues the following objectives:
- identification of the confiscated material as a certain type of product;
- determination of the provenance of the material (company, shop, process line).
2. The procedure for determining the nature and the source of origin of a substance
In order to identify the nature and the source of origin of a substance as a product of any operating unit or of a particular plant, it is necessary to compare the results of the study of the sample with the information contained in the RDB. An overview of the analytical methods and their corresponding protocols within the Complex Procedure is given in Figure 1.
[Figure 1. Overview of the analytical methods and their corresponding protocols within the Complex Procedure]
At the first stage of the study, the bulk elemental composition of the substance is determined by Scanning Electron Microscopy with X-Ray Spectral Microanalysis (SEM-EDX) in accordance with Protocol 1 of the Complex Procedure. The results are used in the preliminary identification of the substance, and for the determination of the sample preparation method (Protocol 2) for the ICP-MS and ICP-OES analyses.
The next stage includes the determination of the elemental composition by Inductively Coupled Plasma Optical Emission Spectrometry (Protocol 3) and Inductively Coupled Plasma Mass-Spectrometry (Protocol 4), and the study of the phase composition by X-Ray Diffractometry (Protocol 5). The choice between Protocols 3 and 4 depends on the elements to be determined and their concentrations (see Paragraph 2 in Protocols 3 and 4). The results of each study are compared with the data in the RDB. In the case of a full match of the sample characteristics with one of the RDB products (i.e. when all diagnostic features overlap), a conclusion as to the type of this product and its source of origin can be made.
If the features of the sample analyzed by the aforementioned methods do not match any of the product types represented in the RDB, then the hypothesis that the sample is a mix of products is examined. For this purpose it is necessary to examine the elemental composition and morphology of individual particles of the sample using SEM-EDX (Protocol 6). If the features of some particles match the features of particles belonging to any product or products from the RDB, this product or a mixture of products may be present in the material under analysis. The assumption that the substance is a mixture can be further verified by comparing all previously identified features of this sample with the features of the pattern mixture of the appropriate types of products represented in the RDB (superposition method). A conclusion is made upon the results of this comparison. If no particles with the features typical of ore products containing precious metals are found, it can be concluded that such products are not present in the analyzed sample.
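The comparison logic described above (full match of all diagnostic features identifies a product; no full match triggers the mixture hypothesis) can be sketched as follows. The feature names, the toy reference entries, and the exact-equality matching rule are assumptions for illustration; the real RDB holds far richer data.

```python
def match_product(sample_features: dict, rdb: dict) -> list:
    """Return RDB product types whose diagnostic features all overlap
    with the sample (a 'full match' in the sense of the Complex Procedure)."""
    matches = []
    for product, diagnostic in rdb.items():
        # A full match requires every diagnostic feature to agree.
        if all(sample_features.get(k) == v for k, v in diagnostic.items()):
            matches.append(product)
    return matches

# Toy reference data: two product types with made-up diagnostic features.
rdb = {
    "copper-nickel matte": {"Cu_major": True, "Ni_major": True, "phase": "sulfide"},
    "PGM concentrate": {"Pt_detect": True, "Pd_detect": True, "phase": "alloy"},
}

sample = {"Cu_major": True, "Ni_major": True, "phase": "sulfide"}
print(match_product(sample, rdb))  # full match identifies the product type

# An empty result would trigger the mixture hypothesis: examine individual
# particles by SEM-EDX (Protocol 6) and compare against pattern mixtures.
```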
Application of the complete Complex Procedure is possible only if the mass of the sample is greater than 10 g. Smaller samples may in some cases result in failure to identify the full range of features as specified in the Complex Procedure. If the mass of a sample is less than 1 g, that sample can be examined only by SEM-EDX (Protocol 6).
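The mass thresholds above amount to a simple routing rule, sketched here under the stated assumptions (>10 g: full procedure; 1–10 g: procedure applicable but the feature set may be incomplete; <1 g: SEM-EDX only). The function name and return strings are hypothetical.

```python
def select_protocols(mass_g: float) -> str:
    """Route a sample to an examination scope by mass, per the
    thresholds stated in the Complex Procedure."""
    if mass_g > 10:
        return "full Complex Procedure (Protocols 1-6)"
    if mass_g >= 1:
        return "Complex Procedure; feature set may be incomplete"
    return "SEM-EDX only (Protocol 6)"

print(select_protocols(25.0))  # full procedure
print(select_protocols(0.5))   # SEM-EDX only
```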
The Complex Procedure includes the following analytical protocols:
1. Determination of the bulk elemental composition of precious metal-containing products by scanning electron microscopy with X-ray microanalysis
2. Wet acid digestion of PGM containing products for ICP-OES and ICP-MS analysis
3. Determination of the elemental composition of precious metal-containing products by ICP-OES
4. Determination of the elemental composition of precious metal-containing products by ICP-MS
5. Determination of the phase composition of precious metal-containing products by XRD
6. Determination of the elemental composition of micro particles of precious metal-containing products by scanning electron microscopy with X-ray microanalysis
Determination of the bulk elemental composition of precious metal-containing products by scanning electron microscopy with X-ray microanalysis
Author:
Quality manager:
Authorisation:
Date:
This procedure is applicable as of November 1st 2006
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|---------------------|------|---------------|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|-------------------|------------------|---------------|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
CONTENT
0 Update and review summary
0.1 Updates
0.2 Reviews
1 Title
2 Scope
3 Safety and environment
4 Definitions
5 Principle
6 Reagents and Materials
7 Apparatus and Equipment
8 Sample preparation
9 Calibration
10 Quality control
11 Procedure
12 Calculation
13 Reporting procedures including expression of results
14 Normative references and manuals
15 Method performance
1 Title
Determination of the bulk elemental composition of precious metal-containing products by scanning electron microscopy with X-ray microanalysis.
2 Scope
The method is intended for the quantitative determination of the bulk element composition of dispersed materials. This method enables the determination of the quantitative content of elements in the following concentration ranges:
- from 5 to 100 wt.% for elements from oxygen to fluorine;
- from 0.2 to 100 wt.% for elements from sodium to uranium.
3 Safety and environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
*Accuracy (trueness)*: closeness of the agreement between the mean value achieved from the series of analysis results and the adopted true value.
*Error (of measurement)*: deviation of the analysis result from the true value.
*Reference Material (RM)*: material or substance for which the property values are sufficiently homogeneous and well established to be used for the calibration of an apparatus, the assessment of a measurement, or for assigning values to materials.
*Calibration function*: functional relationship relating the measured signal intensity to the analyte quantity.
*Detection limit*: lowest content of analyte that can be detected with 95% probability using this particular method.
*Probe (subsample)*: portion of the tested material that is removed for testing, following the procedures described in the protocol to assure its representativeness.
5 Principle
The method is based on the interaction of a scanning electron beam with the sample material. During this interaction, secondary electrons and X-ray emission are generated along with a variety of other signals.
Secondary electrons are emitted from the atoms at the surface of the sample directly exposed to the electron beam. Collection and display of these secondary electrons forms a readily interpretable image of the surface, whose contrast reflects the sample morphology.
The X-ray emission depends on the elemental composition of the analyzed material. Energy measurement of the characteristic X-ray emission permits the determination of qualitative elemental composition. Measurement of the intensity of a characteristic line is used to calculate quantitatively the concentration of the associated element. Concentrations of the elements are calculated with the use of physical models of the interaction between the electron beam and the sample material.
6 Reagents and Materials
- Technical, particle free, distilled ethyl alcohol (96%).
7 Apparatus and Equipment
- Scanning Electron Microscope with Energy Dispersive Microanalyzer providing the determination of elements from boron to uranium with spectral resolution better than 135 eV for the Kα line of Mn at a count rate of 1000 counts per second;
- Ultrasonic disperser with frequency of 20-33 kHz;
- Adjustable volume pipette of 200-1000 μL;
- Sample mounts (stubs, studs) for scanning electron microscope;
- Disposable carbon conductive double sided adhesive tapes for scanning electron microscope sample mounts;
- Set of reference materials for EDS calibration;
- Optical binocular microscope with magnification from 20 to 100 times.
8 Sample preparation
Separate a probe (subsample) weighing 0.5 g from the powder sample by repeated quartering and place it into a disposable 1.5 mL plastic test tube. Add 1 mL of ethyl alcohol and mix the contents using the ultrasonic disperser for 5 minutes.
During this ultrasonic mixing, take 0.2 mL of the suspension by a micropipette and place it on the scanning microscope sample mount covered with a conducting carbon film.
Dry the sample stage with the suspension on it at ambient temperature. Use an optical binocular microscope (20-100 times total magnification) to control the process of suspension transfer on the sample mount. The dried sediment must form a thick layer of micro particles that does not crumble. If micro particles form crumbly aggregates, the process of sample preparation should be repeated using a newly prepared sample mount.
9 Calibration
Prior to beginning an analysis, verification of the operational condition of the scanning electron microscope with the X-ray microanalyzer must be established. This includes presence of system peaks, accuracy of magnification, and determination of spectral energy calibration and resolution. Energy calibration of the Energy Dispersive Microanalyzer is performed every 2 hours of equipment operation using a "Set of reference materials for X-ray microanalysis" in accordance with the Operating Manual.
10 Quality control
Appropriate control of the analytical results is executed in accordance with ISO 5725 requirements using natural minerals as Reference Materials. Recommended reference minerals are Wollastonite, Zircon, and Rhodonite.
Quantitative analysis accuracy is considered satisfactory when the following condition is met:
\[
\frac{|C - C_K|}{C_K} \leq 0.05,
\]
(1)
Where,
- \( C_K \) is the accepted value of the element mass concentration (more than 1%) in the reference mineral
- \( C \) is the measured average (\( n=5 \)) element mass concentration in the reference mineral
If condition (1) is not achieved, the microanalyzer must be recalibrated (see paragraph 9).
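For illustration only, the acceptance test in condition (1) can be sketched in Python. The 0.05 tolerance and the n = 5 replicate mean come from this protocol; the mineral, its accepted value, and the measured values below are hypothetical:

```python
def edx_calibration_ok(measured, accepted, tol=0.05):
    """Condition (1): |C - C_K| / C_K <= 0.05, where C is the mean of
    the replicate measurements and C_K the accepted concentration."""
    c = sum(measured) / len(measured)
    return abs(c - accepted) / accepted <= tol

# Hypothetical n = 5 measurements of Ca (wt.%) in a Wollastonite RM
# with an assumed accepted value of 34.5 wt.%
runs = [34.1, 34.8, 34.6, 34.3, 34.9]
print(edx_calibration_ok(runs, accepted=34.5))  # True: within 5%
```

If the check fails, the microanalyzer is recalibrated as described in paragraph 9.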
11 Procedure
Prepare the scanning electron microscope and energy dispersive microanalyzer according to their Operation Manuals. Specific values of instrument operating parameters will depend upon the specific model of instrument used.
Examples of measurement parameters, provided for reference, are as follows:
- Accelerating potential: 20 kV;
- Field of vision: 2.0 x 2.0 mm;
- Spectrum integral intensity: \(\geq 300000\) counts;
- Spectral resolution \(\leq 135\) eV for Mn-K\(\alpha\);
- Element range: from Oxygen to Uranium;
- Concentration range: from 0.2 to 100 wt. %.
The integral (bulk) elemental composition of a substance is defined by measuring the integral X-ray spectrum emitted by the collection of micro particles on the sample stage in the field of vision of the electron microscope. The field of vision is chosen so that the maximum possible number of micro particles are in full view at a time. The number of micro particles must exceed 1000.
The bulk elemental composition is based on the average of 5 measurements for which the fields of vision are not overlapping.
12 Calculation
At the first stage of processing each obtained spectrum, qualitative element analysis is conducted on the basis of the locations of characteristic lines. If characteristic lines overlap, a best estimate of the elements present in the micro particles is checked with the help of an element composition calculation (using the software of the analyzer). An element is considered present if its calculated concentration is greater than the detection limit.
Quantitative content of the detected elements is calculated using software supplied with the analyzer. For each element detected in the examined substance, the range of concentrations determined in the five analyses is calculated.
The results on the bulk element composition are used for preliminary identification of the sample material and the choice of analytical methods for its further analysis (including sample preparation methods; see Step 2 of the CIP-0 Protocol).
13 Reporting procedures including expression of results
Analysis results are recorded in a form required by the examining laboratory’s reporting protocol. In addition to the analysis results, the protocol also must include:
- date of the testing,
- information about the expert (educational qualification, area of expertise, length of service as an expert, position held),
- incoming sample data (source of the sample's origin; who performed the sampling, when, and by what method),
- data about the number of executed measurements on the basis of which analysis results were obtained.
14 Normative references and manuals
ISO 5725-1 through ISO 5725-6, Accuracy (trueness and precision) of measurement methods and results. Part 1/Cor1:1998, Part 2/Cor1:2002, Part 3/Cor1:2001, Part 4:1994, Part 5/Cor1:2005, Part 6/Cor1:2001.
ISO/IEC 17025:2005 General Requirement for the Competence of Testing and Calibration Laboratories.
The Fitness for Purpose of Analytical Methods: A Laboratory Guide to Method Validation and Related Topics: 1998 (EURACHEM).
15 Method performance
Relative error is better than 15% for elements from sodium to uranium and 30% for elements from oxygen to fluorine, except in cases where there are peak overlaps for which accurate corrections cannot be made.
For spectra with a total intensity of 300000 counts, the detection limits are:
- from oxygen to fluorine – 5 percent by weight;
- from sodium to uranium – 0.2 percent by weight.
Wet acid digestion of PGM containing products for ICP-OES and ICP-MS analysis
Author:
Quality manager:
Authorisation:
Date:
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|---------------------|------|---------------|
| | | | | |
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|-------------------|------------------|---------------|
| | | | |
| | | | |
| | | | |
CONTENTS
0 Update and review summary 2
1 Title 4
2 Scope 4
3 Safety and Environment 4
4 Definitions 4
5 Principle 4
6 Reagents and Materials 4
7 Apparatus and Equipment 4
8 Sample preparation 5
9 Calibration 6
10 Quality Control 6
11 Procedure 6
12 Calculation 7
13 Reporting procedures including expression of results 7
14 Literature and manuals 7
15 Method performance 8
1 Title
Wet acid digestion of PGM containing products for ICP-OES and ICP-MS analysis.
2 Scope
This procedure is intended for full acid digestion of PGM-containing ore concentrates, semi-products of their pyro- and hydrometallurgical processing, and also final (commodity) concentrates for subsequent ICP-OES and ICP-MS analysis.
3 Safety and Environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
“Tsar’s vodka” (aqua regia): freshly prepared 3:1 (v:v) mixture of concentrated HCl and HNO₃.
Probe (subsample): portion of the tested material that is removed for testing, following the procedures described in the protocol to assure its representativeness.
5 Principle
The method is based on dissolution of the examined sample’s probe in inorganic acids. If the sample has not dissolved completely, the sediment is melted together with barium peroxide or sodium peroxide and the resulting fusion product is dissolved using inorganic acids.
6 Reagents and Materials
- De-ionized water of specific resistance 18 MΩ·cm;
- Analytical grade Nitric acid;
- Analytical grade Hydrochloric acid (concentrated);
- Analytical grade Hydrochloric acid diluted to 0.3 vol.%, 10 vol.%, 15 vol.%, 20 vol.%;
- “Tsar’s vodka”;
- Analytical grade Sulfuric acid 10 vol.%;
- Analytical grade Hydrofluoric acid (concentrated);
- Analytical grade Barium peroxide;
- Analytical grade Sodium peroxide;
- Analytical grade Sodium sulfate;
- Reference Materials having a composition similar to the samples being tested.
7 Apparatus and Equipment
- Analytical balance with precision equal to or better than 0.001 g;
- Electric oven with closed coil;
- Muffle furnace providing heating temperature up to 1000°C;
- Drying oven with temperature regulation providing maintenance of the required temperature up to 150°C;
- Adjustable pipettes with 1, 2, 5 and 10 ml marks;
- 100, 250 ml volumetric flasks;
- Graduated beakers, volume 50, 100 ml;
- Glass beakers, volume 100, 250, 300, 600 ml;
- Conical glass funnels # 5;
- 50-100 ml Teflon beakers with lids;
- Glass-carbon bowls (Teflon beakers), volume 200 ml;
- Watch glass;
- Agate mortar and pestle;
- Corundum crucibles;
- "Blue band" de-ashed paper filters;
- Equipment for crushing/homogenization of probes (ball crusher or disk mill fitted with tungsten carbide components).
8 Sample preparation
Using the quartering method, select a probe having a mass of 100 g from the received sample. If the mass of the sample is less than 500 g, the probe mass should be 10 g.
If the mass of the sample is not more than 10 g, it is submitted for examination in full.
Select a probe from a Reference Material to be prepared along with the test samples. Select one or more Reference Materials that are similar to the composition of the test samples as determined in Protocol CIP 1. The available Reference Materials appropriate for use with the RDB are given in Table 1.
Table 1.
Reference Materials for use with the RDB. N refers to Transpolar Branch of OAO GMK Norilsk Nickel and K refers to Kolskaya GMK.
| Sample Code (Passport Number) | Sample Identification |
|-------------------------------|-----------------------------|
| N18 | Nickel Sludge |
| N19 | Copper Sludge |
| N20 | KP-1 Grade Concentrate |
| N21 | KP-2 Grade Concentrate |
| K11 | Nickel Sludge |
| K16 | Copper Sludge |
| K22 | Platinum-Palladium Concentrate |
| K25 | Dried Copper Sludge |
Dry the probes for samples and Reference Materials at 105°C to constant weights and homogenize them by means of crushing (abrasion) in an agate mortar or with the help of a mill.
9 Calibration
Not required
10 Quality Control
The completeness of sample dissolution is judged by the following methods:
- Visually by the absence of sediment;
- Dilution of appropriate reference materials;
- Batch variation method.
If using the batch variation method, one should prepare four additional batches 1/5 the size indicated in paragraph 11. The compositions of these additional probes are measured in accordance with Protocols 3 and 4. The measurement results of these diminished probes must coincide with the measurement results of the regular probes, within the limits of error calculated by the t-criterion ($\Delta = 3.18 \times MSD$, where MSD is the mean-square deviation).
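A minimal Python sketch of this batch variation check, assuming MSD is taken as the standard deviation of the four regular-batch results and 3.18 as the corresponding t-value for n = 4; all concentration values below are hypothetical:

```python
import statistics

def batches_coincide(regular, diminished):
    """Batch variation check: the mean of the 1/5-size batches must lie
    within Delta = 3.18 * MSD of the mean of the regular batches, where
    MSD is the mean-square (standard) deviation of the regular results."""
    delta = 3.18 * statistics.stdev(regular)  # t(0.95, f=3) for n = 4
    return abs(statistics.mean(diminished) - statistics.mean(regular)) <= delta

# Hypothetical Pt concentrations (wt.%) in four regular and four 1/5-size batches
regular = [0.262, 0.258, 0.266, 0.260]
diminished = [0.259, 0.264, 0.257, 0.263]
print(batches_coincide(regular, diminished))  # True: means agree within Delta
```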
11 Procedure
Method 1$^1$ - Samples with low precious metals content (less than 0.3 percent by mass).
Select 4 batches of 1.00 g each by repeated quartering from the powder sample, and place each one into a Teflon beaker (glass-carbon bowl). Wet each sample with 1 ml of de-ionized water and add 50 ml of "tsar's vodka" over 0.5-1 hour while heating to a slow boil. Cool the solutions and add 10 ml of hydrofluoric acid. Let the solutions stand for 2 hours at room temperature. Evaporate the resulting solutions at a temperature of 60-70°C to the condition of wet salts. Then, add 15 ml of "tsar's vodka" and 5 ml of hydrofluoric acid and again evaporate the solutions to the condition of wet salts. Repeat the treatment with "tsar's vodka", then add 10 ml of concentrated hydrochloric acid and evaporate the solution to the condition of wet salts. Finally, add 10 ml of concentrated hydrochloric acid and 30-40 ml of de-ionized water and boil the solution for 5-10 min.
The completeness of sample dissolution is judged visually by the absence of sediment.
If there is no sediment, pour the resulting solution into a 100 ml volumetric flask, add 10 vol.% solution of the hydrochloric acid to reach the mark, and then mix the contents.
If sediment is present, see Additional method.
Method 2$^1$ - Samples with high precious metals content (above 0.3 percent by mass)
Select 4 subsamples weighing 0.50 g each by repeated quartering from the powder sample, and place them into separate Teflon beakers (glass-carbon bowls). Wet each sample with 1 ml of de-ionized water and then add 16 ml of "tsar's vodka". Let the resulting solutions sit for 30 min at room temperature and then for 1.5-2 hours while heating to a temperature of 60-70°C. Add 3 ml of hydrofluoric acid and evaporate to the condition of wet salts, then add 10-15 ml of concentrated hydrochloric acid and again evaporate to the condition of wet salts. Repeat the HF
$^1$ For determination of Arsenic, Selenium and Tellurium the following method of probe preparation is used.
Place a probe batch of 0.2 g into a 300 cm$^3$ beaker, then add 30 cm$^3$ of nitric acid and 1-2 cm$^3$ of bromine. Place the beaker, covered with a watch glass, in an exhaust hood for 1 hour for sulfur oxidation. Then warm the beaker on an electric hotplate for 20-30 minutes to evaporate the bromine. After that, cool and wash off the beaker walls with water. Heat the beaker until dissolution is complete, cool the solution, pour it into a volumetric flask (100-250 ml), and add de-ionized water to the mark. Let insoluble sediment settle and then filter the solution through a "blue band" filter.
and HCl additions two more times. Dissolve the wet salts by heating to a temperature of 60-70°C in 50 ml of concentrated hydrochloric acid. Pour the solution into a 100 ml volumetric flask, bring it to the mark with 20% hydrochloric acid, and mix well. The completeness of sample dissolution is judged visually by the absence of sediment.
If sediment is present, see the **Additional method** below.
**Additional method** \(^2\) – If residue is present
For either Method 1 or Method 2, if the probes did not dissolve completely, filter the solutions with the residues through dual "blue band" paper filters into 600 ml beakers. Wash the filters with sediments 3-4 times with hot 20 vol.% hydrochloric acid and 3-4 times with hot de-ionized water. Preserve the filters with sediment. Evaporate the filtrate to a volume of 10-20 ml, then add 20 ml of hydrochloric acid and cool (Filtrate 1).
Place the filter with residue into a corundum crucible and place it in an oven. Raise the temperature gradually to 600-650°C to dry, ash and calcine the material. Hold the temperature for a period of 30-40 minutes. Cool the crucible, mix its contents with barium peroxide (mass proportion 1:10), and place the mixture in a separate corundum crucible for further melting in a muffle furnace. Place the crucibles with the mixtures in a warm (\( \leq 200^\circ C \)) muffle furnace and slowly heat up to 900°C over 2 hours. Cool the crucibles with the fusion products at room temperature. Place the crucibles in 250 cm\(^3\) beakers, pour 100 ml of 15 vol.% hydrochloric acid over the contents, cover the beakers with watch glasses, and dissolve the fusion\(^3\). After dissolution of the fusion is finished, extract each crucible from the solution with the help of a glass rod and wash with 15 vol.% hydrochloric acid and then water. Heat each solution to the point of full chemical decomposition of barium peroxide and then add it to Filtrate 1.
Evaporate the combined Filtrate 1 to wet salts. Add 20 ml of analytical grade hydrochloric acid and add water up to 100 ml. Heat to boiling, add 1-2 ml of hot (10 vol.%) sulfuric acid drop by drop, and then also add a solution of sodium sulfate drop by drop until a transparent solution is obtained upon addition of the last drop. Cool the solution, filter it through a "blue band" filter and wash the sediment 5-6 times with 0.3 vol.% hydrochloric acid. Evaporate the filtrate to a volume of 20-30 ml. Pour it into a 100 ml volumetric flask and bring to the mark with 10 vol.% hydrochloric acid. Mix the contents of the volumetric flask well.
**Note:** If barium peroxide is not available, sodium peroxide can be used to dissolve the sediment.
12 Calculation
Not required
13 Reporting procedures including expression of results
Solution results are recorded in a form required by the examining laboratory's reporting protocol. The report must record in what form the sample was received (powder, bar, cake, etc.), how it was ground, whether any residue was left after dissolution, and what actions were taken to dissolve this residue.
14 Literature and manuals
\(^2\) In order to determine Barium (Sodium) and Sulfur, an additional aliquot must be taken from the solution before using the **Additional method**.
\(^3\) If dark residues are present on the filter, the filter must be ashed and the smelting procedure repeated.
15 Method performance
Not required
Determination of the elemental composition of precious metal-containing products by ICP-OES
Author:
Quality manager:
Authorisation:
Date:
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|---------------------|------|---------------|
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|--------------------|------------------|---------------|
| | | | |
| | | | |
CONTENTS
0 Update and review summary 2
0.1 Updates 2
0.2 Reviews 2
1 Title 4
2 Scope 4
3 Safety and Environment 4
4 Definitions 4
5 Principle 5
6 Reagents and Materials 5
7 Apparatus and Equipment 5
8 Sample preparation 5
9 Calibration 5
10 Quality Control 6
11 Procedure 7
12 Calculation 9
13 Reporting procedures including expression of results 9
14 Normative references 10
15 Method performance 10
1 Title
Determination of the elemental composition of precious metal-containing products by ICP-OES.
2 Scope
This procedure is intended for determining sodium, aluminum, magnesium, sulphur, phosphorus, potassium, calcium, chromium, manganese, iron, cobalt, arsenic concentrations in the range of $1 \cdot 10^{-4}$ to 100 weight % in the tested material; as well as titanium, nickel, copper, selenium, molybdenum, ruthenium, rhodium, palladium, silver, tin, antimony, tellurium, barium, tungsten, platinum, gold and lead concentrations in the range of $1 \cdot 10^{-2}$ to 100 weight %, using the method of optical emission spectroscopy with inductively coupled plasma.
3 Safety and Environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
*ICP-OES*: method of inductively coupled plasma optical emission spectroscopy.
*Error (regarding a single analysis result)*: difference between a test result and the accepted reference value.
*Error index “Δ”*: limits of the error associated with a test result determined under reproducibility conditions with the stipulated probability.
*Precision*: closeness of agreement between independent test results obtained under stipulated conditions.
*Standard Deviation*: measure of how values are dispersed about a mean in a distribution of values.
*Repeatability*: precision under repeatability conditions, i.e. conditions where independent test results are obtained with the same method on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time.
*Repeatability Standard Deviation*: standard deviation of test results obtained under repeatability conditions.
*Repeatability Limit “r”*: value less than or equal to which the absolute difference between two test results obtained under repeatability conditions may be expected to be with a probability of 95%.
*Reproducibility*: precision under reproducibility conditions, i.e. conditions where test results are obtained with the same method on identical test items in different laboratories with different operators using different equipment.
*Reproducibility Standard Deviation*: standard deviation of test results obtained under reproducibility conditions.
*Reproducibility Limit “R”*: value less than or equal to which the absolute difference between two test results obtained under reproducibility conditions may be expected to be with a probability of 95%.
*Reference Material (RM)*: material or substance of the subject for analytical testing
sufficiently homogeneous regarding one or several reliably determined characteristics to be used for the measurement method assessment.
*Calibration function*: functional relationship relating the measured signal intensity to the analyte quantity.
5 Principle
The method is based on measuring the intensity of the spectral line caused by excitation of the analyte element's atoms in an inductively coupled plasma. During these measurements, a solution of the sample under analysis is sprayed into the plasma. Quantification of an element's concentration is made by comparison of the intensity of its spectral line with those of a series of calibration standard solutions.
6 Reagents and Materials
- 99.996% Gaseous argon;
- Ultra-pure water, >18 MΩ·cm;
- Ultra pure hydrochloric acid, 15 vol.%;
- Standard solutions of the elements to be analyzed with mass concentration 1000 μg/ml.
7 Apparatus and Equipment
- Inductively Coupled Plasma Optical Emission Spectrophotometer with computer controlled operating and data handling system.
- Adjustable pipettes with 200-1000 μl and 1.0-5.0 ml marks;
- 25 ml and 250 ml volumetric flasks;
8 Sample preparation
Executed in accordance with CIP protocol # 2.
9 Calibration
Prepare calibration solutions by dilution of the standard solutions with mass concentration 1000 μg/ml on the day of use. Concentrations of the determined elements are listed in Table 1.
**Table 1**
*Mass concentration of the test elements in calibration solutions*
| Calibration solution No. | Mass concentration of each element, μg/ml |
|--------------------------|------------------------------------------|
| 0 - calibration blank | 0 |
| 1 | 10 |
| 2 | 1.0 |
| 3 | 0.10 |
**Preparation of calibration solution No. 1:**
Pipet 2.5 ml of the standard solution (mass concentration of 1000μg/ml) of each of the test elements into a 250 ml volumetric flask. Then, add the diluted hydrochloric acid solution (15 vol%) to fill the flask to the mark.
Preparation of calibration solution No. 2:
Pipet 2.5 ml of calibration solution No. 1 into a 25 ml volumetric flask. Then, add the diluted hydrochloric acid solution (15 vol%) to fill the flask to the mark.
Preparation of calibration solution No. 3:
Pipet 2.5 ml of calibration solution No. 2 into a 25 ml plastic test-tube. Then, add the diluted hydrochloric acid solution (15 vol%) to fill the tube to the mark.
‘Calibration blank’
The diluted hydrochloric acid (15 vol%) which was used for preparation of calibration solutions is the ‘calibration blank’.
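The three serial dilutions above can be verified with a short Python sketch; the function name is illustrative, while the aliquot volumes, flask volumes, and 1000 μg/ml stock concentration come from this section:

```python
def diluted_conc(stock_ug_ml, aliquot_ml, final_ml):
    """Mass concentration after diluting an aliquot to the final volume."""
    return stock_ug_ml * aliquot_ml / final_ml

c1 = diluted_conc(1000.0, 2.5, 250.0)  # calibration solution No. 1
c2 = diluted_conc(c1, 2.5, 25.0)       # calibration solution No. 2
c3 = diluted_conc(c2, 2.5, 25.0)       # calibration solution No. 3
print(c1, c2, c3)  # 10.0 1.0 0.1 -- matching Table 1
```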
Calibrate the spectrometer using solutions No. 1, 2, 3 and ‘calibration blank’. Measure the ‘calibration blank’ first and then the calibration solutions in decreasing order of their numbers. From the intensity of the test elements’ emission lines, subtract the intensity of the ‘calibration blank’. For each element, acquire 3 scans and calculate an average intensity value from these measurements.
Construct a calibration curve for each analytical wavelength within the following axes: average intensity (after subtracting the ‘calibration blank’) vs. mass proportion of the tested element in the calibration sample. Regression factors are automatically calculated and saved in the computer memory until the next calibration.
Calibration curves should be linear and have a linear correlation coefficient of at least 0.999. If calibration curves do not satisfy this condition, the spectrometer calibration must be repeated.
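The linearity acceptance test (correlation coefficient of at least 0.999) can be sketched in Python. The blank-corrected intensity values below are hypothetical, and the correlation coefficient is computed from first principles rather than with the spectrometer software:

```python
def pearson_r(xs, ys):
    """Linear correlation coefficient of the calibration points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical blank-corrected intensities for the blank and the
# 0.10 / 1.0 / 10 ug/ml calibration solutions
conc = [0.0, 0.10, 1.0, 10.0]
intensity = [0.0, 52.0, 515.0, 5140.0]
print(pearson_r(conc, intensity) >= 0.999)  # True: calibration accepted
```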
10 Quality Control
Quality Control of analysis results should be conducted in accordance with the regulations of the ISO 5725 with the use of Reference Materials, close to the tested samples in their chemical composition. Also, the difference between test results and corresponding value of the Reference Material must be smaller than the Error index “Δ”.
If unacceptable results are obtained, the ICP’s operating conditions and the spectrometer alignment must be checked and the calibration must be repeated. If the repeated calibration does not provide a smaller difference between the test result and the corresponding value of the Reference material, most likely the sample preparation was not done correctly. The samples must be digested again according to Protocol CIP 2.
Stability control of the Calibration Curves is conducted after measurement of each 10 samples.
Calibration solutions are used for the stability control of the Calibration Curves. The mass concentration of determined elements in the Calibration solutions should be in the range of the measured mass concentrations.
Calibration is considered stable when the following condition is fulfilled:
\[
\frac{|C - C_K|}{C_K} \leq 0.05,
\]
(1)
Where:
C_K – is the value of the element mass concentration in the Calibration solution, μg/ml;
C – is the measured value of the element mass concentration, μg/ml.
If condition (1) is not achieved, the spectrometer must be calibrated again.
**Suitability evaluation** of duplicate results on the same subsample is carried out in the following manner.
The arithmetic mean of two measurements executed on the same subsample is accepted as the final result of the analysis when the difference between them is within limits of the *Repeatability Limit* “r_2”.
If the absolute deviation between the results of two measurements exceeds “r_2” one must obtain two more measurement results.
If in this case the difference between the biggest and the smallest values of the 4 measurements is equal to or less than the critical range CR_{0.95,n=4} (calculated for a confidence level of P=95%), then the arithmetic mean of the 4 measurements should be recorded as the final result.
If the difference between the biggest and the smallest of the four measurement results is bigger than the critical range for four measurements, then the median of the four measurements should be recorded as the final result, calculated in accordance with the following formula.
\[
\overline{X} = \text{med}\{X_1 < X_2 < X_3 < X_4\} = \frac{X_2 + X_3}{2},
\]
(2)
Where,
X_2 – the second smallest result;
X_3 – the third smallest result.
Deviation between the results of the initial and repeatable analysis must not exceed the *Reproducibility Limit* R.
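The suitability rules above (duplicate within the repeatability limit, critical range for four results, median of four per formula (2)) can be combined into one Python sketch; the numeric limits and measurement values in the example are hypothetical:

```python
def final_result(results, r2, cr4):
    """Report the value per the suitability evaluation: mean of two if
    within the repeatability limit r2; otherwise, for four results, the
    mean if the range is within CR(0.95, n=4), else the median (2)."""
    if len(results) == 2:
        if abs(results[0] - results[1]) <= r2:
            return sum(results) / 2        # mean of the duplicate
        raise ValueError("obtain two more measurements")
    xs = sorted(results)
    if xs[3] - xs[0] <= cr4:
        return sum(xs) / 4                 # mean of the four
    return (xs[1] + xs[2]) / 2             # median, formula (2)

print(final_result([5.1, 5.2], r2=0.3, cr4=0.5))            # mean of two
print(final_result([5.1, 5.2, 5.3, 6.4], r2=0.3, cr4=0.5))  # median of four
```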
11 Procedure
11.1 Procedure on determination of the element composition.
Prepare the spectrometer as described in its Operation Manual.
For an Optima 3000 (Perkin Elmer, USA), the following working parameters are given as guidelines of typical operating conditions. Daily operating conditions will vary slightly from these values in order to optimize instrumental response:
- ICP generator working frequency: 40 MHz
- Output power: 1.3 kW
- Plasma forming argon flux: 15 l/min.
- Transporting argon flux: 0.8 l/min.
- Cooling argon flux: 0.5 l/min.
- Observation height: 15 mm;
- Sample feed rate: 0.85 ml/min.
The wavelengths of lines recommended\(^1\) for this analysis are shown in Table 2.
**Table 2**
**Recommended wavelengths of spectral lines**
| Test element | Wavelength, nm | Test element | Wavelength, nm | Test element | Wavelength, nm |
|--------------|----------------|--------------|----------------|--------------|----------------|
| Aluminum | 396,150 | Lead | 220,353 | Ruthenium | 240,272 |
| Antimony | 217,579 | Magnesium | 279,553 | Selenium | 196,026 |
| Arsenic | 188,979 | Manganese | 260,568 | Silver | 338,289 |
| | 193,759 | | | | 328,068 |
| Barium | 455,403 | Molybdenum | 202,030 | Sodium | 589,592 |
| Calcium | 396,847 | Nickel | 231,604 | Sulphur | 180,669 |
| Chromium | 205,560 | Palladium | 340,462 | Tellurium | 214,283 |
| Cobalt | 228,616 | Phosphorus | 178,221 | Tin | 189,927 |
| | | | 185,943 | | |
| | | | 213,618 | | |
| Copper | 324,756 | Platinum | 265,946 | Titanium | 334,905 |
| Gold | 242,795 | Potassium | 766,485 | Tungsten | 207,912 |
| Iron | 238,204 | Rhodium | 343,489 | | |
In the process of measurements, mutual influences between elements should be taken into consideration and, if necessary, a correction procedure should be applied.
Spectrometer calibration is done in accordance with § 9 of this Protocol.
During the analysis, inject blank solutions and solutions of tested samples in the spectrometer and measure the intensities of analytical lines of the determined elements. Subtract the intensity of the blank from each measured line. Obtain three measurements for each solution and calculate the mean value of the measured intensities for each analytical line. Use the corresponding calibration curve, to determine the mass concentration of each element in each tested subsample and record the values obtained.
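As an illustration of the measurement step above (all names hypothetical), three blank-corrected intensity readings are averaged and converted to a concentration through a linear calibration curve I = a·C + b:

```python
# Illustrative sketch only: convert replicate line intensities to a
# mass concentration via a stored linear calibration (slope a, intercept b).

def concentration(intensities, blank, slope, intercept):
    corrected = [i - blank for i in intensities]   # subtract the blank
    mean_i = sum(corrected) / len(corrected)       # mean of the 3 readings
    return (mean_i - intercept) / slope            # invert the calibration
```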
**11.2 Procedure for identification of the source of a sample of unknown origin.**
The procedure for interpretation of the results of the ICP-OES measurements depends to some extent upon the type of sample being tested and the forensic question to be answered. The most straightforward application is comparison of the element concentrations determined in a sample of questioned origin with the compositions of products in the RDB. A decision that the composition of the substance being tested corresponds to the composition of one specific product in the RDB can be made if the concentration of each element in the unknown substance measured using this protocol (taking into account the error index of the method) is within the variability range of the concentrations of that element in that product.
---
\(^1\) If the analyst uses a different wavelength, it should be specified in the analysis report.
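The comparison rule of § 11.2 can be sketched as follows, assuming the RDB stores a (low, high) variability range per element and the error index Δ widens the acceptance interval (the data structures are hypothetical, for illustration only):

```python
# Hedged sketch of the RDB comparison rule: a product matches when every
# measured concentration, widened by the error index delta of the method,
# falls inside that product's recorded variability range.

def matches_product(measured, delta, product_ranges):
    """measured: {element: concentration, mass %};
    delta: {element: error index};
    product_ranges: {element: (low, high)} from the RDB."""
    for elem, c in measured.items():
        low, high = product_ranges[elem]
        if not (low - delta[elem] <= c <= high + delta[elem]):
            return False
    return True
```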
In the case that the elemental composition coincides with the composition of a product in the RDB it is necessary to specify this conclusion in the analysis report.
In accordance with Protocol 0, the identification of an unknown substance can be considered complete if a correspondence with a product in the RDB is determined on the basis of elemental (Protocols 3,4) and phase (Protocol 5) composition.
The element concentrations determined using this protocol may also be used to answer other questions of forensic significance. The concentrations of elements, particularly the distribution of PGMs may be compared to world-wide databases to provide information concerning possible regions of origin for a sample. Some level of deconvolution of mixtures may be possible using the results of this protocol, when the composition of end members is known or can be estimated. Specific procedures for these and other similar interpretive evaluations cannot be provided in this analytical protocol, because they depend upon the specific case evaluations needed. The purpose of this protocol is to provide an analytical method that produces element concentrations of known accuracy and precision that can be utilized for answering a variety of questions of forensic interest.
12 Calculations
Weight % of the determined element is calculated using the following formula:
\[ X = \frac{C \cdot V}{M} \cdot 10^{-4}, \quad (5) \]
Where,
- \( C \) – mass concentration of the element determined using the calibration curve in \( \mu g/ml \);
- \( V \) – final volume of the sample solution (including any dilutions performed) in ml;
- \( M \) – weight of the subsample in g.
As the final result of an analysis of a sample, either the arithmetic mean of two measurements or the median of four measurements is reported; which of the two depends on the quality of the measurements, as specified in § 10.
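Formula (5) in a worked example: a subsample of M = 0,5 g brought to a final volume of V = 25 ml, with a measured concentration of C = 4,0 µg/ml, gives approximately 0,02 mass %:

```python
# Formula (5): X = C * V / M * 1e-4.
# The factor 1e-4 converts µg to g (1e-6) and the fraction to % (x100).

def weight_percent(c_ug_per_ml, v_ml, m_g):
    return c_ug_per_ml * v_ml / m_g * 1e-4

x = weight_percent(4.0, 25.0, 0.5)   # ~0.02 mass %
```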
13 Reporting procedures including expression of results
Analysis results are recorded in a form required by the examining laboratory’s reporting protocol. In addition to the analysis results, the report must also include:
- date of the testing,
- information about the expert (education and professional qualification, length of service as an expert, position held),
- incoming sample information (origin of the sample; who performed the sampling, when, and how),
- the results of comparison of unknown substance composition with RDB (Does unknown sample composition match with composition of any product from RDB? With what specified product does it match?).
The number of significant figures in the analysis result (element concentration) should correspond to the number of significant figures according to the Error index.
14 Normative references
ISO 5725–1 through ISO5725-6 Accuracy (trueness and precision) of measurement methods and results. Part 1/Cor1:1998, Part 2/Cor1:2002, Part 3/Cor1: 2001, Part 4:1994, Part 5/Cor1:2005, Part 6/Cor1:2001.
The Fitness for Purpose of Analytical Methods: A Laboratory Guide to Method Validation and Related Topics: 1998 (EURACHEM).
ISO/IEC 17025:2005 General Requirements for the Competence of Testing and Calibration Laboratories.
15 Method performance
Method performance is demonstrated by calculation of the Accuracy, Repeatability and Reproducibility indexes according to the formulas of ISO 5725, using the statistical relationships established in the course of “Mastering CIP in Research Analytical Centre OSC ‘Gipronikel Institute’”\(^2\). The performance characteristics shown in Tables 3, 4 and 5 were obtained using certified reference samples and are taken from the report on that work\(^2\). Comparison between the reference and measured values showed no significant bias, and the bias was therefore neglected in the calculations.
\[ \Delta = 1,96 \sigma_R; \]
\[ r_2 = Q(P,2) \sigma_r = 2,77 \sigma_r; \]
\[ CR_{0.95,n=4} = Q(P,4) \sigma_r = 3,63 \sigma_r; \]
\[ R = Q(P,2) \sigma_R = 2,77 \sigma_R; \]
\[ \sigma_R = 1,4 \sigma_r \]
Where:
- \( \Delta \) - Error index;
- \( \sigma_r \) - Repeatability Standard Deviation;
- \( \sigma_R \) – Reproducibility Standard Deviation;
- \( r_2 \) – Repeatability Limit;
- \( R \) – Reproducibility Limit;
- \( CR_{0.95,n=4} \) – critical range for four multiple determinations.
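The relationships above can be expressed as a function of the repeatability standard deviation σ_r alone, using σ_R = 1,4 σ_r as stated:

```python
# Performance statistics from a single repeatability standard deviation
# sigma_r, with the ISO 5725 factors quoted in the text.

def performance(sigma_r):
    sigma_R = 1.4 * sigma_r            # reproducibility standard deviation
    return {
        "delta": 1.96 * sigma_R,       # error index, P = 0.95
        "r2":    2.77 * sigma_r,       # repeatability limit, n = 2
        "CR4":   3.63 * sigma_r,       # critical range CR_0.95, n = 4
        "R":     2.77 * sigma_R,       # reproducibility limit
    }
```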
Metrological characteristics for precious metals are given in tables 3, 4 and for the rest of the elements – in table 5 (top values are listed).
---
\(^2\) Report on Scientific Research “Mastering CIP in Research Analytical Centre OSC ‘GIPRONIKEL Institute’, its development and improvement” Saint Petersburg. – ‘GIPRONIKEL Institute’, 2006.
Table 3
Error index “Δ” for precious metals, mass %. (P=0.95)
| Content range | Ag | Au | Pt | Pd | Rh | Ru |
|---------------------|------|------|------|------|------|------|
| from 0,0100 to 0,0200 | 0,0015 | 0,0015 | 0,0015 | 0,0015 | 0,0020 | 0,0020 |
| from 0,0200 to 0,0500 | 0,0028 | 0,003 | 0,004 | 0,004 | 0,0035 | 0,005 |
| from 0,050 to 0,100 | 0,006 | 0,007 | 0,007 | 0,007 | 0,008 | 0,010 |
| from 0,100 to 0,200 | 0,012 | 0,015 | 0,015 | 0,015 | 0,020 | 0,024 |
| from 0,200 to 0,500 | 0,028 | 0,020 | 0,025 | 0,025 | 0,030 | 0,034 |
| from 0,50 to 1,00 | 0,04 | 0,04 | 0,04 | 0,04 | 0,05 | 0,07 |
| from 1,00 to 2,00 | 0,09 | 0,09 | 0,06 | 0,06 | 0,07 | 0,09 |
| from 2,00 to 5,00 | 0,21 | 0,17 | 0,13 | 0,13 | 0,14 | 0,20 |
| from 5,00 to 10,00 | 0,30 | 0,22 | 0,21 | 0,21 | 0,22 | 0,28 |
| from 10,0 to 20,0 | 0,4 | 0,3 | 0,4 | 0,4 | 0,4 | 0,6 |
| from 20,0 to 50,0 | 1,0 | 0,7 | 0,7 | 0,7 | 0,7 | 1,3 |
Table 4.
Values of the repeatability limit $r_2$, critical range of repeated measurements $CR_{0.95,n=4}$, reproducibility limits R for precious metals (P=0.95).
**Ag**
| Mass, % | r₂ | CR₀,₉₅ (4) | R |
|--------|-----|-----------|-----|
| from 0,0100 to 0,0200 | 0,0015 | 0,0020 | 0,0021 |
| from 0,0200 to 0,0500 | 0,0028 | 0,0036 | 0,0039 |
| from 0,050 to 0,100 | 0,006 | 0,008 | 0,008 |
| from 0,100 to 0,200 | 0,012 | 0,016 | 0,017 |
| from 0,200 to 0,500 | 0,028 | 0,036 | 0,039 |
| from 0,50 to 1,00 | 0,04 | 0,05 | 0,06 |
| from 1,00 to 2,00 | 0,09 | 0,12 | 0,12 |
| from 2,00 to 5,00 | 0,21 | 0,27 | 0,29 |
| from 5,00 to 10,00 | 0,30 | 0,39 | 0,40 |
| from 10,0 to 20,0 | 0,4 | 0,5 | 0,6 |
| from 20,0 to 50,0 | 1,0 | 1,3 | 1,4 |
**Au**
| Mass, % | r₂ | CR₀,₉₅ (4) | R |
|--------|-----|-----------|-----|
| from 0,0100 to 0,0200 | 0,0015 | 0,0020 | 0,0021 |
| from 0,0200 to 0,0500 | 0,003 | 0,005 | 0,005 |
| from 0,050 to 0,100 | 0,007 | 0,009 | 0,010 |
| from 0,100 to 0,200 | 0,015 | 0,020 | 0,021 |
| from 0,200 to 0,500 | 0,020 | 0,026 | 0,028 |
| from 0,50 to 1,00 | 0,04 | 0,05 | 0,06 |
| from 1,00 to 2,00 | 0,09 | 0,12 | 0,13 |
| from 2,00 to 5,00 | 0,17 | 0,22 | 0,24 |
| from 5,00 to 10,00 | 0,22 | 0,29 | 0,31 |
| from 10,0 to 20,0 | 0,3 | 0,4 | 0,4 |
| from 20,0 to 50,0 | 0,7 | 0,9 | 1,0 |
**Pt**
| Mass, % | r₂ | CR₀,₉₅ (4) | R |
|--------|-----|-----------|-----|
| from 0,0100 to 0,0200 | 0,0015 | 0,0020 | 0,0021 |
| from 0,0200 to 0,0500 | 0,004 | 0,005 | 0,005 |
| from 0,050 to 0,100 | 0,007 | 0,009 | 0,010 |
| from 0,100 to 0,200 | 0,015 | 0,020 | 0,021 |
| from 0,200 to 0,500 | 0,025 | 0,032 | 0,034 |
| from 0,50 to 1,00 | 0,04 | 0,05 | 0,05 |
| from 1,00 to 2,00 | 0,06 | 0,08 | 0,08 |
| from 2,00 to 5,00 | 0,14 | 0,18 | 0,19 |
| from 5,00 to 10,00 | 0,21 | 0,27 | 0,30 |
| from 10,0 to 20,0 | 0,4 | 0,6 | 0,5 |
| from 20,0 to 50,0 | 0,7 | 0,9 | 1,0 |
**Pd**
| Mass, % | r₂ | CR₀,₉₅ (4) | R |
|--------|-----|-----------|-----|
| from 0,0100 to 0,0200 | 0,0015 | 0,0015 | 0,0021 |
| from 0,0200 to 0,0500 | 0,004 | 0,004 | 0,005 |
| from 0,050 to 0,100 | 0,007 | 0,007 | 0,010 |
| from 0,100 to 0,200 | 0,015 | 0,015 | 0,021 |
| from 0,200 to 0,500 | 0,025 | 0,025 | 0,034 |
| from 0,50 to 1,00 | 0,04 | 0,04 | 0,05 |
| from 1,00 to 2,00 | 0,06 | 0,06 | 0,08 |
| from 2,00 to 5,00 | 0,14 | 0,13 | 0,19 |
| from 5,00 to 10,00 | 0,21 | 0,21 | 0,30 |
| from 10,0 to 20,0 | 0,4 | 0,4 | 0,5 |
| from 20,0 to 50,0 | 0,7 | 0,7 | 1,0 |
**Rh**
| Mass, % | r₂ | CR₀,₉₅ (4) | R |
|--------|-----|-----------|-----|
| from 0,0100 to 0,0200 | 0,0020 | 0,0026 | 0,0028 |
| from 0,0200 to 0,0500 | 0,0035 | 0,005 | 0,005 |
| from 0,050 to 0,100 | 0,008 | 0,011 | 0,011 |
| from 0,100 to 0,200 | 0,020 | 0,026 | 0,028 |
| from 0,200 to 0,500 | 0,030 | 0,04 | 0,042 |
| from 0,50 to 1,00 | 0,05 | 0,07 | 0,07 |
| from 1,00 to 2,00 | 0,07 | 0,09 | 0,10 |
### Table 5
Values of error index “Δ” (P=0.95), repeatability limit $r_2$, critical range of repeated measurements $CR_{0.95,n=4}$, reproducibility limits R for base metals and contaminant elements (sodium, aluminum, magnesium, sulfur, phosphorus, potassium, calcium, chromium, manganese, iron, cobalt, arsenic, titanium, nickel, copper, selenium, molybdenum, tin, antimony, tellurium, barium, tungsten and lead).
| Mass, % | ± Δ | $r_2$ | $CR_{0.95}(4)$ | R |
|---------|-------|--------|----------------|------|
| from 0.010 to 0.020 | 0.005 | 0.005 | 0.006 | 0.006|
| from 0.020 to 0.050 | 0.010 | 0.010 | 0.013 | 0.014|
| from 0.050 to 0.100 | 0.020 | 0.021 | 0.028 | 0.028|
| from 0.100 to 0.200 | 0.030 | 0.031 | 0.041 | 0.042|
| from 0.20 to 0.50 | 0.04 | 0.042 | 0.055 | 0.056|
| from 0.50 to 1.00 | 0.05 | 0.07 | 0.09 | 0.10 |
| from 1.00 to 2.00 | 0.15 | 0.16 | 0.21 | 0.21 |
| from 2.00 to 5.00 | 0.15 | 0.20 | 0.26 | 0.28 |
| from 5.0 to 10.0 | 0.30 | 0.30 | 0.39 | 0.40 |
| from 10.0 to 20.0 | 0.4 | 0.4 | 0.5 | 0.7 |
| from 20.0 to 50.0 | 0.8 | 0.8 | 1.0 | 1.0 |
| over 50.0 | 1.4 | 1.4 | 1.8 | 1.8 |
Determination of the elemental composition of precious metal-containing products by ICP-MS
Author:
Quality manager:
Authorisation:
Date:
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|---------------------|------|---------------|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|-------------------|------------------|---------------|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
CONTENTS
0 Update and review summary
0.1 Updates
0.2 Reviews
1 Title
2 Scope
3 Safety and Environment
4 Definitions
5 Principle
6 Reagents and Materials
7 Apparatus and Equipment
8 Sample preparation
9 Calibration
10 Quality control
11 Procedure
12 Calculations
13 Reporting procedures including expression of results
14 Normative references
15 Method performance
1 Title
Determination of the elemental composition of precious metal-containing products by ICP-MS.
2 Scope
This procedure is intended for determining titanium, nickel, copper, selenium, molybdenum, ruthenium, rhodium, palladium, silver, tin, antimony, tellurium, barium, tungsten, iridium, platinum, gold and lead concentrations in the range from $1 \cdot 10^{-4}$ to $1 \cdot 10^{-2}$ weight %, using the method of mass spectrometry with inductively coupled plasma.
3 Safety and Environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
*ICP-MS*: method of mass spectrometry with inductively coupled plasma.
*Error (regarding a single analysis result)*: difference between a test result and the accepted reference value.
*Error index “Δ”*: limits of the error associated with a test result determined under reproducibility conditions with the stipulated probability.
*Precision*: closeness of agreement between independent test results obtained under stipulated conditions.
*Standard Deviation*: measure of how values are dispersed about a mean in a distribution of values.
*Repeatability*: precision under repeatability conditions, i.e. conditions where independent test results are obtained with the same method on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time.
*Repeatability Standard Deviation*: standard deviation of test results obtained under repeatability conditions.
*Repeatability Limit “r”*: value less than or equal to which the absolute difference between two test results obtained under repeatability conditions may be expected to be with 95% probability.
*Reproducibility*: precision under reproducibility conditions, i.e. conditions where test results are obtained with the same method on identical test items in different laboratories with different operators using different equipment.
*Reproducibility Standard Deviation*: standard deviation of test results obtained under reproducibility conditions.
Reproducibility Limit “R”: value less than or equal to which the absolute difference between two test results obtained under reproducibility conditions may be expected to be with 95% probability.
Reference Material (RM): material or substance of the subject for analytical testing sufficiently homogeneous regarding one or several reliably determined characteristics to be used for the measurement method assessment.
Calibration function: functional relationship relating the measured signal intensity to the analyte quantity.
5 Principle
The method is based on ionization of the tested substance in the inductively coupled plasma and detection of the generated ions using a mass spectrometry method.
Inductively coupled plasma (ICP) – an argon plasma of high temperature created by a high-frequency alternating electric field with the help of an external inductor. The solution for analysis is injected into the plasma in the form of an aerosol. In the plasma, the atoms of the elements contained in the solution are ionized, generating free ions. These ions are fed into the mass spectrometer through a special interface.
In the mass spectrometer, the ions are separated on the basis of their mass-to-charge ratio and counted by an ion detector. The measured signal received by the detector is proportional to the concentration of isotopes of the determined elements.
6 Reagents and Materials
- 99.996% Gaseous argon.
- De-ionized water with a specific resistance of 18 MΩ·cm.
- Ultra-purity grade hydrochloric acid, 15 vol.% solution.
- Standard solutions of the elements to be analyzed with mass concentration of 1000 µg/ml.
7 Apparatus and Equipment
- Inductively Coupled Plasma Mass Spectrometer with computer controlled operating and data handling system.
- Adjustable pipette with graduation marks at 1.0-5.0 ml.
- 25 ml volumetric flasks.
8 Sample preparation.
Executed in accordance with Protocol CIP 2.
9 Calibration
Prepare calibration solutions by dilution of the standard samples of mass concentration 1000 µg/ml on the day of use. Concentrations of the determined elements are listed in Table 1.
Table 1
Mass concentration of the determined elements in calibration solutions
| Calibration solution No. | Element mass concentration, µg/ml |
|--------------------------|----------------------------------|
| 0 – ‘calibration blank’ | 0 |
| 3 | 0,10 |
| 4 | 0,010 |
| 5 | 0,0010 |
Preparation of calibration solution No. 3:
Pipet 2.5 ml of calibration solution No. 2 (see Protocol 3) into a 25 ml volumetric flask. Then, add the diluted hydrochloric acid solution (15 vol.%) to fill the flask to the mark.
Preparation of calibration solution No. 4:
Pipet 2.5 ml of calibration solution No. 3 into a 25 ml volumetric flask. Then, add the diluted hydrochloric acid solution (15 vol.%) to fill the flask to the mark.
Preparation of calibration solution No. 5:
Pipet 2.5 ml of calibration solution No. 4 into a 25 ml volumetric flask. Then, add the diluted hydrochloric acid solution (15vol.%) to fill the flask to the mark.
The same diluted hydrochloric acid (15 vol.%) which was used for preparation of the calibration solutions is used as a ‘calibration blank’.
Calibrate the spectrometer using solutions No. 3, 4, 5 and ‘calibration blank’. Measure the ‘calibration blank’ first and then the calibration solutions in decreasing order of their numbers. Measure the intensity recorded at the test elements’ mass, and subtract the intensity of the corresponding mass for ‘calibration blank’.
Plot a calibration curve for each measured mass with the following axes: intensity (minus the ‘calibration blank’) versus mass concentration of the tested element in the calibration solution. Regression coefficients are automatically calculated by the least-squares method and saved in the computer memory until the next calibration.
Calibration curves should be linear and have a linear correlation coefficient of at least 0.999. If calibration curves do not satisfy this condition, spectrometer calibration must be repeated.
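A minimal sketch of the least-squares fit and linearity check described above (pure Python for illustration; the instrument software normally does this automatically):

```python
# Fit intensity vs. concentration by least squares and require a linear
# correlation coefficient |r| >= 0.999, as stated in section 9.
import math

def calibrate(conc, intensity):
    n = len(conc)
    mx, my = sum(conc) / n, sum(intensity) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in intensity)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, intensity))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)           # linear correlation coefficient
    if abs(r) < 0.999:
        raise ValueError("calibration not linear enough; recalibrate")
    return slope, intercept
```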
10 Quality control
Quality control of analysis results must be conducted in accordance with the regulations of the ISO 5725 with the use of Reference Materials, close to the tested samples in their chemical composition. Also, the difference between test results and data of the Reference Materials must be smaller than the Error index “Δ”.
If unacceptable results are obtained, the cause of this condition must be found and corrective action taken. This may include realignment of the mass spectrometer or adjustment of the ICP operating conditions and must be followed by recalibration. If the repeated calibration does not provide acceptable quality conditions, a conclusion could be made that sample preparation was done incorrectly. In this case, the sample preparation procedure must be repeated in accordance with Protocol # 2.
Stability control of the Calibration Curves is also conducted after observation of 10 samples.
Calibration solutions are used for the stability control of the Calibration Curves. Mass concentration of determined elements in the Calibration solutions should be in the range of the measured mass concentrations.
Calibration Curves could be considered stable if the following condition is fulfilled:
\[
\frac{|C - C_K|}{C_K} \leq 0.05,
\]
(1)
Where:
- \(C_K\) – known element mass concentration of the Calibration solution, µg/ml;
- \(C\) – measured element mass concentration, µg/ml.
If condition (1) is not achieved, the spectrometer should be recalibrated.
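Condition (1) as a one-line helper (the function name is hypothetical): the curve is considered stable while the measured value of a calibration solution stays within 5 % of its known concentration.

```python
# Stability check for the calibration curve, condition (1):
# |C - C_K| / C_K <= 0.05

def curve_stable(c_measured, c_known):
    return abs(c_measured - c_known) / c_known <= 0.05
```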
Suitability evaluation of two multiple determinations is executed in the following manner.
The arithmetic mean of the results of two determinations executed on two single subsamples is accepted as the final result of the analysis, if the difference between them is within limits of the Repeatability Limit “\(r_2\)”.
If the absolute deviation between the results of two measurements exceeds “\(r_2\)”, one must obtain two more measurement results.
If, in this case, the difference between the biggest and the smallest of the 4 measured values is equal to or less than the critical range \(CR_{0.95,n=4}\) (calculated for a confidence level of \(P=95\%\)), then the arithmetic mean of the four measurements should be recorded as the final result.
If the difference between the biggest and the smallest of the four measured values is bigger than the critical range for four measurements, then the median of the four measurements, calculated in accordance with the following formula, should be recorded as the final result.
\[
\overline{X} = med\{X_1 < X_2 < X_3 < X_4\} = \frac{X_2 + X_3}{2},
\]
(2)
Where,
\(X_2\) – the second smallest result;
\(X_3\) – the third smallest result.
The deviation between the results of the initial and repeat analysis must not exceed the Reproducibility Limit R.
11 Procedure
11.1 Procedure for determination of the elemental composition.
Prepare the mass-spectrometer according to its Operation Manual.
For Inductively Coupled Plasma Mass Spectrometer Elan 6000 (Perkin Elmer, USA) the following working parameters are given as guidelines of typical operating
conditions. Daily operating conditions will vary slightly from these values in order to optimize instrumental response:
- ICP generator working frequency: 40 MHz;
- output capacity: 1,1 kW;
- plasma forming argon flux: 15 L/min.;
- transporting argon flux: 0,8 L/min.;
- cooling argon flux: 0,5 L/min.;
- measurement exposure for one isotope: 1 sec.;
- number of parallel measurements: 6.
The list of recommended isotopes is shown in Table 2\(^1\).
In the process of measurements, mutual influences between elements should be taken into consideration and, if necessary, a correction procedure should be applied.
Spectrometer calibration is done in accordance with § 9 of this Protocol.
During the analysis, inject blank solutions and solutions of tested samples into the spectrometer and measure the intensities of the determined elements at the selected masses, subtracting the intensity of the blank. Using the corresponding calibration curve, determine the mass concentration of each element in the tested subsample and record the value obtained.
### Table 2
**Recommended isotopes**
| Element | Mass-to-charge ratio, m/z | Element | Mass-to-charge ratio, m/z |
|---------|---------------------------|---------|---------------------------|
| Ti | 47 | Sn | 120 |
| Ni | 60 | Sb | 121 |
| Cu | 63 or 65 | Te | 126 |
| Se | 82 | Ba | 137 or 138 |
| Mo | 95 | W | 184 |
| Ru | 99 | Ir | 193 |
| Rh | 103 | Pt | 195 |
| Pd | 105 | Au | 197 |
| Ag | 107 | Pb | Σ 206, 207, 208 |
### 11.2 Procedure for identification of the source of a sample of unknown origin.
The procedure for interpretation of the results of the ICP-MS measurements depends to some extent upon the type of sample being tested and the forensic question to be answered. The most straightforward application is comparison of the element concentrations determined in a sample of questioned origin with the compositions of products in the RDB. A decision that the composition of the substance being tested corresponds to the composition of one specific product in the RDB can be made if the concentration of each element in the unknown substance measured using this protocol (taking into account the error index of the method) is within the variability range of the concentrations of that element in that product.
---
\(^1\) If the analyst uses different isotopes, the isotopes used must be listed in the analysis report.
In the case that the elemental composition coincides with the composition of a product in the RDB, it is necessary to specify this conclusion in the analysis report.
In accordance with Protocol 0, the identification of an unknown substance can be considered complete if a correspondence with a product in the RDB is determined on the basis of elemental (Protocols 3,4) and phase (Protocol 5) composition.
The element concentrations determined using this protocol may also be used to answer other questions of forensic significance. The concentrations of elements, particularly the distribution of PGMs, may be compared to world-wide databases to provide information concerning possible regions of origin for a sample. Some level of deconvolution of mixtures may be possible using the results of this protocol, when the composition of end members is known or can be estimated. Specific procedures for these and other similar interpretive evaluations cannot be provided in this analytical protocol, because they depend upon the specific case evaluations needed. The purpose of this protocol is to provide an analytical method that produces element concentrations of known accuracy and precision that can be utilized for answering a variety of questions of forensic interest.
12 Calculations
Weight % of the determined element is calculated using the following formula:
\[ X = \frac{C \cdot V}{M} \cdot 10^{-4}, \quad (5) \]
Where,
- \( C \) – mass concentration of the element determined using the calibration curve in \( \mu g/ml \);
- \( V \) – final volume of the sample solution (including all dilutions if operator had done them) in ml;
- \( M \) – weight of the subsample in g.
As the final result of the testing, either the arithmetic mean of two results or the median of four results of replicate determinations made on single subsamples is reported, depending on fulfillment of the conditions specified in § 10.
13 Reporting procedures including expression of results
Analysis results are recorded in a form required by the examining laboratory’s reporting protocol. In addition to the analysis results, the report must also include:
- date of the testing,
- information about the expert (education and professional qualification, length of service as an expert, position held),
- incoming sample information (origin of the sample; who performed the sampling, when, and how),
- the results of comparison of unknown substance composition with RDB (Does unknown sample composition match with composition of any product from RDB? With what specified product does it match?).
The number of significant figures in the analysis result (element concentration) should correspond to the number of significant figures according to the *Error index*.
### 14 Normative references
The Fitness for Purpose of Analytical Methods: A Laboratory Guide to Method Validation and Related Topics: 1998 (EURACHEM).
ISO 5725–1 through ISO5725-6 Accuracy (trueness and precision) of measurement methods and results. Part 1/Cor1:1998, Part 2/Cor1:2002, Part 3/Cor1: 2001, Part 4:1994, Part 5/Cor1:2005, Part 6/Cor1:2001.
ISO/IEC 17025:2005 General Requirements for the Competence of Testing and Calibration Laboratories.
### 15 Method performance
Method performance is demonstrated by calculation of the Accuracy, Repeatability and Reproducibility indexes according to the formulas of ISO 5725, using the statistical relationships established during development of this Procedure\(^1\); the results are shown in Tables 3, 4, and 5.
Calculations were based on the assumption that any uncorrected systematic error of the analysis is negligible.
\[
\Delta = 1,96 \sigma_R;
\]
\[
r_2 = Q(P,2)\sigma_r = 2,77 \sigma_r;
\]
\[
CR_{0,95,n=4} = Q(P,4)\sigma_r = 3,63 \sigma_r;
\]
\[
R = Q(P,2)\sigma_R = 2,77 \sigma_R;
\]
Where:
- \( \Delta \) - Error index;
- \( \sigma_r \) - Repeatability Standard Deviation;
- \( \sigma_R \) – Reproducibility Standard Deviation;
- \( r_2 \) – Repeatability Limit;
- \( R \) – Reproducibility Limit;
- \( CR_{0,95,n=4} \) – critical range for four multiple determinations.
---
\(^1\) Report on Scientific Research “Development and improvement of the RDB for OSC MMC “Norilsk Nickel” containing platinum group metals”. Moscow – “Forensic Institute FSS of Russia”, 2006, 43 pp.
Metrological characteristics for noble metals are given in tables 3, 4 and for the rest of the elements – in table 5 (top values are listed).
Table 3
Error index “Δ” for precious metals, mass %. (P=0.95)
| Content range, mass % | Ag | Au | Pt | Pd | Rh | Ir | Ru |
|----------------|--------|--------|--------|--------|--------|--------|--------|
| 0.00010 — 0.00020 | 0.00003 | 0.000029 | 0.00003 | 0.00003 | 0.00003 | 0.000020 | 0.000020 |
| 0.00020 — 0.00050 | 0.00007 | 0.00007 | 0.00007 | 0.00007 | 0.00006 | 0.00005 | 0.00005 |
| 0.00050 — 0.00100 | 0.00015 | 0.00010 | 0.00015 | 0.00015 | 0.00008 | 0.00015 | 0.00015 |
| 0.0010 — 0.0020 | 0.00021 | 0.00015 | 0.0003 | 0.0003 | 0.00015 | 0.00029 | 0.00029 |
| 0.0020 — 0.0050 | 0.0005 | 0.0003 | 0.0004 | 0.0004 | 0.0004 | 0.0005 | 0.0005 |
| 0.0050 — 0.0100 | 0.0010 | 0.0007 | 0.0007 | 0.0007 | 0.0007 | 0.0010 | 0.0010 |
### Table 4
**Values of repeatability limit $r_2$, critical range of repeated measurements $CR_{0.95,n=4}$, reproducibility limit $R$ for precious metals ($P = 0.95$)**
| Element | Mass, % | $r_2$ | $CR_{0.95}(4)$ | $R$ |
|-----|---------|-------|----------------|-----|
| Ag | 0.00010 — 0.00020 | 0.00003 | 0.000039 | 0.00004 |
| | 0.00020 — 0.00050 | 0.00007 | 0.00009 | 0.00010 |
| | 0.00050 — 0.00100 | 0.00015 | 0.00020 | 0.00021 |
| | 0.0010 — 0.0020 | 0.00021 | 0.00027 | 0.00029 |
| | 0.0020 — 0.0050 | 0.0005 | 0.0007 | 0.0007 |
| | 0.0050 — 0.0100 | 0.0010 | 0.0013 | 0.0015 |
| Au | 0.00010 — 0.00020 | 0.000029 | 0.00004 | 0.00004 |
| | 0.00020 — 0.00050 | 0.00007 | 0.00009 | 0.00010 |
| | 0.00050 — 0.00100 | 0.00010 | 0.00013 | 0.00014 |
| | 0.0010 — 0.0020 | 0.00015 | 0.00020 | 0.00021 |
| | 0.0020 — 0.0050 | 0.0003 | 0.0004 | 0.0005 |
| | 0.0050 — 0.0100 | 0.0007 | 0.0009 | 0.0010 |
| Pt | 0.00010 — 0.00020 | 0.00003 | 0.00004 | 0.00004 |
| | 0.00020 — 0.00050 | 0.00007 | 0.00009 | 0.00010 |
| | 0.00050 — 0.00100 | 0.00015 | 0.00020 | 0.00021 |
| | 0.0010 — 0.0020 | 0.0003 | 0.0004 | 0.0004 |
| | 0.0020 — 0.0050 | 0.0004 | 0.0005 | 0.0005 |
| | 0.0050 — 0.0100 | 0.0007 | 0.0009 | 0.0010 |
| Pd | 0.00010 — 0.00020 | 0.00003 | 0.00003 | 0.00004 |
| | 0.00020 — 0.00050 | 0.00007 | 0.00007 | 0.00010 |
| | 0.00050 — 0.00100 | 0.00015 | 0.00015 | 0.00021 |
| | 0.0010 — 0.0020 | 0.0003 | 0.0003 | 0.0004 |
| | 0.0020 — 0.0050 | 0.0004 | 0.0004 | 0.0005 |
| | 0.0050 — 0.0100 | 0.0007 | 0.0007 | 0.0010 |
| Rh | 0.00010 — 0.00020 | 0.00003 | 0.00003 | 0.00004 |
| | 0.00020 — 0.00050 | 0.00006 | 0.00007 | 0.00008 |
| | 0.00050 — 0.00100 | 0.00008 | 0.00011 | 0.00012 |
| | 0.0010 — 0.0020 | 0.00015 | 0.00019 | 0.00021 |
| | 0.0020 — 0.0050 | 0.0004 | 0.0005 | 0.0005 |
| | 0.0050 — 0.0100 | 0.0007 | 0.0009 | 0.0010 |
| Ir | 0.00010 — 0.00020 | 0.000020 | 0.000027 | 0.000029 |
| | 0.00020 — 0.00050 | 0.00005 | 0.00006 | 0.00007 |
| | 0.00050 — 0.00100 | 0.00015 | 0.00021 | 0.00020 |
| | 0.0010 — 0.0020 | 0.00030 | 0.00042 | 0.00039 |
| | 0.0020 — 0.0050 | 0.0005 | 0.0006 | 0.0007 |
| | 0.0050 — 0.0100 | 0.0013 | 0.0014 | 0.0014 |
| Ru | 0.00010 — 0.00020 | 0.000020 | 0.000027 | 0.000029 |
| | 0.00020 — 0.00050 | 0.00005 | 0.00006 | 0.00007 |
| | 0.00050 — 0.00100 | 0.00015 | 0.00021 | 0.00020 |
| | 0.0010 — 0.0020 | 0.00030 | 0.00042 | 0.00039 |
| | 0.0020 — 0.0050 | 0.0005 | 0.0006 | 0.0007 |
| | 0.0050 — 0.0100 | 0.0010 | 0.0013 | 0.0014 |
### Table 5
**Values of error index Δ (P = 0.95), repeatability limit $r_2$, critical range of repeated measurements $CR_{0.95,n=4}$ and reproducibility limit $R$ for base metals and contaminants (titanium, nickel, copper, selenium, molybdenum, tin, antimony, tellurium, barium, tungsten and lead)**
| Mass. % | ± Δ | $r_2$ | CR$_{0.95}(4)$ | R |
|---------|-------|---------|----------------|-------|
| 0.00010 — 0.00020 | 0.00003 | 0.00003 | 0.00003 | 0.00004 |
| 0.00020 — 0.00050 | 0.00007 | 0.00006 | 0.00007 | 0.00008 |
| 0.00050 — 0.00100 | 0.00010 | 0.00008 | 0.00011 | 0.00012 |
| 0.0010 — 0.0020 | 0.00021 | 0.00015 | 0.00019 | 0.00021 |
| 0.0020 — 0.0050 | 0.0005 | 0.0004 | 0.0005 | 0.0005 |
| 0.0050 — 0.0100 | 0.0010 | 0.0007 | 0.0009 | 0.0010 |
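As an illustration of how these limits are applied in practice, the Python sketch below checks duplicate determinations against the repeatability limit $r_2$ and four replicates against the critical range $CR_{0.95}(4)$. The limit values are taken from the 0.0010 — 0.0020 mass % row of Table 5; the measurement values themselves are invented for the example.

```python
# Illustrative acceptance checks against the repeatability limit r2 and
# the critical range CR_0.95(n=4). Limits are from Table 5 (base metals,
# 0.0010-0.0020 mass %); the measurement values are invented examples.

def within_repeatability(x1, x2, r2_limit):
    """Two parallel results agree if their absolute difference <= r2."""
    return abs(x1 - x2) <= r2_limit

def within_critical_range(results, cr_limit):
    """Four replicates agree if (max - min) <= CR_0.95(n=4)."""
    return max(results) - min(results) <= cr_limit

r2_limit = 0.00015   # Table 5, 0.0010-0.0020 mass % row
cr_limit = 0.00019

print(within_repeatability(0.00150, 0.00162, r2_limit))                    # True
print(within_critical_range([0.00150, 0.00162, 0.00145, 0.00158], cr_limit))  # True
```

If either check fails, the determinations would be repeated, in line with usual ISO 5725 practice.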
Determination of the phase composition of precious metal-containing products by XRD
Author:
Quality manager:
Authorisation:
Date:
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|--------------------|------|---------------|
| | | | | |
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|-------------------|------------------|---------------|
| | | | |
| | | | |
| | | | |
| | | | |
CONTENTS
0 Update and review summary
0.1 Updates
0.2 Reviews
1 Title
2 Scope
3 Safety and Environment
4 Definitions
5 Principle
6 Reagents and Materials
7 Apparatus and Equipment
8 Sample preparation
9 Apparatus calibration
10 Quality Control
11 Procedure
12 Calculation
13 Reporting procedures including expression of results
14 Normative references and manuals
15 Method performance
1 Title
Determination of the phase composition of precious metal-containing products by XRD.
2 Scope
This procedure is intended for determining the phase composition of substances, their compositional features and the source of origin of test samples by X-ray diffractometry.
3 Safety and Environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
*Accuracy (trueness)*: closeness of the agreement between the mean value achieved from the series of analysis results and the adopted true value.
*Reference Material (RM)*: material or substance one or more of whose property values are sufficiently homogeneous and well established to be used for the calibration of an apparatus, the assessment of a measurement method, or for assigning values to materials.
*Calibration Curve*: graphical representation of measuring signal as a function of quantity of analyte.
*Detection Limit*: lowest content of analyte that could be detected with the help of this particular method with 95% probability.
5 Principle
This method is based on the diffraction of X-rays by the test material’s crystal lattice. Using an X-ray diffraction spectrum, or diffractogram (location and intensity of spectral lines), one can determine the inter-plane distances in the lattices of test materials. By comparing them to reference values for various crystalline substances, the components of the samples under study can be identified.
This method allows determination of the phase composition of the substances and establishment of the differences between samples under analysis with the help of diffractogram appearance.
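The relationship between peak position and interplanar spacing described above follows Bragg's law, \( n\lambda = 2d\sin\Theta \). The sketch below computes \( d \) for a single peak, assuming Co K\(\alpha\) radiation (\(\lambda \approx 1.789\) Å, the radiation listed in paragraph 11); the peak angle is illustrative.

```python
import math

# Interplanar spacing from a diffraction peak via Bragg's law:
# n * lambda = 2 * d * sin(theta). The Co K-alpha wavelength (~1.789 A)
# matches the radiation quoted in section 11; treat the exact value as
# an assumption of this sketch.

CO_KALPHA_ANGSTROM = 1.789

def d_spacing(two_theta_deg, wavelength=CO_KALPHA_ANGSTROM, order=1):
    """Return interplanar spacing d (angstrom) for a peak at 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

# A peak at 2-theta = 40 degrees corresponds to d of about 2.62 angstrom
print(round(d_spacing(40.0), 2))  # 2.62
```

The resulting \( d \) values are what get compared against the reference interplanar spacings in the ICDD database.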
6 Reagents and Materials
- Technical distilled ethyl alcohol (96%).
7 Apparatus and Equipment
- X-Ray Diffractometer with computer controlled operating and data handling system.
- A toolset for preparing flat powder samples:
- Corundum or agate mortar and pestle;
- Polished glass plate to press the sample into the measuring cuvette;
- Blade to remove the surplus sample material from the cuvette surface.
- ICDD computer database of reference spectra and search system.
8 Sample preparation
Place the sample into a mortar and grind it into a homogeneous paste, adding some alcohol. Place the paste in the cuvette and press it with a polished glass plate. The sample top surface and the working surface of the cuvette must be on the same level. Cut off the surplus sample material with a blade. The surface area of the sample shall be not less than 1 cm$^2$.
9 Apparatus calibration
Apparatus calibration must be conducted daily before the start of work and also after every goniometer adjustment. Use a finely ground and annealed powdered sample of $\alpha$-quartz as a standard material for apparatus calibration. For calibration, scan the goniometer over the angle range from 15 to 100° (2$\Theta$). Record the angular positions of the analytical lines, their intensities and half-widths. The maximum allowed deviation of the angular positions of the X-ray diffraction lines from their true values is ±0.05° (2$\Theta$). If this condition is not fulfilled, the goniometer must be adjusted and the calibration repeated.
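The acceptance criterion above can be sketched as a simple tolerance check. Note that the \(\alpha\)-quartz reference positions below are placeholders for illustration, not certified values.

```python
# Sketch of the calibration acceptance check of section 9: every measured
# line position must lie within +/-0.05 degrees 2-theta of its reference.
# The alpha-quartz reference angles below are placeholders, not certified data.

TOLERANCE_2THETA = 0.05

def calibration_ok(measured, reference, tol=TOLERANCE_2THETA):
    """True if every measured 2-theta position deviates by <= tol degrees."""
    return all(abs(m - r) <= tol for m, r in zip(measured, reference))

reference = [24.30, 31.20, 42.70]   # placeholder reference positions, deg 2-theta
measured  = [24.33, 31.18, 42.68]   # placeholder measured positions

print(calibration_ok(measured, reference))  # True: all deviations <= 0.05
```

A `False` result would trigger the goniometer adjustment and repeat calibration described above.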
10 Quality Control
Quality control is realized by parallel measurements of two probes under repeatability conditions. Complete qualitative matching of all lines in the two diffractograms (±0.05° 2$\Theta$) should be achieved, i.e. the number and angular positions of the X-ray diffraction lines should coincide. The relative intensities of any three reference diffraction lines should differ by no more than 10%. If these conditions are not fulfilled, preparation of two probes and scanning are repeated. If after a second attempt the results are still unsatisfactory, an operational examination of the apparatus should be conducted and any fault cleared.
11 Procedure
Analyze two replicate probes of each sample prepared in accordance with paragraph 8. Align the diffractometer according to its Operation Manual.
Perform the scanning while the sample is rotating. The rotation speed shall correspond to the scanning speed so that a full revolution is made with a scanning step not exceeding 0.02° (2$\Theta$).
For the XPert-MPD (Philips, Holland) typical measurement parameters are as follows:
• Radiation: Co-K$\alpha$;
• Tube voltage: 40 kV
• Anode current: 45 mA
• Primary beam: 1st slot width – 10 mm, 2nd slot width – 1 mm
• Secondary beam: slot width – 0.25 mm, detector slot width - 0.1 mm
• Scan range: 15 - 100° (2$\Theta$)
12 Calculation
Phase composition determination
The phase composition of a substance is determined by comparing the diffractograms of the analyzed substance with the reference diffractograms in the ICDD database (using the system of automatic data processing and search). In this case, each diffraction peak is defined by its angular position \((2\Theta^\circ)\) or by its interplanar spacing (measured in Angstrom units) and by its relative intensity, normalized to the intensity of the most intense diffraction peak.
Fingerprinting method used in the analysis of a substance
In general, identification of the substance is done in accordance with the previous paragraph and is based on the analysis of its characteristic features revealed by the whole set of methods applied.
The "fingerprinting" method can be used by direct superposition of the diffractogram of the substance under study over diffractograms contained in the RDB. If the main peaks of the known substances from the RDB are not present in the substance under investigation, then no known products of the RDB are present in the sample within the method’s detection limit of 1–3% by weight.
If a part of the diffractogram of the analyzed substance matches a product in the RDB (within the limits of variability of the phase composition typical for the relative type of products), it is likely that this type of product is present in the sample tested.
In other cases the fingerprinting method can be used for comparing the diffractogram of the test substance with model diffractograms of mixed substances, if the diffractometer is supplied with the corresponding software.
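The superposition step can be illustrated as a peak-matching check within the ±0.05° 2\(\Theta\) tolerance used elsewhere in this procedure. The peak lists below are invented placeholders, not RDB data.

```python
# Hedged sketch of the "fingerprinting" comparison: a known product from
# the RDB is considered potentially present if each of its main peaks has
# a counterpart in the test diffractogram within +/-0.05 degrees 2-theta.
# Both peak lists are illustrative placeholders.

def peaks_match(rdb_peaks, sample_peaks, tol=0.05):
    """True if every main RDB peak matches some sample peak within tol."""
    return all(any(abs(p - s) <= tol for s in sample_peaks) for p in rdb_peaks)

rdb_main_peaks = [21.50, 35.10, 47.80]                     # placeholder RDB product
sample_peaks   = [18.20, 21.53, 35.08, 47.82, 60.10]       # placeholder test data

print(peaks_match(rdb_main_peaks, sample_peaks))  # True: product may be present
```

A `False` result corresponds to the case above where no known RDB product is present within the detection limit.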
13 Reporting procedures including expression of results
Analysis results are recorded in a form required by the examining laboratory’s reporting protocol. In addition to the analysis results, the report must also include:
- date of the testing;
- information about the expert (university qualification, expert specialization, length of service as an expert, position held);
- incoming sample information (source of the sample’s origin; who performed the sampling, when, and how).
14 Normative references and manuals
ICDD – The International Centre for Diffraction Data.
15 Method performance
The Complex Analytical Procedure XRD is not used as a quantitative method; therefore, indices of accuracy, repeatability and reproducibility are not applicable. For this method, performance is determined by the status of the XRD apparatus, as described in paragraph 9, “Apparatus calibration”.
The detection limit for crystalline phases is 1 to 3 %.
Determination of the elemental composition of microparticles of precious metal-containing products by scanning electron microscopy with X-ray microanalysis
Author:
Quality manager:
Authorisation:
Date:
0 Update and review summary
0.1 Updates
| # | Section | Nature of Amendment | Date | Authorisation |
|---|---------|---------------------|------|---------------|
| | | | | |
| | | | | |
| | | | | |
0.2 Reviews
| Review date | Outcome of Review | Next Review Date | Authorisation |
|-------------|-------------------|------------------|---------------|
| | | | |
| | | | |
| | | | |
CONTENTS
0 Update and review summary
0.1 Updates
0.2 Reviews
1 Title
2 Scope
3 Safety and Environment
4 Definitions
5 Principle
6 Reagents and Materials
7 Apparatus and Equipment
8 Sample preparation
9 Calibration
10 Quality Control
11 Procedure
12 Calculation
13 Reporting procedures including expression of results
14 Normative references and manuals
15 Method performance
1 Title
Determination of the elemental composition of microparticles of precious metal-containing products by scanning electron microscopy with X-ray microanalysis
2 Scope
This method is intended to identify the combination of microparticles of a product under testing by means of comparison of the microparticles’ elemental composition with data stored in the database.
3 Safety and Environment
The general analysis and safety requirements prescribed by national and local laws and regulations in force at the enterprise must be followed.
4 Definitions
Accuracy (trueness): closeness of the agreement between the mean value achieved from the series of analysis results and the adopted true value.
Analysis results error: deviation of the analysis result from the true value.
Reference Material (RM): material or substance one or more of whose property values are sufficiently homogeneous and well established to be used for the calibration of an apparatus, the assessment of a measurement, or for assigning values to materials.
Calibration function: functional relationship relating the measured signal intensity to the analyte quantity.
Detection limit: lowest content of analyte, which could be detected as being present with 95% probability using this particular method.
Microparticle: a particle with an average diameter of less than 100 µm.
5 Principle
5.1 Basic physics of the method
The method is based on the interaction between a scanning electron beam and a sample material. During the interaction of the electron beam with sample material, secondary electrons and X-ray emission are generated along with a variety of other signals.
Secondary electrons are emitted from the atoms occupying the surface of the sample directly exposed to the electron beam. Collection and display of these secondary electrons forms a readily interpretable image of the surface. The contrast of the image is determined by and displays the sample morphology.
The X-ray emission depends on the elemental composition of the analyzed material. Energy measurement of the characteristic X-ray emission permits the determination of the qualitative element composition. Measurement of the intensity of a characteristic line is used to calculate quantitatively the concentration of the associated element. Calculations of the elements’ concentrations are made with the use of physical models of interaction between the electron probe and sample material.
The diameter of the excitation area of the discriminating X-ray emission varies in the range from 1 µm to 9 µm, depending on the average atomic number of the substance under testing.
5.2 Principle of microparticles combination identification.
The identification of a product by its microparticle content is carried out by the comparison of the microparticles’ elemental compositions present in a product with data stored in the database (RDB).
Each product is represented in the database in the form of microparticle compositions grouped in several types. Each type is characterized by a definite set of elements and recorded in the form of a conventional formula, e.g.:
\[ \text{Pt}_{18-68} - \text{Pd}_{7-19} - \text{Ru}_{2-11} - \text{Rh}_{0.6-4} - \text{As}_{0-1} - \text{Se}_{3-6} - \text{Si}_{0-1} - \text{S}_{2-7} - \text{O}_{2-8} - \text{Ni}_{1-7} - \text{Cu}_{0.5-3} \]
Elements in this formula are ranked by decreasing informational importance for classification and identification. The indices reflect either intervals of element concentrations (in mass %) or ranges of relative integral intensities of analytical lines (in rel. %). Both forms of the formula are reported in the database.
In the evaluation of the information importance of the chemical elements the following considerations are taken into account:
- concentration of an element in the microparticle composition;
- whether this element is typical for the microparticles that form the material of a certain product;
- element ‘specificity’ – (for instance, such rare elements as Platinum, Palladium, Tellurium, Selenium, etc. are much more informative than widely occurring elements like Silicon, Aluminum, Oxygen, Iron).
By using the above criteria and measuring no less than 1000 microparticles, the number and relative content of microparticle types in a product are determined and stored in the RDB. The products are re-measured at regular intervals and the new data are added to the RDB; in this way, possible variation of the products over time is monitored.
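The comparison of a measured microparticle composition against the concentration ranges of one stored type can be sketched as follows. The ranges mirror the example formula above (the optional As and Si entries are omitted for brevity), the particle composition is invented, and the exact-set comparison is a simplification of the two-step qualitative/quantitative matching described in paragraph 11.2.

```python
# Illustrative match of one microparticle against one RDB type expressed
# as per-element concentration ranges (mass %), simplified from the
# conventional formula of section 5.2. All numbers are placeholders.

RDB_TYPE = {
    "Pt": (18, 68), "Pd": (7, 19), "Ru": (2, 11), "Rh": (0.6, 4),
    "Se": (3, 6), "S": (2, 7), "O": (2, 8), "Ni": (1, 7), "Cu": (0.5, 3),
}

def matches_type(particle, type_ranges):
    """A particle matches a type if it contains exactly the type's elements
    and every concentration falls inside the stored range."""
    if set(particle) != set(type_ranges):
        return False
    return all(lo <= particle[el] <= hi for el, (lo, hi) in type_ranges.items())

particle = {"Pt": 45.0, "Pd": 12.0, "Ru": 5.0, "Rh": 1.5,
            "Se": 4.0, "S": 3.0, "O": 5.0, "Ni": 3.0, "Cu": 1.0}
print(matches_type(particle, RDB_TYPE))  # True
```

In the full procedure this check would be repeated for every examined microparticle against every stored type.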
6 Reagents and Materials
- Technical, particle free distilled ethyl alcohol (96%).
7 Apparatus and Equipment
- Scanning Electron Microscope with Energy Dispersive Microanalyzer providing the identification of elements within the range from boron to uranium and with spectral resolution no worse than 135 eV for Mn-K\(\alpha\) at a count rate of 1000 counts per second;
- Ultrasonic disperser with a frequency of 20–33 kHz;
- Adjustable dosing pipettes of (5-40) \(\mu\)L and of (200-1000) \(\mu\)L;
- Sample mounts (stubs, studs) for scanning electron microscope;
- Disposable carbon conductive double sided adhesive tapes for scanning microscope sample mounts;
- Set of reference materials for EDS calibration;
- Optical binocular microscope with magnification from 20 to 100 times.
8 Sample preparation
Separate a probe weighing 0.5 g from the powder sample by repeated quartering and place it into a disposable 1.5 mL plastic test tube. Add 1 mL of ethyl alcohol and mix the contents using the ultrasonic disperser for 5 min.
During this ultrasonic mixing, transfer 0.2 mL of the suspension into a clean test tube. Add ethyl alcohol to make a total volume of 1 mL and again ultrasonically mix the contents of this tube. Then during this mixing, transfer 10-20 µL of the alcohol suspension using a micropipette to a scanning electron microscope sample stub covered with a conductive carbon film. The microparticles must form a monolayer on the sample holder. Use an optical microscope to control the process of suspension transfer to the sample stub. If the particles do not form a monolayer, it is necessary to repeat the process of probe preparation on a newly prepared sample stub.
If the mass of the available sample is less than 0.5g, the amount of alcohol can be decreased pro rata, providing that particles form a monolayer on the scanning microscope sample stub.
Dry the sample holders with the monolayer of microparticles at room temperature and then place them in the electron microscope chamber.
9 Calibration
Prior to beginning an analysis, the operational condition of the scanning electron microscope and the X-ray microanalyzer must be verified. This includes checking for system peaks, the accuracy of magnification, and the spectral energy calibration and resolution. Energy calibration of the Energy Dispersive Microanalyzer is performed every 2 hours of equipment operation using a “Set of reference materials for X-ray microanalysis” and in accordance with the Operating Manual.
10 Quality Control
Quality control of the analytical results is performed in accordance with ISO 5725 requirements, using natural minerals as Reference Materials. Recommended reference minerals are wollastonite, zircon and rhodonite.
Quantitative analysis accuracy is considered satisfactory when the following conditions are met:
\[
\frac{|C - C_k|}{C_k} \leq 0.05 \tag{1}
\]
where:
– \( C_k \) is the standardized value of the element mass concentration (more than 1%) in the reference mineral,
– \( C \) is the measured average (\( n=5 \)) element mass concentration in the reference mineral.
If condition (1) is not achieved, the microanalyzer must be recalibrated (see paragraph 9).
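Condition (1) can be implemented directly. The certified value and the five measurements below are illustrative, not certified reference data.

```python
# Sketch of the quantitative-accuracy check of condition (1): the mean of
# five measurements of a reference mineral must lie within 5% relative of
# the certified value. All numbers are illustrative placeholders.

def accuracy_ok(measurements, certified):
    """Return True if |mean - certified| / certified <= 0.05 (condition 1)."""
    mean = sum(measurements) / len(measurements)
    return abs(mean - certified) / certified <= 0.05

certified_si = 24.0                              # placeholder certified mass %
measured_si = [23.6, 24.3, 24.1, 23.9, 24.5]     # n = 5 replicate measurements

print(accuracy_ok(measured_si, certified_si))  # True
```

A `False` result corresponds to the recalibration path described above (see paragraph 9).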
11 Procedure
11.1 Procedure for determination of the elemental composition and morphology of microparticles
Prepare the scanning electron microscope and energy dispersive microanalyzer according to their Operation Manuals.
Suggested measurement parameters are as follows:
- Accelerating potential: 20 kV
- Field of vision: 50 – 300 µm
- Spectrum integral intensity: ≥ 300000 counts
- Spectral resolution ≤ 135 eV for Mn-Kα
- Elements determined: from Oxygen to Uranium
- Range of Concentrations determined: from 0.2 to 100%
Arbitrarily select the investigated area on the sample stage with a size corresponding to a field of vision 100x100 µm. Microparticles should form a monolayer on the sample stage surface. During measurements, the electron beam should be focused on the center of a microparticle. Examine all microparticles of size larger than 0.5 µm in the field of vision individually. Then move to another field of vision, which does not overlap with the previously examined. Continue this operation until 1000 microparticles have been examined.
The morphology of the microparticles in the examined sample should also be recorded. Obtain images of the most typical and unusual microparticles. Record the correspondence of element composition with morphology of the microparticles.
11.2 Procedure for identification of microparticle composition
When analyzing an unknown substance the elemental composition of each of a minimum of 1000 microparticles must be determined.
In the electronic version of the RDB, the match between the elemental composition of an analyzed microparticle in the unknown substance and the microparticle elemental compositions present in the RDB is made automatically on the basis of its qualitative composition. When only a hard copy of the database is available, the comparison of the elemental composition of analyzed microparticles to the elemental composition of a certain type of microparticles present in the RDB is done as follows:
- Visually compare the qualitative composition of an analyzed microparticle with the range of qualitative compositions of all the types of microparticles present in the hard copy of the RDB. If the composition of the examined microparticle fits with a particular type of microparticle present in the RDB, it is assumed that the examined particle belongs to this particular type of microparticle.
- In the second step, compare the quantitative composition of the examined microparticle with the compositional range given in the RDB for the particular type of microparticle. When the composition of the examined microparticle is within the compositional range limits given in the RDB for this particular type of microparticle, it is concluded that the examined microparticle belongs to this particular type of microparticle.
Comparison of the elemental composition of each of the examined microparticles in the examined unknown substance with the compositional ranges of the stored types of microparticles will give one of the following three possible results:
- The examined microparticle corresponds to a type of microparticle present in only one particular product. The examined microparticle is attributed to that product.
- The examined microparticle corresponds to a type of microparticle present in several products. The examined microparticle is attributed to all of these products.
- The examined microparticle does not correspond to any microparticle composition present in the RDB. The examined microparticle is recorded as unclassified.
With the above two steps we can conclude that Norilsk Nickel precious metal containing products are present in the examined unknown substance when:
- All types of microparticles forming one specific product in the RDB are present in the examined unknown substance.
- The relative weight of these different types of microparticles differs no more than the product variability given in the RDB.
If the measured microparticle compositions do not fulfill the above criteria, it is concluded that no products specified in the RDB are present in the examined unknown sample.
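The two acceptance criteria above can be sketched as a simple decision function. The type names, variability ranges and measured fractions below are invented placeholders, not RDB contents.

```python
# Hedged sketch of the product-presence decision: a product in the RDB is
# deemed present only if (a) all of its microparticle types are found in
# the sample and (b) each type's relative weight lies within the
# variability stored in the RDB. All names and numbers are placeholders.

def product_present(sample_type_fractions, rdb_product):
    """sample_type_fractions: {type: relative weight, %} from >=1000 particles.
    rdb_product: {type: (min %, max %)} stored variability for the product."""
    for ptype, (lo, hi) in rdb_product.items():
        if ptype not in sample_type_fractions:
            return False          # criterion (a): a required type is missing
        if not (lo <= sample_type_fractions[ptype] <= hi):
            return False          # criterion (b): weight outside variability
    return True

rdb_product = {"type_1": (40, 60), "type_2": (20, 35), "type_3": (10, 25)}
sample = {"type_1": 52.0, "type_2": 28.0, "type_3": 15.0, "unclassified": 5.0}

print(product_present(sample, rdb_product))  # True
```

A `False` result corresponds to the conclusion above that no products specified in the RDB are present in the examined sample.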
12 Calculation
At the first stage of processing of each obtained spectrum, qualitative element analysis is conducted on the basis of the energies of the characteristic lines. If characteristic lines overlap, a best estimate of the elements present in the microparticle is checked with the help of element composition calculation (using the standard software of the analyzer). An element is considered present if its calculated concentration is above the detection limit for that element.
Quantitative proportions of the detected elements are calculated using well-accepted software supplied with the analyzer. For each type of microparticle in the examined unknown substance, the range of its measured proportion is given (see paragraph 5.2).
13 Reporting procedures including expression of results
Analysis results are recorded in a form required by the examining laboratory’s reporting protocol. In addition to the analysis results, the report must also include:
- date of the testing;
- information about the expert (university qualification, expert specialization, length of service as an expert, position held);
- incoming sample information (source of the sample’s origin; who performed the sampling, when, and how).
14 Normative references and manuals
- ISO 5725–1 through ISO5725-6 Accuracy (trueness and precision) of measurement methods and results. Part 1/Cor1:1998, Part 2/Cor1:2002, Part 3/Cor1: 2001, Part 4:1994, Part 5/Cor1:2005, Part 6/Cor1:2001.
- ISO/IEC 17025:2006 General Requirements for the Competence of Testing and Calibration Laboratories.
- The Fitness for Purpose of Analytical Methods: A Laboratory Guide to Method Validation and Related Topics (EURACHEM/CITAC Guide), 1998.
15 Method performance
Relative error does not exceed 15% for elements from sodium to uranium and 30% for elements from oxygen to fluorine, except in cases where there are peak overlaps for which correction cannot be made.
For spectra with 300000 counts integral intensity the element detection limits are:
- from oxygen to fluorine - 5 percent by weight;
- from sodium to uranium - 0.2 percent by weight.
Appendix 9. Protocols (CIP 0 - CIP 6)
Ferumoxytol-enhanced magnetic resonance imaging methodology and normal values at 1.5 and 3T
Colin G. Stirrat\textsuperscript{1,*}, Shirjel R. Alam\textsuperscript{1}, Thomas J. MacGillivray\textsuperscript{2,3}, Calum D. Gray\textsuperscript{2,3}, Rachael Forsythe\textsuperscript{1}, Marc R. Dweck\textsuperscript{1}, John R. Payne\textsuperscript{4}, Sanjay K. Prasad\textsuperscript{5}, Mark C. Petrie\textsuperscript{4}, Roy S. Gardner\textsuperscript{4}, Saeed Mirsadraee\textsuperscript{2}, Peter A. Henriksen\textsuperscript{1}, David E. Newby\textsuperscript{1,2} and Scott I. K. Semple\textsuperscript{1,2}
**Abstract**
**Background:** Ultrasmall superparamagnetic particles of iron oxide (USPIO)-enhanced magnetic resonance imaging (MRI) can detect tissue-resident macrophage activity and identify cellular inflammation. Clinical studies using this technique are now emerging. We aimed to report a range of normal R2* values at 1.5 and 3 T in the myocardium and other tissues following ferumoxytol administration, outline the methodology used and suggest solutions to commonly encountered analysis problems.
**Methods:** Twenty volunteers were recruited: 10 imaged each at 1.5 T and 3 T. T2* and late gadolinium enhanced (LGE) MRI was conducted at baseline with further T2* imaging conducted approximately 24 h after USPIO infusion (ferumoxytol, 4 mg/kg). Regions of interest were selected in the myocardium and compared to other tissues.
**Results:** Following administration, USPIO was detected by changes in R2* (1/T2*) from baseline at 24 h in myocardium, skeletal muscle, kidney, liver, spleen and blood at 1.5 T, and in myocardium, kidney, liver, spleen, blood and bone at 3 T ($p < 0.05$ for all). Myocardial changes in R2* due to USPIO were $26.5 \pm 7.3$ s$^{-1}$ at 1.5 T and $37.2 \pm 9.6$ s$^{-1}$ at 3 T ($p < 0.0001$ for both). The tissues showing the greatest ferumoxytol enhancement were those of the reticuloendothelial system: the liver, spleen and bone marrow ($216.3 \pm 32.6$, $336.3 \pm 60.3$ and $69.9 \pm 79.9$ s$^{-1}$; $p < 0.0001$, $p < 0.0001$ and $p =$ ns respectively at 1.5 T, and $275.6 \pm 69.9$, $463.9 \pm 136.7$ and $417.9 \pm 370.3$ s$^{-1}$; $p < 0.0001$, $p < 0.0001$ and $p < 0.01$ respectively at 3 T).
**Conclusion:** Ferumoxytol-enhanced MRI is feasible at both 1.5 T and 3 T. Careful data selection and dose administration, along with refinements to echo-time acquisition, post-processing and analysis techniques are essential to ensure reliable and robust quantification of tissue enhancement.
**Trial registration:** ClinicalTrials.gov Identifier - NCT02319278. Registered 03.12.2014.
**Keywords:** Cardiac, MRI, Inflammation, USPIO
**Background**
Iron oxide nanoparticles are a class of magnetic resonance imaging (MRI) contrast agents that are generating interest as a method of detecting tissue inflammation. These nanoparticles were initially used for gastrointestinal, reticuloendothelial system and lymph node imaging [1–3], and subsequently in hepatic and cardiac imaging [4–7]. More recently, however, it is their use as an MRI contrast agent for detecting tissue-resident macrophages that is driving emerging clinical applications [8–15].
T2* MRI has been successfully used for over a decade in diagnosing and grading severity of iron accumulation in transfusion-dependent thalassaemia major, and has been instrumental in guiding therapy that improves prognosis, and allows serial disease monitoring [16, 17]. T2* MRI in the assessment of iron accumulation is easily quantifiable, well validated, highly reproducible, clinically robust, and is achievable in a single breath hold [18–22].
Ultrasmall superparamagnetic particles of iron oxide (USPIO) consist of an iron oxide core surrounded by a carbohydrate or polymer coating. These particles can extravasate through damaged capillaries, where they are engulfed and concentrated by tissue-resident macrophages [23]. Gradient echo T2*-weighted (T2*W) sequences are highly sensitive to magnetic field inhomogeneities such as susceptibility artifacts due to the presence of iron, including USPIO. Accumulation of USPIO in macrophages can be visualized using T2*W MRI [8, 9] and quantified by calculating the T2* relaxation time and observing its reduction due to the presence of iron. Thus USPIO-enhanced MRI can detect tissue-resident macrophage activity and identify localized cellular inflammation within tissues.
In the present study, we aimed to observe and quantify the distribution of ferumoxytol enhancement following intravenous administration at 1.5 and 3 T MRI and to establish a range of normal values for healthy myocardium and other tissues. We also aimed to develop our methodology and describe commonly encountered problems in T2* image analysis of USPIO.
**Methods**
This was an open-label observational multi-centre cohort study using human volunteers recruited as part of a larger trial, recruiting patients with cardiac inflammation. The study was performed in accordance with the declaration of Helsinki, the approval of the Scotland A research ethics committee, and the written informed consent of all participants.
**Subjects**
Participants were aged over 18 years. Exclusion criteria were contraindication to MRI or ferumoxytol infusion, any systemic inflammatory comorbidity (e.g. rheumatoid arthritis), renal failure (estimated glomerular filtration rate <30 mL/min), pregnancy, breastfeeding, and women of child-bearing age not using reliable contraception.
**Magnetic resonance imaging**
MRI was performed using 3 T and 1.5 T scanners (Magnetom Verio and Avanto respectively, Siemens Healthcare GmbH, Erlangen, Germany), with dedicated cardiac array coils. All images were acquired using electrocardiogram-gated breath-hold imaging. Routine steady state free precession (TrueFISP) sequences were used to acquire long-axis and short-axis images of the heart. Standard cardiac slices (6-mm thickness with a 4-mm gap) and 8 echo times (2.1–17.1 ms range) with a matrix size of $256 \times 115$ were acquired in order to generate T2* maps. The in-plane resolution differed as required for larger or smaller subjects; generally, a field of view of $400 \times 300$ mm was used with an in-plane resolution of $2.6 \times 1.6$ mm. T2* relaxation maps were generated before and approximately 24 h after administration of USPIO.
Immediately after the baseline T2* and SSFP cine imaging, breath-held inversion enhancement images were acquired following an intravenous administration of gadolinium contrast medium (0.1 and 0.15 mmol/kg at 3 T and 1.5 T respectively; Gadovist, Bayer Plc, Germany).
**Table 1 Participant characteristics**
| | 1.5 T | 3 T |
|------------------------|-------|-------|
| Number | 9 | 10 |
| Male/Female | 3/6 | 4/6 |
| Age (years) | 52 [45.5–61.5] | 50 [45.25–53] |
| Body-mass Index (kg/m²) | 22.9 [20.1–26.9] | 25.9 [22.5–29.4] |
| Ejection Fraction (%) | 63.6 ± 4.9 | 61.1 ± 4.1 |
N (%), mean ± SD, or median [interquartile range]
Optimal inversion time (TI) was determined on a slice-by-slice basis using standard late-enhancement TI-scout protocols. The inversion-recovery late-enhancement short-axis slices were acquired using similar slice positions to the myocardial T2* imaging. The T2* acquisitions also included imaging of the liver, spleen and spine to allow quantification of USPIO accumulation within organs of the reticuloendothelial system.
**USPIO**
Intravenous infusion of USPIO (ferumoxytol, 4 mg/kg; Rienso®, Takeda Italia, Italy) was performed immediately after the baseline magnetic resonance scan, over at least 15 min, at a concentration of 2–8 mg/mL diluted in 0.9% saline or 5% dextrose. Hemodynamic monitoring was conducted throughout.
**Study protocol**
Volunteers received 2 MRI scans approximately 24 h apart (Fig. 1).
**Image analysis**
All T2*-weighted multi-gradient-echo images for each patient were analyzed using Circle CVI software (Circle CVI42, Calgary, Canada). Regions of interest (ROI) were drawn in the heart using standard cardiac segmentation [24], and panmyocardial values were averaged over segments 1–16. Further ROI were drawn in skeletal muscle, kidney, liver, spleen, blood pool (from the LV cavity) and bone marrow.
An experimentally determined threshold used in previous work [8] for the coefficient of determination ($r^2 > 0.85$) was used to exclude data that did not have an acceptable exponential decay when signal intensity (SI) was plotted against echo time. The inverse of the mean T2* (R2*) for each ROI was then calculated to assess the uptake of USPIO, where the higher the value, the greater the USPIO accumulation.
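The fit-and-threshold procedure just described can be sketched in a few lines. This is an illustrative reimplementation, not the Circle CVI pipeline: the echo times follow the acquisition protocol above, while the signal values and function names are invented for the example.

```python
import math

def fit_t2star(echo_times_ms, signal, r2_threshold=0.85):
    """Fit SI = S0 * exp(-TE / T2*) by linear regression on log(SI).

    Returns (t2star_ms, r2star_per_s, r_squared), or None when the fit
    fails the coefficient-of-determination threshold used to reject
    decays that are not acceptably exponential.
    """
    x = echo_times_ms
    y = [math.log(s) for s in signal]           # linearise the decay
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                           # slope = -1 / T2*
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1.0 - ss_res / ss_tot
    if r_squared < r2_threshold:
        return None                             # exclude poor exponential fits
    t2star_ms = -1.0 / slope
    r2star_per_s = 1000.0 / t2star_ms           # R2* = 1/T2*, in s^-1
    return t2star_ms, r2star_per_s, r_squared

# Synthetic ROI with T2* = 30 ms, sampled at the study's 8 echo times
tes = [2.1, 4.2, 6.4, 8.5, 10.7, 12.8, 15.0, 17.1]
signal = [1000.0 * math.exp(-te / 30.0) for te in tes]
t2, r2star, r2 = fit_t2star(tes, signal)
```

ROIs whose fit fails the $r^2 > 0.85$ criterion are excluded exactly as described above; the surviving R2* values quantify USPIO accumulation.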
Late gadolinium enhancement (LGE), ventricular volume and functional analyses were performed using Circle CVI software (Circle CVI42, Calgary, Canada). T2* data were collected immediately prior to USPIO administration. USPIO-enhanced T2* data were collected 24–25 h following ferumoxytol administration.
**Statistical analysis**
All statistical analysis was performed with GraphPad Prism, version 6 (GraphPad Software, San Diego, CA). To assess uptake of USPIO in tissues following a single administration, R2* increases from baseline to 24 h post-USPIO were compared using repeated-measures one-way ANOVA. Statistical significance was defined as two-sided $p < 0.05$.
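With only two repeated measures (baseline and 24 h), a repeated-measures one-way ANOVA reduces to a paired t-test ($F = t^2$). As a sketch of that comparison (the study used GraphPad Prism; the subject values below are invented for illustration):

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre- vs post-USPIO R2* in the same subjects."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return mean_d, t, n - 1   # mean change, t statistic, degrees of freedom

# Invented panmyocardial R2* values (s^-1) for three subjects
pre = [30.0, 32.0, 34.0]
post = [60.0, 61.0, 65.0]
mean_change, t_stat, dof = paired_t(pre, post)
```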
**Results**
Twenty volunteers were recruited in total (10 at 1.5 T, 10 at 3 T). Forty MRI scans and 20 infusions of ferumoxytol were completed over the course of the study. Data from one participant at 1.5 T were removed due to the presence of LGE (which was included
**Table 2 Normal values**
| | 1.5 T Pre-USPIO R2*(s⁻¹) | 1.5 T Post-USPIO R2*(s⁻¹) | 1.5 T Change R2*(s⁻¹) | 3 T Pre-USPIO R2*(s⁻¹) | 3 T Post-USPIO R2*(s⁻¹) | 3 T Change R2*(s⁻¹) |
|----------------------|--------------------------|---------------------------|-----------------------|------------------------|-------------------------|-----------------------|
| Panmyocardial average| 33.5 ± 5.4 | 60.5 ± 7.2 | 26.5 ± 7.3 | 46.9 ± 4.1 | 84.2 ± 12.4 | 37.2 ± 9.6 |
| Skeletal muscle | 34.7 ± 4.2 | 44.9 ± 4.7 | 10.2 ± 5.8 | 55.5 ± 17.1 | 59.8 ± 6.6 | 4.3 ± 16.3 |
| Kidney | 16.6 ± 2.0 | 81.2 ± 15.2 | 64.6 ± 16.1 | 43.5 ± 39.1 | 115.2 ± 28.1 | 71.8 ± 48.8 |
| Liver | 36.0 ± 7.2 | 252.3 ± 34.3 | 216.3 ± 32.6 | 65.3 ± 21.2 | 340.9 ± 57.8 | 275.6 ± 69.9 |
| Spleen | 22.0 ± 7.7 | 358.3 ± 59.5 | 336.3 ± 60.3 | 51.2 ± 21.1 | 515.1 ± 137.4 | 463.9 ± 136.7 |
| Blood | 11.3 ± 4.1 | 96.0 ± 26.6 | 84.7 ± 27.2 | 18.8 ± 5.3 | 91.5 ± 20.9 | 72.6 ± 18.3 |
| Bone | 84.4 ± 29.2 | 154.3 ± 62.0 | 69.9 ± 79.9 | 330 ± 168.7 | 747.9 ± 277.8 | 417.9 ± 370.3 |
Mean ± SD
in the cardiac MR protocol so that we could exclude volunteers with any detectable cardiac MR abnormalities according to standard cardiac MR protocols). All other volunteers that were included had structurally normal hearts. One participant was prescribed antihypertensive medication but had a normal cardiac MR study and was normotensive so the data was retained for analysis. Administration of ferumoxytol was well tolerated with no adverse reactions reported during or immediately after administration in any of the participants.
Participants were predominantly middle aged, with greater numbers of women in both groups (Table 1). There were no differences between 1.5 T and 3 T groups in BMI or ejection fraction at baseline.
A summary of results is shown in Table 2. At baseline, panmyocardial R2* values were greater at 3 T than at 1.5 T (46.9 ± 4.1 versus 33.5 ± 5.4 s$^{-1}$, Fig. 2, $p < 0.01$), as expected. Baseline R2* values were also greater at 3 T in bone ($p < 0.0001$), but no baseline differences between field strengths were seen in any other tissue (Fig. 3, $p > 0.05$ for all). USPIO increased panmyocardial R2* values at 24 h at both 1.5 T and 3 T ($p < 0.0001$ for both). Post-USPIO panmyocardial R2* values were again greater at 3 T than at 1.5 T, as expected (84.2 ± 12.4 versus $60.5 \pm 7.2$ s$^{-1}$, $p < 0.0001$). The panmyocardial change in R2* between baseline and 24 h post-USPIO was $26.5 \pm 7.3$ s$^{-1}$ at 1.5 T and $37.2 \pm 9.6$ s$^{-1}$ at 3 T ($p < 0.0001$ for both). Detectable increases in R2* were also observed at 24 h post-USPIO in skeletal muscle, kidney, liver, spleen and blood at 1.5 T, and in kidney, liver, spleen, blood and bone at 3 T (Fig. 3, $p < 0.05$ for all). BMI correlated with the panmyocardial R2* change due to USPIO contrast (Fig. 4; $r = 0.72$, $p < 0.001$).
**Fig. 3** Tissue R2* pre- and post-USPIO administration at 1.5 and 3 T. Following administration, USPIO was detected by an increase in R2* at 24 h in skeletal muscle, kidney, liver, spleen and blood at 1.5 T, and in kidney, liver, spleen, blood and bone at 3 T. **** = $p < 0.0001$, *** = $p < 0.001$, ** = $p < 0.01$, * = $p < 0.05$
**Discussion**
For the first time, we report a range of normal T2* values in the healthy human heart and other tissues 24 h after ferumoxytol administration at 1.5 and 3 T. We also report problems, solutions and guidance in ferumoxytol-enhanced T2* image analysis.
Following administration, USPIO is detectable by T2* imaging in the myocardium and other tissues at both 1.5 and 3 T. Tissues with small increases in R2* (less than the blood pool) are likely to represent detection of USPIOs within the intravascular space and include skeletal muscle (at 1.5 T only), myocardium and kidney. In contrast, R2* changes that are greater than the blood pool must be due to accumulation of USPIO, either through iron storage, uptake by macrophages or other phagocytes, or sequestered within tissue interstitium. In the absence of tissue biopsies, we cannot be certain, but as the most pronounced R2* changes were seen in the spleen, liver and bone marrow - organs of the reticuloendothelial system - it would appear likely that USPIO is incorporated quickly into tissue-resident phagocytes and macrophages.
Detection of USPIO enhancement in skeletal muscle at 1.5 T but not at 3 T is attributable to the generally noisier data seen across all tissues at 3 T. Because of the wider confidence intervals, a larger sample size would be required to detect the same mean change in R2*. The greater variation at 3 T is partly due to image artifact, but also to the lower T2* values at 3 T (USPIO produces a faster T2* decay at 3 T). With the same sampling echo times, there are fewer data points with which to construct the decay curve at 3 T than at 1.5 T, so the error in estimation also increases.
We chose to re-image participants 24 h post-USPIO because myocardial signal attenuation has been shown to be optimal at 24 h compared with later time points [8, 9]. In view of this, scanning appointments were generally separated by 25 h, and in practice this regime worked well for both participants and MRI planning. In line with previous work [8], we chose a weight-adjusted USPIO dose of 4 mg Fe/kg body weight. However, given that USPIO distributes predominantly to the organs of the reticuloendothelial system and the blood pool, and that blood volume does not increase linearly with weight, this may not be the optimal administration strategy. We found a correlation between BMI and myocardial R2* change, probably due to increased blood-pool USPIO concentration in those with higher BMI. We therefore suggest that a fixed-dose approach may also be appropriate, depending on the application.
Artifacts were commonly encountered with USPIO-enhanced T2* imaging and made data analysis challenging. Post-contrast artifacts at the blood-pool-to-myocardium interface were commonly seen and needed careful exclusion when selecting myocardial ROI (Fig. 5a). This limited the assessment of USPIO accumulation at the endocardium. Similarly, blooming artifacts from nearby organs with high iron or blood-pool USPIO content, such as lung and liver, commonly created signal deficits within the myocardium. In this situation, examination of the T2* decay curves and exclusion of echo times influenced by artifact aided decay-curve fitting (Fig. 5).
The advantage of MRI mapping techniques is that visual assessment and objective quantification can be made from the same image, and these techniques are now entering clinical practice. If USPIO-enhanced MRI is adopted clinically to detect tissue inflammation, it seems likely that T2* mapping would be used for image interpretation. Based on our experience, however, we would recommend caution in interpreting maps alone. Signal attenuation on a T2* map may be interpreted as tissue USPIO accumulation, but may instead be due to blooming artifact from nearby susceptibility effects; close examination of the T2* decay curve, and of individual echoes where possible, is suggested in order to distinguish accurately between tissue USPIO accumulation and artifact. In theory, setting an $r^2$ threshold as we did helps to exclude areas grossly affected by artifact. In practice, however, regions with a seemingly acceptable $r^2$ may still be influenced by artifact (Fig. 5). Manual exclusion of later echoes (influenced by artifact) from the curve may improve the $r^2$ (a measure of how well the data
points fit the curve); however, there is the danger that reducing the number of fitting points will in fact reduce the overall sampling accuracy. Clearly, automated software capable of detecting and excluding artifact would be advantageous. This could be achieved by excluding, or down-weighting, later echo times, especially data points at a large distance from the initial decay-curve trajectory [25, 26]. It should be noted that, as with all other MRI sequences, data heavily influenced by breathing or movement artifact are generally non-interpretable, and post-processing with automated T2* decay-curve-fitting software is unlikely to provide a remedy.
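The manual echo-exclusion workflow described above, and a crude version of the automated exclusion we suggest, can be sketched as follows. This is an illustrative toy (synthetic signal, invented thresholds), not the robust-fitting methods of [25, 26]:

```python
import math

def fit_loglinear(tes, sig):
    """Log-linear mono-exponential fit; returns (t2star_ms, r_squared)."""
    y = [math.log(s) for s in sig]
    n = len(tes)
    mx, my = sum(tes) / n, sum(y) / n
    sxx = sum((t - mx) ** 2 for t in tes)
    slope = sum((t - mx) * (yi - my) for t, yi in zip(tes, y)) / sxx
    b = my - slope * mx
    ss_res = sum((yi - (slope * t + b)) ** 2 for t, yi in zip(tes, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return -1.0 / slope, 1.0 - ss_res / ss_tot

def fit_dropping_late_echoes(tes, sig, min_points=4, target_r2=0.99):
    """Drop trailing echoes one at a time until the fit reaches target_r2,
    but never below min_points, since fewer fitting points reduces the
    overall sampling accuracy."""
    for keep in range(len(tes), min_points - 1, -1):
        t2star, r2 = fit_loglinear(tes[:keep], sig[:keep])
        if r2 >= target_r2:
            break
    return t2star, r2, keep

# Synthetic decay (T2* = 25 ms) with the last two echoes inflated by a
# blooming-like artifact, analogous to Fig. 5
tes = [2.1, 4.2, 6.4, 8.5, 10.7, 12.8, 15.0, 17.1]
sig = [1000.0 * math.exp(-te / 25.0) for te in tes]
sig[6] *= 3.0
sig[7] *= 5.0
t2star, r2, used = fit_dropping_late_echoes(tes, sig)
```

Dropping the two artifact-affected echoes recovers the underlying decay, mirroring the manual curve re-fitting shown in Fig. 5e.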
Echo times in this study were selected for cardiac imaging and were therefore not optimal for tissues with T2* values substantially higher or lower than that of myocardium. Native blood pool and post-USPIO bone marrow (Fig. 6) provide examples of high and low T2* values, respectively, for which we had difficulty accurately fitting a T2* decay curve. With high T2* values, only a short portion of the decay curve is sampled over the echo-time range, and often the signal has not decayed sufficiently for an accurate decay curve to be fitted. In contrast, regions with particularly short T2* decay times have decayed to the level expected from background noise before sufficient data sampling has been made. Fitting a decay curve from a small number (2–4) of echo times is therefore clearly difficult, and often too much emphasis is
**Fig. 5** Inferior blooming artifact. Example illustrating the challenge of assessing whether the inferior myocardial signal attenuation arrowed on the T2* colourmap (**a**, scale 0–60 ms) is true or caused by artifact. Drawing a region of interest (**b**) and examining the decay curve (**c**), along with visualising the individual echoes (**d1–d8**), helps determine that this is a ‘blooming artifact’ arising from outside the heart, seen to influence echoes 4–8. These echoes can be manually removed, forming a new decay curve (**e**) with improved curve fitting ($R^2$ value), although with fewer fitting points.
**Fig. 6** Example of high and low T2* values. For regions of interest with excessively high or low T2* values (pre-contrast blood pool, **a**, and post-USPIO bone marrow, **b**, respectively) it can often be difficult to generate an accurate T2* decay curve. Imaging with tissue-specific echo times will help generate more accurate T2* decay curves.
placed upon data that have decayed to the baseline level of background noise. Allowances can be made for background noise, but these are of limited value in this instance. We strongly advise applying tissue-specific echo times tailored to the expected T2* value in order to achieve the most accurate decay curves possible.
**Limitations**
There are some limitations that should be taken into account when interpreting these data. First, the sample size is small, and a larger cohort should be studied to further validate these normal values. Furthermore, for geographical reasons it was not feasible to scan the same participants at both centres, so the comparison cohorts at 1.5 T and 3 T were different. Despite this, both were healthy volunteer groups with no baseline differences, so we do not believe this has affected the results. Finally, given the problems in interpreting very high and low T2* values discussed above, we recommend caution in interpreting some high non-cardiac R2* values, especially in the organs of the reticuloendothelial system at 3 T. In these organs the spread of R2* data above the median value appears wide; this is possibly caused by artifact, is most evident at 3 T, and may additionally explain why these regions have disproportionately high R2* values.
**Conclusion**
We have shown that ferumoxytol-enhanced MRI is feasible at both 1.5 T and 3 T, and we report expected normal values post-ferumoxytol across a range of tissues. Refinement of dose administration, optimization of acquired echo times, careful image analysis, and development of post-processing and analysis software capable of excluding common artifacts are essential to ensure reliable and robust quantification of tissue enhancement.
**Abbreviations**
LGE: late gadolinium enhancement; MRI: magnetic resonance imaging; ROI: regions of interest; USPIO: ultrasmall superparamagnetic particles of iron oxide.
**Funding**
This work was supported by the British Heart Foundation (FS/12/83). CS is supported by the Chief Scientist Office (ETM/266). SA and DEN are supported by the British Heart Foundation (FS/12/83; CH/09/002). DEN is the recipient of a Wellcome Trust Senior Investigator Award (WT103782AIA). Edinburgh Clinical Research Facility and the Clinical Research Imaging Centre are supported by NHS Research Scotland (NRS) through NHS Lothian. SS has received funding for this work via the British Heart Foundation Centre of Research Excellence award for the University of Edinburgh.
**Authors’ contributions**
CS, SA, DEN and SS designed the study, collected and analysed data, and drafted the manuscript. TM and CG analysed and interpreted data, and drafted the manuscript. MD, JP, SP, MP, RG, SM and PH designed the study and drafted the manuscript. All authors read and approved the manuscript.
**Competing interests**
The authors declare that they have no competing interests.
**Author details**
1British Heart Foundation/University Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK. 2Clinical Research Imaging Centre, University of Edinburgh, Edinburgh, UK. 3Edinburgh Clinical Research Facility, University of Edinburgh, Edinburgh, UK. 4Department of Cardiology, Golden Jubilee National Hospital, Clydebank, UK. 5Department of Cardiology, Royal Brompton Hospital, London, UK.
**Received:** 19 January 2016
**Accepted:** 28 June 2016
**Published online:** 27 July 2016
**References**
1. Hahn PF, Stark DD, Lewis JM, Saini S, Elizondo G, Weissleder R, Fretz CJ, Ferrucci JT. First clinical trial of a new superparamagnetic iron oxide for use as an oral gastrointestinal contrast agent in MR imaging. Radiology. 1990; 175:695–700.
2. Saini S, Stark DD, Wittenberg J, Brady TJ, Ferrucci JT. Ferrite particles; a superparamagnetic MR contrast agent for the reticuloendothelial system. Radiology. 1987;162:211–6.
3. Rogers JM, Lewis J, Josephson L. Visualization of superior mesenteric lymph nodes by the combined oral and intravenous administration of the ultrasmall superparamagnetic iron oxide, AMI-227. Magn Reson Imaging. 1994;12:1161–5.
4. Canet E, Revel D, Forrat R, Baldy-Porcher C, de Lorgeril M, Sebbag L, Vallee JP, Didier D, Amiel M. Superparamagnetic iron oxide particles and positive enhancement for myocardial perfusion studies assessed by subsecond T1-weighted MRI. Magn Reson Imaging. 1993;1:1139–45.
5. Ros PR, Freney PC, Harms SE, Seltzer SE, Davis PL, Chan TW, Stillman AE, Murollo LR, Runge VM, Nissenbaum MA. Hepatic MR imaging with ferumoxides: a multicenter clinical trial of the safety and efficacy in the detection of focal hepatic lesions. Radiology. 1995;196:481–8.
6. Kroft LJW, Doornbos J, van der Geest RJ, van der Laarse A, van der Meulen H, de Roos A. Ultrasmall superparamagnetic particles of iron oxide (USPIO) MR imaging of infarcted myocardium in pigs. Magn Reson Imaging. 1998;16:755–63.
7. Taylor AM, Panning JK, Keegan J, Gatterhouse PC, Armitage P, Rodd P, Yang GZ, McGill S, Burman ED, Francis JM, Firmin DN, Pennell DJ. Safety and preliminary findings with the intravascular contrast agent NC100150 injection for MR coronary angiography. J Magn Reson Imaging. 1999;9:220–7.
8. Alam SR, Shah ASV, Richards J, Lang NN, Barnes G, Joshi N, MacGillivray T, McKillop G, Mirsadraee S, Payne J, Fox KAA, Henriksen P, Newby DE, Semple SJK. Ultrasmall Superparamagnetic Particles of Iron Oxide in Patients With Acute Myocardial Infarction: Early Clinical Experience. Circ Cardiovasc Imaging. 2012;5:559–65.
9. Yilmaz A, Dengler MA, van der Kuip H, Yildiz H, Rosch S, Klumpp S, Klingel K, Kandolf R, Helluy X, Hiller KH, Jakob PM, Sechtem U. Imaging of myocardial infarction using ultrasmall superparamagnetic iron oxide nanoparticles: a human study using a multi-parametric cardiovascular magnetic resonance imaging approach. Eur Heart J. 2013;13:4462–75.
10. Richards JMJ, Semple SJ, MacGillivray TJ, Gray C, Langrish JP, Williams M, Dweck M, Wallace W, McKillop G, Chalmers RTA, Garden OJ, Newby DE. Abdominal Aortic Aneurysm Growth Predicted by Uptake of Ultrasmall Superparamagnetic Particles of Iron Oxide: A Pilot Study. Circ Cardiovasc Imaging. 2011;4:274–81.
11. McBride OMB, Berry C, Burns P, Chalmers RTA, Doyle B, Forsythe R, Garden OJ, Goodman K, Graham C, Hoskins P, Holdsworth R, MacGillivray TJ, McKillop G, Murray G, Oatey K, Robson JM, Roditi G, Semple S, Stuart W, van Beek EJR, Vesey A, Newby DE. MRI using ultrasmall superparamagnetic particles of iron oxide in patients under surveillance for abdominal aortic aneurysms to predict rupture or surgical repair: MRI for abdominal aortic aneurysms to predict rupture or surgery-the MA3(RS) study. Open Heart. 2015;2:e000190.
12. Trivedi RA. Identifying Inflamed Carotid Plaques Using In Vivo USPIO-Enhanced MR Imaging to Label Plaque Macrophages. Arterioscler, Thromb, Vasc Biol. 2006;26:1601–6.
13. Trivedi RA, U-King-Im JM, Graves MJ, Cross JJ, Horsley J, Goddard MJ, Skerpet NJ, Quartey G, Warburton E, Joubert I, Wang L, Kirkpatrick PJ, Brown J, Gillard JH. In vivo detection of macrophages in human carotid atheroma: temporal dependence of ultrasmall superparamagnetic particles of iron oxide-enhanced MRI. Stroke. 2004;35:1631–5.
14. Tang T, Howarth SPS, Miller SR, Trivedi R, Graves MJ, King-Im JU, Li ZY, Brown AP, Kirkpatrick PJ, Gaunt ME, Gillard JH. Assessment of inflammatory burden contralateral to the symptomatic carotid stenosis using high-resolution ultrasmall, superparamagnetic iron oxide-enhanced MRI. *Stroke*. 2006;37:2266–70.
15. Tang TY, Howarth SPS, Miller SR, Graves MJ, Patterson AJ, U-King-Im J-M, Li ZY, Walsh SR, Brown AP, Kirkpatrick PJ, Warburton EA, Hayes PD, Varty K, Boyle JR, Gaunt ME, Zalewski A, Gillard JH. The ATHEROMA (Atorvastatin Therapy: Effects on Reduction of Macrophage Activity) Study/Evaluation Using Ultrasmall Superparamagnetic Iron Oxide-Enhanced Magnetic Resonance Imaging in Carotid Disease. *JAC*. 2009;5:2039–50.
16. Anderson LJ, Holden S, Davis B, Prescott E, Charrier CC, Bunce NH, Firmin DN, Wonke B, Porter J, Walker JM, Pennell DJ. Cardiovascular T2-star (T2*) magnetic resonance for the early diagnosis of myocardial iron overload. *Eur Heart J*. 2001;22:171–9.
17. Anderson LJ, Westwood MA, Holden S, Davis B, Prescott E, Wonke B, Porter JB, Walker JM, Pennell DJ. Myocardial iron clearance during reversal of siderotic cardiomyopathy with intravenous desferrioxamine: a prospective study using T2* cardiovascular magnetic resonance. *Br J Haematol*. 2004;127:348–55.
18. Westwood MA, Anderson LJ, Firmin DN, Gatehouse PD, Lorenz CH, Wonke B, Pennell DJ. Interscanner reproducibility of cardiovascular magnetic resonance T2* measurements of tissue iron in thalassemia. *J Magn Reson Imaging*. 2003;18:616–20.
19. Westwood M, Anderson LJ, Firmin DN, Gatehouse PD, Charrier CC, Wonke B, et al. A single breath-hold multiecho T2* cardiovascular magnetic resonance technique for diagnosis of myocardial iron overload. *J Magn Reson Imaging*. 2003;18:33–9.
20. Carpenter J-P, He T, Kirk P, Anderson LJ, Porter JB, Wood J, Galanello R, Forni G, Catani G, Fucharoen S, Fleming A, House M, Black G, Firmin DN, Pierre TGS, Pennell DJ. Calibration of myocardial iron concentration against T2-star. *Cardiovascular Magnetic Resonance*. *J Cardiovasc Magn Reson*. 2009;11:1–2.
21. Kirk P, He T, Anderson LJ, Roughton M, Tanner MA, Lam WWM, Au WY, Chu WCW, Chan G, Galanello R, Matta G, Fogel M, Cohen AR, Tan RS, Chen K, Ng J, Lai A, Fucharoen S, Lalothamata J, Churcharunee S, Jongjirasiari S, Firmin DN, Smith GC, Pennell DJ. International reproducibility of single breathhold T2* MR for cardiac and liver iron assessment among five thalassemia centers. *J Magn Reson Imaging*. 2010;32:215–9.
22. Carpenter J-P, He T, Kirk P, Roughton M, Anderson LJ, de Noronha SV, Sheppard MN, Porter JB, Walker JM, Wood JC, Galanello R, Forni G, Catani G, Matta G, Fucharoen S, Fleming A, House MJ, Black G, Firmin DN, St Pierre TG, Pennell DJ. On T2* magnetic resonance and cardiac iron. *Circulation*. 2011;123:1519–28.
23. Ruehm SG, Corot C, Vogt P, Kolb S, Debatin JF. Magnetic Resonance Imaging of Atherosclerotic Plaque With Ultrasmall Superparamagnetic Particles of Iron Oxide in Hyperlipidemic Rabbits. *Circulation*. 2001;103:415–22.
24. Cerqueira MD, Weissman NJ, DiSizian V, Jacobs AK, Kaul S, Laskey WK, Pennell DJ, Rumberger JA, Ryan T, Yeranu MS, Myoca AHAWG. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: A statement for healthcare professionals from the Cardiac Imaging Committee of the Council on Clinical Cardiology of the American Heart Association. *J Am Soc Echocardiogr*. 2002;15:463–7.
25. Shah S, Xue H, Greiser A, Weale P, He T, Firmin DN, Pennell DJ, Zuehlsdorff S, Guehring J. Inline myocardial T2* mapping with iterative robust fitting. *J Cardiovasc Magn Reson*. 2011;13:P308.
26. He T, Gatehouse PD, Kirk P, Tanner MA, Smith GC, Keegan J, Mohiaddin RH, Pennell DJ, Firmin DN. Black-blood T2* technique for myocardial iron measurement in thalassemia. *J Magn Reson Imaging*. 2007;25:1205–9. |
AUTONOMOUS VEHICLES: THE RACE IS ON
Self-driving cars are capturing news headlines and people’s imaginations. Is it really possible to read a book or watch TV while the self-driving car commutes to work? Is there a need to be in the driver’s seat at all? Confusion reigns for good reason. There are significant capability differences between the assists built into today’s cars and a true self-driving car of the future. But, one thing is certain:
THE RACE IS ON.
Standards created by SAE International measure the self-driving capability of a car on a scale of zero to five, where zero represents no automation and five is defined as “full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.”¹ Audi’s 2019 A8, which features level three autonomous technology, will allow the car to start, accelerate, steer and brake on any road where there is a central barrier between traffic directions.² Tesla anticipates its vehicles will have level five autonomy in about two years³ while, in November 2017, Waymo achieved a major milestone when it became the first company to have autonomous vehicles on U.S. public roads with no human in the driver’s seat.⁴ Independent of the timing question, dozens of companies, from Volvo, BMW, Mercedes Benz, Chrysler and Ford to Bosch, Uber, Apple, and Intel are working alone or in collaboration in the highly competitive, and rapidly advancing, autonomous vehicle race.
This AV race has several mountainous challenges: engineering, regulatory approval, a lack of industry-standardized technology and tools, and consumer trust and acceptance, to name a few. At each progressive level of autonomy the challenges become more difficult. But among the most mountainous of these challenges is data. Underlying an automobile’s autonomous capabilities are volumes and volumes of data, required both for training its AI systems and for real-time decision making once those same systems are deployed.
FOUR DATA CONSIDERATIONS IN THE AUTONOMOUS VEHICLE RACE
While there are different approaches to training autonomous vehicle computer vision models, many companies choose deep learning. For vehicles to advance to higher levels of autonomy through deep learning, their models need volumes of data produced from sensors, such as camera, radar, LiDAR, and ultrasonic data. This creates an acute challenge.
In one day, just one test autonomous vehicle produces as much data as the Hubble Space Telescope produces in a full year.\(^6\) Acquiring the data is difficult, storing it takes massive space and labeling and annotating it accurately takes tremendous resources. Given that a failure of an autonomous vehicle could result in serious injury or loss of life, high-quality training data is vital to the AV’s mission-critical computer vision systems’ ability to learn patterns and operate safely.
It behooves companies to consider data-related processes and infrastructure needs early in research and development to pre-empt the complex issues that arise as operations scale. Without efficient data management, the sheer resources the process will consume can dramatically slow innovation.
Here are four areas to consider when developing methods to manage extraordinary amounts of data for use by AVs.
Data Acquisition
Companies are investing significant time, effort and money in both deploying car fleets that gather real world data via various sensors and also in developing simulated environments to complement the real world. Rightly so. AI must be exposed to a huge diversity of scenarios to identify patterns and learn what the AV could encounter on the road. Data from various topographies, urban and rural areas, weather conditions (rain, fog, snow, cloudy, sunny), road types, and country variances such as left- and right-side driving are all needed to train AVs. In aggregate this is a lot of data. For instance, as of December 2016, Tesla had collected more than 1.3 billion miles of data from Autopilot-equipped vehicles operating under diverse road and weather conditions around the world.\(^7\)
Companies can gain speed and efficiency in data acquisition by optimizing the data requirements and collection approach.
A company’s data acquisition should consider a balance of three factors:
1. The portfolio of scenario coverage needed
2. The urgency of collection in the context of time-to-market schedules
3. Available resources
A plan that balances these three elements will eliminate data redundancy and help ensure data acquisition meets comprehensive needs while running as fast and efficiently as possible given available resources.
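One way to operationalize that balance is a simple priority score per collection scenario. This is a hypothetical sketch: all field names, weights and numbers are invented, not an industry formula.

```python
def prioritise_scenarios(scenarios):
    """Rank data-collection scenarios by balancing the three factors
    above: coverage gap (needed minus collected hours), urgency, and
    the resource cost of collecting."""
    def score(s):
        gap = max(s["needed_hours"] - s["collected_hours"], 0)
        return gap * s["urgency"] / s["cost_per_hour"]
    return sorted(scenarios, key=score, reverse=True)

backlog = [
    {"name": "night rain, urban",  "needed_hours": 500, "collected_hours": 50,  "urgency": 3, "cost_per_hour": 2.0},
    {"name": "clear day, highway", "needed_hours": 500, "collected_hours": 480, "urgency": 1, "cost_per_hour": 1.0},
    {"name": "snow, rural",        "needed_hours": 300, "collected_hours": 0,   "urgency": 2, "cost_per_hour": 3.0},
]
ranked = prioritise_scenarios(backlog)
```

Scenarios that are already well covered fall to the bottom of the queue, which is one way to eliminate redundant collection.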
Data Storage
Test AVs today generate between 4 and 6 TB of data per day, with some producing as much as 8–10 TB depending on the number of mounted devices and their resolution.\(^8\) By comparison, the typical person’s video, chat and other internet use averages about 650 MB per day. That means, on the low end, the data generated by one test car in one day is roughly equivalent to that of nearly 6,200 internet users.\(^9\) If not sufficiently considered, the technical decisions associated with storing such high volumes of sensor data of differing formats, sizes and characteristics can halt a project.
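A quick check of the arithmetic behind that comparison, using the figures from the text:

```python
# Low-end test-AV output vs. typical daily internet use (figures from the text)
av_tb_per_day = 4
person_mb_per_day = 650
equivalent_users = av_tb_per_day * 1_000_000 / person_mb_per_day  # 1 TB = 10^6 MB
# roughly 6,150 users' worth of data from one test vehicle in one day
```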
Researchers commonly embark on data storage by purchasing hardware or cloud storage within the department. While having disparate teams create point solutions is easy and expedient in the short term, this approach can’t scale successfully or economically as the storage challenge mounts. Further, having mission-critical data stored in a distributed and unsecured manner introduces risk. Therefore, it’s smart to engage a company’s centralized IT team early before distributed teams get too far down a difficult path.
Considerations when developing a robust and scalable data storage strategy include:
1. **Will you use on-premise or cloud infrastructure?** If you leverage a hybrid infrastructure how will you connect on-premise and cloud?
2. **How will you offload data from the data collection vehicles?** Such high volumes and varied terrains mean on-vehicle storage is needed. How will you move data from the vehicles to the storage infrastructure?
3. **How will you secure the data at each stage of the collection, annotation and usage process?**
4. **How will you determine which of the data you have is usable and which is not?** For example, if a camera lens cracks on a data collection vehicle halfway through the day, or the lens fogs due to rain, some portion of a 10-to-12-hour video stream may not be usable. How will you know that and accommodate it?
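Question 4 in particular benefits from logging sensor health alongside the data. A minimal sketch of tagging unusable windows, with all names and numbers hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SensorFault:
    sensor: str
    start_s: float   # seconds into the recording
    end_s: float     # end of the affected window

def usable_seconds(total_s, faults, sensor):
    """Subtract fault windows (cracked lens, fogging, ...) logged for one
    sensor from a recording's duration. Assumes fault windows for a given
    sensor do not overlap."""
    lost = sum(f.end_s - f.start_s for f in faults if f.sensor == sensor)
    return total_s - lost

# A 10-hour drive where the front camera fogged for the final 3 hours
faults = [SensorFault("front_camera", 7 * 3600, 10 * 3600)]
ok = usable_seconds(10 * 3600, faults, "front_camera")   # 7 usable hours
```

Recording faults at collection time is far cheaper than discovering them during annotation.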
Data Management
Companies developing AV functions such as lane departure warning, automatic emergency braking or parking assist have unique data annotation and labeling requirements based on their specific models. Different teams may pull from the same data lake to create datasets that train models supporting various functions. This is where tracking the data’s origin, what happens to it, and where it moves over time becomes an important issue. A single dataset could be broken into smaller ones based on various criteria, and one image in a data lake could require different annotation types (bounding boxes, segmentation masks, polylines, etc.) to support different functions, with each annotation saved as an independent file.
To understand and track what the data contains – and therefore gain full leverage from it – companies need a strategy, approach, policy and data platform for longitudinal data management. Information on the source data such as the locations where it was collected, what streets were covered, what intersections were recorded, whether the data is from day or night, or sun or rain all needs to be recorded and associated with the data to aid in scene selection and to ensure the full portfolio of data requirements are being met. Scene selection is particularly important for supporting sensor fusion, where researchers combine data from different sensors and sensor types to use the combined information to perceive the environment more accurately. Information on the data’s journey over time through various annotations, labeling needs, and training uses also must be tracked to maintain data integrity and usability.
Finally, consider: how will you educate and communicate to all research teams where the source data resides, what it contains and how it can be accessed?
Data Labeling
As car fleets traverse the roads, they collect many different types of data. Many vehicles have multiple sensors (radar, ultrasound, LiDAR, cameras), each gathering different, complementary data. In just one frame from one camera there can be hundreds of objects to label accurately. By some estimates each hour of data collected takes almost 800 human hours to annotate.\(^{10}\) The massive scale of this challenge is impeding many companies from moving as quickly as they would like.
There are a few important considerations when annotating and labeling AV data.
**Provide Clarity on What to Capture**
In a simple traffic intersection there may be hundreds of different possible objects to identify, so creating guidelines on what and how to annotate and label them is critical for efficiency and consistency. Guidelines should define what objects are considered qualified (e.g., passenger vehicles that are >50% visible), and the capture criteria for them (e.g., does the annotation cover “enough” of the object to be acceptable).
**Determine the Toolsets Needed to Best Label and Annotate Objects Across Data Formats**
The value of using the right tools for each annotation task can’t be overstated. For instance, you might need to draw bounding boxes for object localization and detection, apply text labels and draw cuboids for metadata attribution, or create polylines to outline road and lane markings. The same tools you use for these annotation types may not work for segmentation masks, which require outlining overlapping objects and objects that share boundaries with 100% precision.
**Consider Economies of Scale**
As companies move from research to prototype to production, the scale of data annotation needs increases exponentially, and the risk associated with bad data increases in parallel. Training data needs are outstripping any single company’s ability to hire enough internal resources to address them at scale, even with expected improvements in algorithmic labeling of data sets. For many companies the unit economics associated with building the expertise and capacity just don’t make sense. In these cases, the most viable option may be turning to third parties that are able to build and manage a workforce of annotators at scale to serve multiple entities’ needs.
CHARTING A PATH TO WIN THE RACE
Whether just embarking on aggregating data for AVs or well into the race, companies that are aware and ahead of these data challenges will circumvent issues that could potentially halt their progress.
For those early in the data collection process, consideration of one’s data approach and thoughtful decision-making regarding relevant tradeoffs will help ensure an action plan that is both executable and expeditious. For those where data collection is becoming increasingly precarious, a careful retrofit that leverages what is already in place can take the organization to a more secure, accessible and sustainable data approach.
Together Accenture and Mighty AI are deeply experienced in understanding and addressing AV data management challenges. Mighty AI works with perception and research teams to get them the high-quality training data they need to ensure their vehicles are safe. Accenture brings its broad set of skills and capabilities to bear as companies think through data strategy, collection, storage and use.
FOR A MORE IN-DEPTH DISCUSSION CONTACT
Michael Stephan
Mighty AI
firstname.lastname@example.org
Paul Lalancette
Accenture | Industry X.0
email@example.com
Andrea Regalia
Accenture | Products OEM
firstname.lastname@example.org
Martin Stoddart
Accenture | Webscale
email@example.com
Matthew Quinlan
Accenture | Webscale
firstname.lastname@example.org
NOTES
1 For more background on the SAE definition of each level of autonomy in advanced driver-assistance systems, see https://www.sae.org/misc/pdfs/automated_driving.pdf.
2 Vijayenthiran, Viknesh, “Audi A8 to be first with ‘Level 3’ self-driving capability, but regulations holding back tech,” Motor Authority, April 25, 2017.
3 Lambert, Fred, “Elon Musk clarifies Tesla’s plan for level 5 fully autonomous driving: 2 years away from sleeping in the car,” Electrek, April 29, 2017.
4 Salesky, Bryan, “A Decade after DARPA: Our View of the State of the Art in Self-Driving Cars,” Medium, October 16, 2017.
5 Hawkins, Andrew, “Waymo is first to put fully self-driving cars on US roads without a safety driver,” The Verge, November 7, 2017.
6 “A Look at the Numbers as NASA’s Hubble Space Telescope Enters its 25th Year,” NASA, May 12, 2014.
7 Hull, Dana, “The Tesla Advantage: 1.3 Billion Miles of Data,” Bloomberg, December 20, 2016.
8 Maloney, Kieran, “The data management implications of the Federal Automated Vehicles Policy,” IoT News, January 3, 2017.
9 Beres, Damon, “Each autonomous car will one day generate more data than thousands of people,” Mashable, August 17, 2016.
10 Burke, Katie, “Humans help train their robot replacements,” Automotive News, August 27, 2017.
ABOUT ACCENTURE
Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions—underpinned by the world’s largest delivery network—Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With more than 435,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.
ABOUT MIGHTY AI
Mighty AI’s platform enables autonomous vehicle technology-makers to continuously tune and modify their datasets based on the needs of their AI models. We pair machine learning with human intelligence to revolutionize how companies manage sensor data, generate datasets, and validate their computer vision models. Visit www.mty.ai to learn more, and follow us at @mighty_ai.
Mixed Movements Monitoring
October - December 2023
Key Figures
- **3,969** Individuals were interviewed from October to December 2023.
- **39** Nationalities interviewed
- **52%** VENEZUELAN
- **17%** HONDURANS
- **6%** CUBANS
- **5%** COLOMBIANS
- **5%** ECUADORIANS
Main countries of origin of respondents
- **GUATEMALA: 1,411**
- **MEXICO: 1,113**
- **COSTA RICA: 613**
- **HONDURAS: 504**
- **PANAMA: 328**
Number of interviews per country
- **54%** Men
- **46%** Women
- **32 years** Average age of respondents
Introduction and scope
In recent years, Central America has witnessed a significant increase in mixed movements, a term used to describe the cross-border movement of individuals and groups who travel alongside each other, using similar routes and means of transport or smugglers, but for different reasons. Generally, individuals in mixed movements travel in an irregular manner due to difficulties in accessing territory or meeting State entry requirements, among other factors. People travelling as part of mixed movements have different needs and profiles and may include asylum-seekers, refugees, victims of trafficking, unaccompanied or separated children, stateless persons, and migrants, including migrants in irregular situations and migrants in vulnerable situations.
With the objective of generating evidence on mixed movements’ dynamics in the Americas, the United Nations High Commissioner for Refugees (UNHCR) and the World Food Programme (WFP) have embarked on a regional monitoring project enabling agencies to better understand trends, profiles, and protection and food security needs of people on the move. Gathering comprehensive data on mixed movements is essential for facilitating evidence-based responses to the protection and assistance needs of individuals on the move. This data not only enables us to address immediate concerns, but also plays a pivotal role in producing information for diverse forums discussing mixed movements within the United Nations, including the Issue-Based Coalition on Human Mobility (IBC-HM) as well as regional inter-governmental collaborative initiatives. The Mixed Movements Monitoring serves as a crucial component of our commitment to advancing this cause, guaranteeing well-informed and effective contributions.
Key Findings
1. Multiple interconnected factors continue to push individuals into situations of human mobility. Among the respondents, 72% left their country of origin due to difficulties in accessing fundamental rights and meeting basic needs, while 57% cited reasons associated with pervasive violence or personal experience of violent incidents.
2. 3 out of 5 individuals either experienced or witnessed protection incidents along the route (mostly theft, extortion, fraud, or physical threat or assault), underscoring serious concerns regarding the overall protection environment.
3. Individuals exposed to protection incidents continue to be more likely to also face challenges related to food security, and vice versa.
4. In Q4, food insecurity persisted as a major issue: 83% of those surveyed reported adopting coping strategies such as eating less or skipping meals, sometimes for entire days, within the past week.
5. In Costa Rica, 71% of individuals reported having only one meal or none at all on the previous day. This situation is exacerbated by extended travel times, including long waiting periods at arrivals and stops, often at night, as well as a shortage of public shelters and food assistance programs for displaced people.
This report addresses multiple aspects of mixed movements, examining critical factors such as the motivating and triggering factors compelling individuals to leave their country of origin or their host country. It also examines the profiles of those engaged in mixed movements, the journey itself (including protection risks and threats), the condition in the current country (such as documentation, access to territory), food security, and the future aspirations of individuals. The report provides insights into the challenges and opportunities associated with mixed movements in Central and North America, with the aim of informing policy and guiding effective humanitarian response efforts.
**Methodology**
As part of UNHCR-WFP ongoing monitoring efforts, the fourth round of data collection was conducted between October 1st and December 31st, 2023. The questionnaire was administered in Panama, Costa Rica, Honduras, Guatemala, and Mexico. This round of data collection was specifically concentrated at border points, both official and unofficial, where mixed movements are most prominent. Qualitative research, through standardized interviews with key informants and focus group discussions, was conducted in Costa Rica, Guatemala, Honduras, Mexico, and Panama. In addition, the monitoring exercise continues to incorporate secondary data, as well as qualitative information from joint analysis sessions and field observations, particularly from Colombia\(^1\).
Data collection within the framework of mixed movements encounters several challenges and limitations, largely attributed to the dynamic nature of these movements. Significant challenges involve accessing remote and border regions, which are often hindered by inadequate infrastructure and security issues. Moreover, given that most movements are irregular, challenges are also faced in accessing individuals near border areas where fears around detention or deportation are heightened. Limitations become particularly pronounced in situations involving individuals from countries outside the Americas, where cultural and linguistic barriers exist, highlighting the complexities of gathering data across diverse geographic and socio-political landscapes.
Therefore, findings are only representative of the people who were interviewed and cannot be extrapolated to all people on the move. They provide, however, information on the protection environment, protection trends over time, rights violations, and risks, as well as food security issues faced by the population engaged in mixed movements.
**Data collection locations**
Most interviews conducted in this exercise occurred at formal and informal border crossing points and reception facilities, including shelters surrounding these areas, accounting for 80% of the total. Non-border locations encompass strategic transit facilities, including bus terminals, shelters, and reception sites situated in capital cities or larger urban or peri-urban centers. These non-border locations serve as gathering points for individuals in transit, where they seek support, assistance, and transportation means.
\(^1\) Field observations in Colombia stem from the Necocli Field Journal, a qualitative data exercise conducted by UNHCR Colombia and its implementing partners. It was created as part of an information strategy to identify the protection risks and incidents faced by refugees and migrants along the route or during their stay in Necocli. The entries provide aggregate information regarding their profile, perceptions, and experiences prior to the Darien crossing.
Understanding the Human Mobility Context
In 2023, the Central America and Mexico region experienced an unprecedented surge in human mobility, marked by a significant increase in mixed movements. This phenomenon was driven by a combination of factors including insecurity, violence, human rights abuses, poverty, inequality, environmental degradation, and the effects of climate change. Despite increasingly stringent cross-border mobility policies, this situation led to a large number of individuals of various nationalities embarking on perilous journeys, often relying on irregular movements and smuggling networks. The repercussions were dire, resulting in heightened risks of death, disappearances, extortions, sexual violence, and other human rights violations, with a distressing report of 1,275 individuals missing\(^2\).
This surge in mixed movements not only put refugees and migrants in grave danger but also placed immense pressure on reception systems. It underscored the critical need for a comprehensive strategy to manage mixed movements effectively and to protect the well-being and human rights of those on the move.
While the fourth quarter of 2023 saw the usual seasonal decrease in arrivals to the Darien region, the year concluded with a record-breaking 520,085 arrivals, representing a 110% increase from the previous year. Venezuelans, Ecuadorians, and Haitians were the primary nationalities among those on the move, in addition to significant numbers of people arriving from outside the continent, including individuals from India, Afghanistan, and China, highlighting the global scope of the crisis.
Amidst these developments, Panama faced internal challenges that affected the transit of people taking part in mixed movements. On 20 October, nationwide protests erupted following the ratification of a controversial law involving a concession contract with Minera Panama, S.A., leading to severe disruptions. Protracted road closures by the protesters temporarily halted the transfer of people from the Darien’s reception points to Costa Rica’s South Migration Station (EMISUR). The situation began to normalize after the Supreme Court of Justice of Panama declared the law unconstitutional on 28 November, leading to the dissolution of roadblocks and the resumption of transportation services for migrants and refugees.
In Costa Rica, reception conditions at the South border were reorganized at EMISUR; however, as this space functioned both as a transport terminal and as a temporary shelter, local capacities to ensure intersectoral humanitarian assistance and protection services were challenged. After a 10–12-hour bus journey, persons in mixed movements arrive at Los Chiles municipality (North) in dire humanitarian conditions, e.g., food insecurity, and are exposed to protection risks derived from a lack of shelter.
Honduras and Guatemala have both witnessed notable upticks in irregular migration patterns, accompanied by heightened protection concerns. Notably, Honduras experienced a staggering 189% surge in movements, largely attributed to a migration amnesty favoring individuals in transit who enter the country irregularly. Additionally, there has been a notable increase in arrivals from Nicaragua, including individuals from diverse nationalities such as Cuba, Haiti, and various African countries, who have benefited from simplified visa processes in neighboring Nicaragua. Meanwhile, Guatemala has seen alarming rates of theft and extortion, particularly targeting Venezuelan and Honduran migrants.
In Mexico, the situation was similarly challenging, with over 782,186 events involving individuals in an irregular migration status reported, alongside a significant rise in asylum applications. This trend underscores the broader narrative of mixed movements heading northward towards Mexico and the United States, where 2023 ended with over 2.5 million encounters at the U.S. Southwest border.
The complexity and scale of the situation across the region highlight the urgent need for a coordinated and humane approach. It is imperative to address the multifaceted challenges of mixed movements comprehensively, ensuring the safety and protection of those in transit, reflecting a collective responsibility towards human rights and dignity.
---
\(^2\) IOM Missing Migrants Project, 2023: [https://missingmigrants.iom.int/downloads](https://missingmigrants.iom.int/downloads)
The Mixed Movement Monitoring interviewed people of 39 different nationalities. Almost two-thirds of the people interviewed are from a South American country, which marks a 6% increase compared to the previous quarter and a 21% increase since the first quarter. This is primarily attributed to the large number of Venezuelans who were interviewed and the increase in the number of respondents from Ecuador. The rise in the number of Ecuadorian respondents could also be indicative of the escalating insecurity and violence prevailing within the country.
In contrast to previous quarters, there was a decline in the number of respondents from Central America. Hondurans (71%) stand out as the predominant nationality among Central American respondents, followed by smaller numbers of Guatemalans, Salvadorans, and Nicaraguans. The Caribbean constitutes the third-largest region of origin, accounting for 10% of the total, which reflects a 10% decrease from the first quarter. This is followed by a smaller fraction of individuals originating from Asia and Africa (2%), mirroring the figures from previous quarters. Mexico and Panama stood out as the countries with the highest diversity of nationalities among those interviewed. In Panama, individuals from 22 different countries were interviewed, with Venezuelans being the most common nationality. Mexico followed closely, with interviews conducted with individuals from 19 distinct countries, where Hondurans were the predominant group. Across most of the surveyed countries, Venezuelans were the primary nationality interviewed, followed by Hondurans and Cubans, highlighting their significant presence.
**Host countries**
- In the survey conducted, 18% of participants reported residing in one or more countries other than their country of origin for a minimum of six months. Among all nationalities, Venezuelans most frequently cited having lived in a host country, predominantly in Colombia (55%), Peru (18%), and Ecuador (15%). This quarter witnessed a slight increase in the percentage of Venezuelans indicating that they have not resided in a host country previously, suggesting that they might have moved directly from their country of origin; specifically, the figure rose from 73% in Q3 to 75% in Q4.
**Group composition**
*Who do you travel with?*
| Group Composition | Percentage |
|-------------------|------------|
| With the entire family | 30% |
| With part of the family | 28% |
| Alone | 25% |
| Friends | 19% |
| Unrelated companions | 6% |
Average family composition
- Average adults in group: 2.6
- Average children in group: 1.7
- Average child < 5 per group: 1
---
3 Host country: The country in which a non-national stays or resides, whether legally or irregularly.
4 The question on host countries was modified since quarter 3, with a reduction in the length of residence from one year to six months.
In the fourth quarter, echoing trends from the previous one, the composition of traveling groups remained fairly balanced. Approximately 30% of respondents traveled with their entire family, while 28% journeyed with part of their family, and another 25% traveled independently. Among those traveling with family, the average group size was 4.3 members, typically comprising two children, one of whom is below 5 years old. These findings highlight the significant number of children on the move across the region.
The dynamics of these groups and families differ based on the participants' origins. Central Americans are more likely to travel alone (35%) and in smaller family units (averaging 3.5 members), whereas South Americans more frequently travel with their family and/or friends, forming larger groups (averaging 5 members).
For extracontinental travelers, there is also a tendency to travel alone (35%), with friends (25%), or with part of the family (24%). Among those traveling with family, the average group size is 4.7 members, including two children, with at least one child below the age of 5.
**LEGAL STATUS**
**Legal status of respondents who have lived in host countries**
Out of the 18% of individuals who have lived in countries other than their country of origin for a minimum of six months, half of them applied for legal status in the host country. Among those who applied, 70% successfully obtained the legal status they sought, accounting for almost 10% of the total respondents.
- **1 in 2** people applied for legal status in the host country
- **70%** of them obtained a legal status
Nearly 70% of individuals successfully obtained the legal status for which they applied. However, a significant 45% lacked documentation to prove their legal status. This lack of documentation could stem from a high incidence of theft during their journey or instances where some individuals sent their documents to family members in other countries along their route. The fourth quarter marked an increase in the number of people holding valid documents from host countries, with 61% of these documents still valid for more than a year, indicating improvement in the stability of their legal status.
This quarter has seen an increase in the number of Venezuelans who left Colombia with permanent residency status – 43%, compared to 18% in the previous quarter. Haitians who lived in a host country obtained mostly permanent residence (49%) and work/study visas (38%). Venezuelans, Hondurans, and Haitians constitute the majority of asylum documentation holders, comprising 10% of the observed permits.
Legal status of family living in host countries
Where is the rest of the family?
- United States: 50%
- Colombia: 20%
- Venezuela: 19%
- Mexico: 7%
- Other: 7%
- Cuba: 6%
Applied for legal status in country
- No: 41%
- Yes: 41%
- Don't know: 9%
- Prefer not to answer: 7%
- Not applicable: 2%
Status obtained in country
- Yes: 85%
- No: 15%
Type of status obtained
- Asylum permit: 35%
- Nationality: 29%
- Permanent residence: 14%
- Special program: 12%
- Work/study visa: 8%
- Other: 6%
- Refugee status: 1%
Central Americans
Where is the rest of the family?
- United States: 75%
- Honduras: 29%
- Mexico: 21%
- El Salvador: 13%
- Guatemala: 8%
South Americans
Where is the rest of the family?
- United States: 45%
- Colombia: 30%
- Venezuela: 29%
- Ecuador: 9%
- Panama: 9%
As described in the previous section, 22% of individuals are traveling with only part of their family, while the remaining family members are primarily located in the United States (45%), Colombia (30%), and Venezuela (29%).
These results vary depending on the interviewees' country of origin: additional family members of individuals from Central America more commonly reside in the United States (75%), Honduras (29%), and Mexico (21%), whereas South Americans have family members primarily in the United States (45%), Colombia (30%), and Venezuela (29%).
Furthermore, among the family members residing in third countries, 40% have applied for legal status, with a significant majority (83%) successfully obtaining it. This success is primarily due to their classification as asylum seekers (47%) or their access to other legal stay arrangements within these countries (17%). Nevertheless, for those that may have been successful in obtaining temporary legal status as asylum seekers, it's important to note that this status is not permanent. Additionally, recognition rates for these nationalities, especially in the U.S., are extremely low. Consequently, they are unlikely to secure long-term status or achieve viable integration and may also face deportation if their claims are rejected.
DOCUMENTATION
Echoing trends observed in the previous quarter, when asked about the documents they currently possess within the country of interview, a majority of respondents reported carrying their national identification cards (79%), while a significant number also have their passports (27%). The distribution of these documents significantly varies when analyzed against the respondents' regions of origin. Individuals from the Caribbean, Asia, and Africa primarily use passports for travel (85%), in contrast to those from South and Central America, who predominantly carry ID cards (84%).
Throughout the year, a consistent pattern has emerged in the prevalence of expired passports, with a slight decrease observed in the last two quarters. Among passport holders, 15% possess documents that have expired. This issue is particularly pronounced among Venezuelans, with 41% holding expired passports. High costs and barriers to accessing renewal procedures are among the primary reasons for their limited access to valid passports, potentially restricting their access to parole programs and other legal pathways.
79% of respondents carry an ID card as their primary form of documentation.
22% of respondents had at least one specific protection need.
22% of respondents with specific needs reported experiencing physical, psychological, or sexual violence and/or abuse.
Persons with specific needs\(^6\) are particularly vulnerable to protection risks and abuses, as the difficult conditions of the journey heighten their susceptibility to abuse and exploitation and put them at risk of lasting or irreversible harm.
The percentage of respondents with one or more specific needs identified has remained the same as the previous quarter, approximately one in four. Among these identified needs, that of a single parent traveling with their children continued to be the most prevalent, accounting for 36% of cases. This trend also points to a significant presence of children in transit. Experience of physical, psychological, or sexual violence and/or abuse (reported by 22% of individuals with specific needs) stands out as the predominant specific need in Costa Rica, where 46% of those reporting specific needs cited instances of violence. Additionally, by the end of December 2023, 15 children born in the Darien to parents taking part in mixed movements had been officially registered by Panamanian authorities, highlighting the severe risks faced by women on the move. Furthermore, numerous births are reported to take place during the jungle crossing, tragically resulting in some stillbirths or mothers not surviving the arduous journey.
**Displacement**
**REASONS TO LEAVE COUNTRY OF ORIGIN**
Participants were queried about the motives behind their departure from their respective countries of origin. This inquiry allowed them to choose one or more reasons, facilitating a comprehensive understanding of the multifaceted factors influencing their decision to leave. In the subsequent analysis, the various responses are classified into three overarching groups: rights-related, violence-related, and other factors\(^7\).
---
\(^5\) The category for “woman at risk” includes risks specific to women such as pregnancy and lactation, which used to be separate categories in previous versions of the survey.
\(^6\) Any person who experiences particular protection risks or barriers due to the intersection of their personal characteristics with the environments, which requires specific targeted actions in order to enjoy the full range of their human rights. Children (especially unaccompanied and separated children), victims of trafficking, women at risk, older persons, and persons with disabilities are among the groups that often have specific protection needs. These persons have the same basic needs as other refugees but often face barriers to having these needs met.
\(^7\) **Rights-related**: This category encompasses factors associated with the lack of access to basic rights and services, including challenges related to employment, low income, food, medical services, or education. **Violence-related**: Within this category, responses are linked to concerns about the general situation of violence or insecurity, as well as instances of being a victim of violence, including threats and intimidation. **Other**: This category encompasses a range of reasons, including but not limited to family reunification, natural disasters, and other options that may not distinctly fall into the rights-related or violence-related categories.
72% of respondents have left their country of origin due to lack of employment opportunities, or barriers in accessing the labor market, as well as rights and services, such as lack of access to food, health, or education.
The most frequently cited reason for leaving one’s own country of origin was the lack of access to employment (67%), aligning with trends observed throughout the year.
57% of respondents cited violence-related factors as a primary motivation for their decision to leave their country of origin.
This encompasses apprehension stemming from the overall climate of violence and insecurity (44%), as well as instances where individuals themselves were victims of violence (21%). In sum, over half of the respondents consistently provide answers indicating that they moved in search of protection and safety.
30% of respondents mentioned both violence and limited access to basic rights and services as reasons to leave their country of origin.
This highlights the intricate and interconnected nature of the factors driving mixed movements.
It is important to consider that the results above vary if checked against the countries of origin of the respondents.
- Respondents from Honduras, Guatemala, and El Salvador, who account for 22% of the total number of interviewees, have a higher prevalence of violence-related reasons for having left their country of origin (68%): 45% declared being a victim of violence and 39% are fleeing generalized violence. Ecuadorians (73%) are another nationality that frequently cited violence-related reasons.
- The prevalence of violence-related reasons to leave one’s country of origin is higher amongst extracontinental nationalities (72%) compared to nationalities from the Americas (56%).
23% of respondents left the country of origin due to lack of food, consistent with the previous quarters.
Women respondents were slightly more likely to state that they left their country of origin due to lack of food (25%) than men respondents (21%), a trend consistent with other quarters.
Cubans (32%) and Haitians (21%) are among the main nationalities that left their country of origin due to lack of access to food. Food security remains a critical factor in the decision to leave one’s country of origin, representing one of the main push factors behind mixed movements in the region.
---
8 In previous quarters, “victim of violence” and “threats/intimidation” were two separate answer options. For the third quarter, the answer options were revised and consolidated. “Victim of violence” now identifies “The person or someone close to them was a victim of violence, threats or intimidation (extortion, assault, GBV, kidnapping, discrimination / xenophobia, etc.)”.
The human mobility patterns observed in the last quarter of the year are consistent with those of the previous period, illustrating the complex and varied routes taken by individuals navigating different regions. These routes are influenced by the travelers’ countries of origin and their intended destinations, with Central American territories playing a key role as transit areas. The selection of these pathways varies across different groups, showcasing the varied strategies they employ to reach their intended destination.
Individuals from Africa and Asia often begin their journeys in Brazil, with 61% choosing it as their starting point within the Americas. They reach Brazil by plane and exit mainly through the land borders of Oiapoque (Amapá) and Assis Brasil (Acre). Their route typically includes moving through Peru, then heading northward through Ecuador and Colombia, or through Guyana, Venezuela, and Colombia, before facing the perilous Darien jungle in Panama. Chinese nationals, by contrast, often commence their journey from Ecuador, benefitting from its flexible visa policy for Chinese nationals.
In Honduras, most of the people on the move entered the country from Nicaragua through the El Paraiso Department, particularly via Las Manos border (65%) and Trojes (22%). The remaining 13% entered mostly through Choluteca department. In October, the number of individuals from Haiti passing through Honduras spiked, multiplying twenty-threefold in comparison to July, primarily due to direct flights from Haiti to Nicaragua. In addition, while most Cubans arrive directly by plane to Nicaragua and then cross into Honduras, some 100 people arrived by boat on the shores of Honduras in 2023. The maritime path between Colombia and Panama has been increasingly used as a route of transit, with monthly averages of 300-400 individuals, including nationals from China, Ecuador, and Venezuela. Concurrently, the route from San Andres Island in Colombia to Nicaragua has become more common. Despite being more costly and equally dangerous, this maritime route is favored by many for its perceived safety of bypassing the Darien jungle. In 2023, Colombian authorities identified 533 people on this route, mainly from Venezuela, China, Vietnam, and Ecuador.
Additionally, individuals from the Caribbean started their journeys from Central American countries due to more lenient visa policies. This approach enables them to avoid the dangers of the Darien jungle.
These evolving transit routes underscore the search for safer and more accessible routes, reflecting the complexity of human mobility as well as people’s resilience in overcoming obstacles to reach safety and security.
---
9 Migración Colombia, 2023: https://unidad-administrativa-especial-migracion-colombia.micolombiadigital.gov.co/sites/unidad-administrativa-especial-migracion-colombia/content/files/001127/56345_min-31-dic.pdf
The map below illustrates the primary routes used by both continental and extracontinental individuals to reach North America:
*The boundaries and names shown and the designations used on this map do not imply official endorsement or acceptance by the United Nations.*
Source: UNHCR and R4V
The number of people who experienced or witnessed a protection incident along the route has continued to grow in the last quarter of 2023 with 62% of respondents, a slight increase from 58% of people in the previous quarter. Theft remains the most prevalent incident type (49%), accompanied by extortion (38%), physical threats/assaults (26%), and fraud (21%).
Numerous participants in focus group discussions highlighted severe protection incidents, especially during the crossing of the Darien Jungle, including instances of robbery, deaths, and kidnappings. Given the challenges in collecting data on gender-based violence (GBV) within the context of mixed movements, it is crucial to note that both qualitative and secondary data indicate an increase in GBV incidents in the Darien during the fourth quarter of 2023, particularly concerning sexual exploitation and abuse. Médecins Sans Frontières (MSF) reported a continuous increase in cases of gender-based violence (GBV), with the organization attending to a total of 676 victims in 2023. Alarmingly, 57% of these cases occurred in the last quarter alone.
Due to gender inequalities, which are amplified in the context of human mobility, as well as compounding factors such as limited legal admission pathways, lack of documentation, and inadequate financial resources, women and girls are also exposed to heightened risks of sexual, physical, and psychological abuse. This is typically perpetrated by individuals, illegal armed and criminal groups, and trafficking networks. Further, the limited information and constrained GBV response services available throughout transit routes represent two main barriers that hinder survivors’ ability to seek assistance.
It is not feasible to monitor the precise prevalence of GBV due to ethical considerations and barriers in access to services. Nonetheless, governments and humanitarian actors should continue prioritizing actions to prevent, mitigate, and respond to incidents of GBV. Timely, multi-sectoral, and life-saving assistance to survivors is fundamental, including specialized case management, psychosocial support, legal assistance, and clinical management of rape, among others.
Upon examining the top five nationalities, Hondurans and Venezuelans surface as the primary reporters of protection incidents in the fourth quarter.
People who experienced or witnessed protection incidents on the route
- Honduras: 73%
- Venezuela: 69%
- Colombia: 58%
- Ecuador: 52%
- Cuba: 16%
People who experienced protection incidents by quarter and continental/extracontinental
- Continental: Q3 59%, Q4 62%
- Extracontinental: Q3 46%, Q4 51%
---
10 Médecins Sans Frontières, 2023: [https://www.msf.org/lack-action-sees-sharp-rise-sexual-violence-people-transiting-darien-gap-panama](https://www.msf.org/lack-action-sees-sharp-rise-sexual-violence-people-transiting-darien-gap-panama)
Food insecurity remains a critical issue throughout the journey. Nearly half of those surveyed (44%), slightly fewer than in the third quarter, managed to consume only a single meal (37%) or went without food entirely (7%) on the day before the interview. The situation is particularly dire among respondents in Costa Rica, where one in five (24%) endured a whole day without food, followed by Panama (7%). The higher figures in Costa Rica can be attributed to the journey's dynamics, which typically include a 12–14-hour bus ride followed by an additional 12 hours of travel toward the northern border. The extended travel durations, coupled with the time spent at arrivals and stops—often during the night—as well as the lack of public shelters and food programs for people in human mobility, exacerbate the food security situation.
Regarding nationalities, respondents from Venezuela (59%), Colombia (50%), Ecuador (47%), and Haiti (40%), are all showing low levels of food consumption, eating only one meal or none the day prior to the interview while on the route. Meanwhile, Haitians have shown a further deterioration, with an increase to 40% from 31% in the previous quarter. Though inadequate meal consumption among Ecuadorian respondents remains high (47%), it has improved notably since the last quarter (57%).
83% of respondents faced difficulties covering their food needs and resorted to various coping strategies due to a lack of food or insufficient funds to purchase it.
The most prevalent coping strategy is skipping meals (37%); 52% of respondents in Guatemala reported skipping meals, the highest among all the countries of interview.
When asked about their food situation over the previous week, 17% of respondents reported having no difficulties, 8% higher than the previous quarter. Still, more than half (59%) adopted coping mechanisms to face food shortages and/or the lack of financial means to access food. These strategies range from eating cheaper and less preferred foods (24%) to skipping meals or eating less (37%), as well as regularly spending entire days without eating (17%). The latter was reported most in Costa Rica (55%), followed by Panama (41%), Guatemala (19%), Honduras (15%), and Mexico (7%).
In Panama, participants reported that their food supplies were inadequate for the journey, leading to days without food in the jungle or sole reliance on "panela" (solidified sugarcane juice) for sustenance. Meanwhile, focus group discussions in Honduras revealed that participants felt they received better treatment there than in other countries. Additionally, people on the move mentioned receiving support, including water, food, medical care, shelter, and clothing from humanitarian organizations and government agencies.
In Honduras, 34% of respondents reported skipping meals as a negative coping mechanism and 32% reported eating one or no meals the day prior to the interview. 64% of interviewees highlighted access to food for their family as a primary need, highlighting the fact that despite efforts made by institutions, the limited reception conditions in Honduras must be continuously strengthened to address the high prevalence of negative coping strategies and food insecurity.
Food security and protection
An analysis of food security and protection indicators further underscores the connection between the two areas. 30% of respondents who encountered protection incidents also engaged in negative coping mechanisms related to food security, including skipping meals or going an entire day without food. Among women who experienced a protection incident along the route, Costa Rica stood out, with 68% of respondents eating one meal or none the day before the interview. These results mirror those from the second quarter of the year, pointing to the fact that individuals exposed to protection incidents are more prone to face food security challenges and vice versa.
MAIN NEEDS
| Need | Percentage |
|-----------------------------|------------|
| Food for family | 65% |
| Shelter | 39% |
| Clothes and shoes | 30% |
| Healthcare | 27% |
| Information | 16% |
| Internet/telephone | 15% |
| Drinking water | 15% |
| Legal | 9% |
| Food for children | 8% |
| Child care | 6% |
In the Necocli field diary, qualitative data from Colombia captures the economic challenges confronting people on the route, particularly those without shelter on Necocli's streets or beaches. Many face prolonged stays in Necocli due to the lack of financial resources to proceed with their journey. They contend with unsanitary living conditions and limited access to food, which have led to an increase in cases of severe illnesses and malnutrition.
In the countries where quantitative data collection was conducted, the main needs reported by respondents were food (65%), shelter (39%), and clothing and footwear (30%), marking a shift from healthcare being a top need in the previous quarter. Moreover, the surveys as well as focus group discussions in Panama and Honduras revealed a significant demand for more information, particularly as it relates to the Humanitarian Parole Program and the "CBP One" application implemented by the United States Government.
Further analysis of the responses, segmented by the country of interview, showed notable trends. For instance, in Panama, challenges unique to the border crossing conditions through the jungle were reflected in the concerns expressed, with clothing and footwear (56%) being the most cited by respondents who lost their personal belongings in rivers and cliffs, followed by food (33%) and healthcare support (22%), maintaining consistency with the previous quarter's findings.
Costa Rica stood out for reporting the highest levels of need. Among its respondents, 91% highlighted food as a primary concern, representing a 7% increase from the last quarter. Additionally, two-thirds (67%) indicated a need for shelter, and more than half (52%) needed clothing and footwear. In contrast, needs reported in Mexico, apart from food (69%), shelter (39%), and clothing and footwear (25%), call attention to information (21%), which is higher than the regional average.
In the fourth quarter, the results remained similar to those of the third quarter, with a continued preference for the United States. This is consistent with the pattern observed in previous quarters, where the United States has been the dominant intended destination, chosen by 87% of respondents, followed by Mexico with 9%. Venezuelans are the leading nationality intending to make the United States their final destination, accounting for 58%, followed by Hondurans at 15%. Among those selecting Mexico as their final destination, Hondurans represent a significant amount at 37%, followed by Cubans at 19%, and both Salvadorans and Guatemalans at 10%.
The primary motivation for choosing their destination, as stated by a significant 77% of respondents, is the availability of better economic opportunities in the chosen country. Additionally, 25% cited the presence of family members in the destination country.
**Scenarios of alternative intentions**
During the second half of the year, the survey also began to ask respondents about contingency plans and alternative intentions in case they are unable to reach their final destination.
*If it is not possible to reach your intended country of destination, what would you do?*
- Wait until I’m allowed to proceed to country of destination: 69%
- I don’t know: 13%
- Return to country of origin: 7%
- Stay in country of interview: 7%
- Prefer not to answer: 3%
- Return to country of residence: 1%
Remarkably, 69% indicated their willingness to wait until they are permitted to proceed to their intended country. Within this group, 64% were Venezuelans, and 9% were Hondurans. On the contrary, among those considering a return to their country of origin, the majority (59%) were Hondurans, while 20% were Venezuelans. This suggests that Hondurans are more inclined to view returning home as a viable alternative, in contrast to Venezuelans who lean towards waiting in a transit country as their preferred course of action.
What would be the reason(s) for not considering a return to the country of origin or host country?
- Low income: 71%
- Political instability: 36%
- Family or personal reasons: 28%
- Insufficient access to food: 19%
- Other: 7%
- Discrimination: 4%
- Lack of documents: 3%
The primary reasons prompting individuals to delay their journey to their chosen destinations are primarily economic challenges in both their origin and destination countries, representing 71% of the cases—a 5% increase from the previous quarter. Closely following is political instability, mentioned by 36% of those surveyed.
Upon further examination, it is evident that low income is the main factor influencing the decision to relocate among individuals from Venezuela (75%), Honduras (56%), and Cuba (66%). For family or personal reasons, Hondurans indicate the highest percentage at 47%, followed by Colombians (36%) and Venezuelans (25%). 26% of Haitian respondents noted that a lack of food would be a primary reason not to return to their country of origin, the highest among all nationalities.
What would be the reason(s) to return?
- Because I was not able to reach country of destination: 87%
- Support from family members (including family emergencies): 9%
- Economic opportunities: 4%
- Other: 3%
- Cultural ties: 2%
- Improved social conditions: 1%
- Political stability: 0%
Among those considering a return to their country of origin or host country, now at 7%, a decrease of 3% from the previous quarter, a substantial 87% stated they would only pursue this option if unable to reach their intended destination. Furthermore, 9% mentioned their reason as the need to support their family or address family emergencies.
Risks upon return
Risk upon return for self or family
- No: 50%
- Yes: 41%
- Prefer not to answer: 9%
Respondents were asked if they would face any kind of risk if they had to return to their country of origin or host country.
Approximately 2 out of 5 people responded that they would face some kind of risk upon return.
In broad terms, concerns about protection upon return often encompass considerations regarding security and prospects for successful reintegration, especially in situations where the overall condition of the country or the circumstances facing specific individuals and groups, such as unaccompanied and/or separated children, trafficked individuals, survivors of gender-based violence (GBV), members of the LGBTIQ+ community, among others, are still precarious.
The prevalence of risks upon return is particularly high in Mexico, where 67% of the interviewed individuals expressed facing such risks when returning to their country of origin. Among the reasons highlighted are threats, extortion, and/or persecution (41%), and general violence, discrimination, and/or xenophobia (21%).
Extracontinental respondents, as well as those from Central American countries, such as Guatemala and Honduras, have a higher rate of perceived risks upon return than other groups.
These results follow the same pattern as the indicator on reasons for leaving countries of origin or host countries, where it was identified that more than half of the respondents had violence-related reasons for leaving their respective countries, as mentioned in the section above. When considered together, these indicators suggest a high number of individuals in mixed movements who may qualify for international refugee protection. These factors create a complex landscape where safety, security, and reintegration prospects remain fragile. Recognizing and addressing these needs is essential to ensuring the safety and well-being of vulnerable populations in the context of mixed movements.
CARTMEL’s SMITHIES
by Peter Roden
L’Enclume, the now famous Michelin starred restaurant in Cavendish Street, Cartmel, took as its name the French translation of Anvil, the traditional tool of the blacksmiths who had earlier used the premises as their workshop. But how much do we know about the blacksmiths who worked there, and at another historic Smithy in Cartmel village, and their connections with blacksmiths at other Smithies in the surrounding area?
These notes started as an attempt to pull together the stories of Cartmel’s local blacksmiths, drawing on (1) data now available in the recently completed database of census records for this area, (2) some family history research on the families identified from the census records, and (3) some recent oral history interviews, recalling memories of those who worked in the Cavendish Street Smithy.
Subsequently, several friends who saw earlier drafts have provided additional information. Pat Rowland and Barbara Copeland have provided a significant number of additional references and suggestions regarding the historic records, and John Batty, and his discussions with Richard Davis and Jonathan Wood, and Derek Birch, with whom I have discussed the draft, have all contributed additional information. The help from all of them in compiling this article is gratefully acknowledged.
Historically, the Smithy in Cavendish Street was not the only Smithy in Cartmel. The first detailed Ordnance Survey map of the area in 1848 clearly shows another Smithy near Springfield House, on the corner of ‘Back Lane’, known by various other names subsequently, and an un-named street, now known as Priest Lane. That Smithy is later recorded with the house name Fell View, which can still be seen on a cottage there.
The census records rarely give details of postal addresses as we know them today, so it can be difficult to locate specific properties within a village. However, until 1949, when the civil parish of Upper Holker was abolished, the village of Cartmel was divided by the River Eea into two different townships, now known as civil parishes. Cavendish Street, on the West of the river, was in the township of Upper Holker, whilst Aynsome Road, on the East of the river, was in the township of Lower Allithwaite. This separation of the two Smithies in Cartmel, into different townships, is a great help in allocating the Cartmel blacksmiths in census records to their respective Smithies.
The ownership of the Smithy in Cavendish Street is relatively easy to trace for about 120 years from 1838 to the late 1950s, as the families who lived and worked there only changed once during that period. The first was the Wilkinson family and the second was the Swainson family.
Thomas Wilkinson was baptised in Sedbergh on 10 Oct 1813 and on 9 June 1838, he married Dorothy Whitehead at her local church in Kendal. Their marriage certificate gives Thomas’s occupation as a ‘Black & White Smith’, and also indicates that he was already living in Cartmel at the time of his marriage. Over the next 20 years, Thomas and Dorothy had 11 children, all baptised in Cartmel, and Thomas is recorded as a blacksmith in the Upper Holker part of Cartmel in the four censuses of 1841, 1851, 1861 and 1871.
Before any of their sons were old enough to follow in their father’s trade, Thomas had a young journeyman blacksmith living with the family in 1851, named James Lambert, aged 24 and born at Askrigg in Wensleydale c.1827. James Lambert probably left Cartmel in the early 1850s, married a girl named Agnes from Witherslack, and moved to Allithwaite. He is recorded as a blacksmith in Allithwaite in the next four censuses, from 1861 to 1891. James and Agnes had two sons who followed their father into the blacksmith’s trade, both baptised in Cartmel, i.e. William baptised on 5 March 1854 and James Jnr baptised on 5 April 1857. Both these sons were working as blacksmiths with their father at the time of the 1871 census. The family’s address in the Allithwaite censuses is given as the Post Office in 1871 and 1891, and The Square in 1881, and in 1891 father James’ occupation is specifically ‘Blacksmith and Postmaster’. The elder son William Lambert became a brewer, and his address in the censuses from 1881 to 1901 is given as The Brewery in Allithwaite. However, William probably combined his brewing activities with some blacksmith work as in the 1911 census, he gives his occupation as simply a blacksmith, with an address only as Allithwaite. James Lambert who trained at the Cartmel Smithy died aged 64 a few months after the 1891 census. Without digressing further into the Lambert family history, it may be noted that in the 1911 census, the Post Office in Allithwaite was occupied by a third generation of the Lambert blacksmiths, i.e. 22 year old William Lambert, whose 18 year old sister Emma was then the Postmistress, and they had a younger sister, 7 year old Lizzie Lambert living there with them then.
To return to the Wilkinson family of blacksmiths in Cartmel: all of Thomas Wilkinson’s sons who survived infancy followed their father as blacksmiths. The eldest of these was James Wilkinson who was baptised on 9 February 1840. He is recorded as working with his father as a blacksmith in 1861. However, tragedy struck the family in December 1864. James was only aged 24 when he was buried in Cartmel on the 20th of that month, only 9 days after his younger sister Jane, aged 20, who had been buried on the 11th of that month.
Another son, John Wilkinson, baptised on 3 June 1842, is recorded working with father as a blacksmith in both 1861 and 1871. However, he probably left home soon afterwards, and moved across the Kent estuary to Slyne near Hest, north of Lancaster, where he was working as a blacksmith at the times of the censuses in 1881, 1901 and 1911, though he has not yet been located at the time of the 1891 census. In both 1881 and 1901 he was living with his wife Annie, who was born in Ireland, but neither of those censuses mention any children. John was a widower by 1911, and then had his unmarried younger sister Hannah living with him. He probably died aged 72 in the 2nd Qtr of 1914.
The early 1870s seem to have been difficult times for local blacksmiths, and they felt it necessary to publish the following notice in the *Westmorland Gazette* on 16 March 1872:
**NOTICE**
At a meeting held this day, at Cartmel, the BLACKSMITHS of Cartmel, Allithwaite, Flookburgh, Newton, Broughton, Witherslack, Grange, Newby Bridge, Lindale, Crosthwaite, Bowland Bridge, &c., having taken into consideration the great rise in the price of Labour, Iron, Coals, &c., found it necessary to RAISE the PRICE OF HORSE SHOEING and all other work, according to the times.
Cartmel, March 9th 1872.
No doubt Thomas Wilkinson of Cartmel would have been a participant, if not the leader of that meeting, which also shows that almost every village in the Cartmel peninsula had its own local blacksmith at that time. It also shows signs of local co-operation in price fixing, like medieval guilds, which would certainly be banned now by competition regulators!
Thomas Wilkinson himself died aged 66 on 25 Feb 1880, and by 1881, his youngest son George Wilkinson, baptised on 12 March 1856, was working as a blacksmith, with his widowed mother Dorothy being head of the household then. He probably had a 15 year old apprentice named William Rigge working for him then, as William Rigge lived not far away with his widowed mother who kept a sweet shop in Devonshire Square. However, George Wilkinson did not live to enjoy old age, but died aged 28 in the 2nd Qtr of 1884.
At the time of the 1891 census, widow Dorothy Wilkinson was still living in Cavendish Street, and her unmarried daughter Hannah, then aged 39, was the only other person in her household then. Who was actually running the Smithy there then is open to speculation, as discussed below. At the time of that census, another young blacksmith named James Hardwick, aged 20, who gives his employment status as ‘employed’, was living close by, and may have been working there then. However this James Hardwick does not appear again anywhere in the peninsula census records.
The end of all Wilkinson family interest in the Cavendish Street Smithy came in the 3rd quarter of 1898, after about 60 years there, when Dorothy Wilkinson died at the age of 83. It was then in the Swainson family for approximately 60 years, with blacksmiths father and son-in-law.
In the census records of 1901 and 1911, the blacksmith in Cavendish Street was William Stables Swainson, with his wife Hannah and their two daughters Mary Agnes and Florence. In 1901, William also had an 18 year old apprentice blacksmith resident and working for him named Walter Byrom from Blackburn, but no other resident blacksmith in 1911.
William Stables Swainson was born in Coniston in March 1861\(^1\), the eldest son of James Swainson and Agnes Jane Stables, who were married at Coniston the following month, on 13\(^{th}\) April 1861. At the time of their marriage, James was aged just 21, having been born in Colton in early 1840. He was then working as a tailor at Nibthwaite in the parish of Colton, and was the son of Edward Swainson, a farmer. Agnes was aged only 17 when she married James Swainson, having already borne their eldest son. She gives her address as Coniston on their marriage certificate, and she was the daughter of Miles Stable(s), a miller, having been born in Torver in September 1843. James and Agnes had five sons, all of whom became blacksmiths. Their first four sons were all born at Coniston, (i.e. William in 1861, Edward in 1866, Miles or Myles in 1868 and James in 1870), but their youngest son John was born in Cartmel in 1876, thus giving an indication of when the family moved to Cartmel.
At the time of the 1881 census, 20 year old William S Swainson was learning his trade as an apprentice blacksmith with William Hoggarth, who was then the inn keeper and blacksmith at the Lowwood Inn in Haverthwaite\(^2\). Interestingly, William Hoggarth himself may well have worked at the Cavendish Street Smithy in the early 1870s. He was baptised at Cartmel on 23 March 1850, the son of Thomas and Agnes Hoggarth. They lived at Field Broughton, where the family is recorded in the 1851 census, and William and his by then widowed mother were still in Field Broughton for the 1861 census. By the time of the 1871 census, William had become a blacksmith, and was then living with John Farrer, a master blacksmith in Kirkby Ireleth, with whom it seems likely that he had worked as an apprentice and learnt the trade. The next known reference to William Hoggarth comes from William Field’s Log Book\(^3\), in which he records on 23 Jan 1878 that “William Hoggarth, blacksmith, left the house next to Tower House, he having taken the public house at Lowood”. The Field family lived in Tower House in Cavendish Street, Cartmel, so not only would William Hoggarth have been their next door neighbour, but living in Cavendish Street seems a strong suggestion that he was working at the Cavendish Street Smithy for a while before moving to Lowwood, where he subsequently trained William Swainson.
As mentioned above, the parents of William S Swainson had moved to Cartmel between 1871 and 1876, and in 1881, they and their four other sons were at an address given only as ‘Cartmel’. By 1891, William was back with his family at an address given as Broughton Road. William’s occupation then is given as a black and white smith, and his three brothers, Edward, Myles and James were also blacksmiths by then, whilst their youngest brother John was still at school, but later also a blacksmith.
However, it is by no means certain where William and his three blacksmith brothers were working in 1891, as there are two possible scenarios. One scenario is that he and they had already started working at the Cavendish Street Smithy, before or after the death of George Wilkinson in 1884, and whilst it was still in widow Wilkinson’s ownership. All the brothers were still unmarried at that time, so it would have been natural for them to be still living with their parents elsewhere in the town. Only after William had married, and Dorothy Wilkinson had died, was it possible for William to move in and live at the Cavendish Street premises. The other scenario involves William and his brothers working for a while at the Aynsome Road Smithy, before the Cavendish Street premises became available for William after Dorothy Wilkinson’s death. The Aynsome Road Smithy is discussed in more detail below.
---
\(^1\) His birth was registered as William Swainson Stables.
\(^2\) It is curious to find Lowwood Inn in Haverthwaite, in the civil parish of Colton, as the village of Lowwood was just across the River Leven in the civil parish of Upper Holker.
\(^3\) See Research folder on Cartmel Peninsula Local History Society’s website
It may have been out of operation for a while, and it might have been bought by William’s father for his sons. It is tempting to assume that the family’s Broughton Road address would be at the Smithy there, but there are reasons to doubt this assumption, as discussed below.
On 9th May 1891 William Stables Swainson, a 30 year old blacksmith and son of James Swainson, a tailor, was married at Cartmel parish church to Anne (later Hannah) Smith Shaw aged 27, the daughter of Richard Shaw dec’d, a coachman, both parties then living in Cartmel. They had two daughters, Mary Agnes, born 30 September 1893, and Florence, born 4th Qtr 1894. Clearly, William and his family would be looking for a home and business premises of their own, and the Cavendish Street Smithy became available for them before or after the death of Dorothy Wilkinson in 1898. They were there at the time of the 1901 and 1911 censuses, as mentioned previously.
In 1909, the modern technology of motor cars may not have reached Cartmel, but bicycles had certainly done so. The advertisement illustrated below appeared in the *Westmorland Gazette* on 10 July 1909, which clearly mentions W. Swainson of Cartmel as one of the local distributors. Not only did Swainson advertise bicycles in the local newspaper, he also put an enamelled advertisement for them on the barn opposite to his Smithy, which can still be seen there.
Cavendish Street looking South, former Smithy on left, old barn with Raleigh sign on right
Then on 3rd April 1926 at Cartmel parish church, William’s elder daughter, Mary Agnes Swainson aged 33, married William (Billy) Dickinson Watson. Their marriage certificate gives Billy Watson’s occupation as a farmer, (not a blacksmith), and he was then living at Sturdys Farm, Field Broughton. Curiously, his marriage certificate gives Billy’s age as 34, although he was born in Ulverston on 13th June 1890, the son of William and Sarah Watson. Billy’s father, William Watson senior, was also a farmer, and at the time of the 1911 census, young Billy was working as a groom at Birkby Hall.
When Billy’s father-in-law, William Swainson, died in September 1932, Billy took over the business, though he himself was never a blacksmith, even though he described himself as an Agricultural Blacksmith in the 1939 register. That record really means that he was the owner of a blacksmith’s business, rather than being active in the skills of that trade.
Billy’s widowed mother-in-law probably continued to live on site, as the Grange Red Books of 1933 and 1934 record Mrs W. Swainson living in Cavendish Street Cartmel, before they list Billy Watson at the Smithy from 1935. Hannah S Swainson died aged 76 in March 1939.
As owner of the business after his father-in-law’s death, Billy Watson immediately needed to employ a skilled blacksmith, and many of the recent oral history records recall that the active blacksmith in Cavendish Street, from the early 1930s, was always known locally as Timbuck.
Billy Watson is mentioned in several recent oral history interviews. Although he “worked” at the Smithy, he is remembered for putting wooden handles on implements made by Timbuck, or selling bicycles etc., but not actually doing blacksmith’s work.
The real identity of Timbuck was revealed by Derek Birch in a recorded video discussion with Bob Copeland and Howard Martin at L’Enclume entitled ‘Then and Now’, but mostly about its time as a Smithy.⁴ Derek Birch revealed that Timbuck was James (Jim) Hewartson, who was born on 25th April 1903, and subsequently became Derek Birch’s father-in-law.
Derek Birch has also told me that Timbuck served his time as an apprentice at the Newby Bridge Smithy, and the 1939 Register shows that he was then living at Chapel House, Field Broughton. Although that house is only 100 yards or so from the Field Broughton Smithy, (of which more below), Derek says that he never heard any mention of Timbuck ever working at the Field Broughton Smithy, but he has many photographs of his father-in-law working as a blacksmith, and has kindly allowed me to include two of them below.
Timbuck (James Hewartson) working as a Blacksmith
---
⁴ Recording made 19 Sept 2016 for the Cartmel Village Society – see CVS website
The video recording mentioned above includes many memories of the working life of this Smithy, from shoeing horses to putting rims on cart wheels for wheelwrights, and a few examples of Timbuck’s mischievous sense of humour. Howard Martin also mentions that by the late 1950s, it was then owned by James Gibson, the father of one of Howard’s school mates. At what point Billy Watson retired and sold out to Gibson is not mentioned, but it was presumably in the late 1950s.\(^5\) Billy Watson died aged 69 in March 1960.
Gibson’s short ownership of this Smithy is a bit of a mystery. It is not known why he bought it, whether as an investor or as a blacksmith himself, nor why he had sold it by early January 1960.
The new owner was a chartered accountant named W.A.R. (Austen) Denison, who at the time that he purchased this Smithy, was still living in Hereford where he was the County Treasurer. Initially, he worked with a Mr & Mrs Clegg of Grange as his local agents for the management of this Smithy business.\(^6\) Denison resigned his position in Hereford in May 1962, and later that year moved to live in Cartmel at what he called Anvil House, (now converted for accommodation over L’Enclume).
Shortly after Denison’s move to Cartmel, in March 1963, *Cumbria Life* magazine published an article featuring half a dozen people working in Cartmel, one of whom was Tim Hewartson.\(^7\) The section about Tim and the Cartmel Smithy is quoted below.
“Three years ago the smithy at Cartmel was purchased by Mr. Dennison, who is trying to preserve it and to encourage the blacksmith’s craft. In his efforts he has been greatly helped by the Rural Industries Bureau.
“The deeds show that the building goes back for over 230 years. It lies within the precincts of the old Priory, and just across the road, most appropriately, is an old chestnut tree. Horses are still shod, though most of them are those used for riding rather than heavy work on the land, and ornamental ironwork is produced for private individuals and for churches.
“For 32 years the smithy has been the workplace of Tim Hewartson, who recalls that when he served his apprenticeship he received 10s. a week, starting work at 6-30 a.m. and not taking off his leather apron for almost 12 hours. Tim worked for William Swainson for four years; then Bill Watson took over the Cartmel smithy, and "I'd 24 years under him."
“To shoe horses, Tim goes as far as Witherslack Hall and Levens Bridge. He says there are less than half a dozen farm horses left in the valley now, but it is not difficult to recall the days when four or five were led to the smithy every day.
“Some of the toughest work came in winter, when the shoes of horses were sharpened so that they could retain their grip on surfaces made treacherous through ice or snow. To "frost sharp" the shoes of one animal took over an hour.
“Gradually the old hand operations have been partly mechanised, though the skill demanded of a blacksmith remains. In Mr. Dennison’s smithy today are welders, oxy-acetylene cutters and an electric blower for the forge.
“At a time when wheels are mainly factory made and rubber shod, this smithy has requests for the hooping of wooden cart wheels. They belong to the men of Flookburgh and Allithwaite, who seek cockles and fluke on the sands of the Bay. Tim points out a special oven that is used for this work. Mr. Dennison shows visitors a grassy patch by the stream, near Wheelhouse Bridge, where years ago wheels were hooped with the aid of old peat fires. The beck was handy when the time came for quenching.”
\(^5\) The Grange Red Books, in Grange Library, only mention J.R.Gibson, at The Smithy, Cartmel, in 1959. Billy Watson is listed in 1957. The Red Books for 1958 and 1960 are not available in Grange Library.
\(^6\) See Appendix for reference to Denison’s correspondence with Mr & Mrs Clegg.
\(^7\) *Cumbria Life*, March 1963, pages 428-429. Copy found and donated to CPLHS Archives by Rose Clark, and kindly drawn to my attention by Nigel Mills.
About the time of his move to Cartmel, Denison also purchased the Lakeland Rural Industries in Borrowdale, whose principal products were then small stainless steel goods for both domestic and church uses. Denison then used the name Lakeland Rural Industries to promote both of his businesses, in Cartmel and Borrowdale.
One of Denison’s marketing innovations was the production of a Post Card illustrating Timbuck at work in the Smithy. Several examples of this Post Card survive locally, and it is illustrated below.
Denison later deposited extensive records of this, and his other related businesses, in the Kendal Archives. Further details about him, and those archived records, are included as an Appendix to this article. Suffice it to mention here that by Denison’s time, there was very little traditional farrier’s work to be done, and the ‘products’ of the Smithy then were mainly wrought iron work, of which the surviving gates for Kendal parish church, near the Ring o’ Bells public house, are an outstanding example.
The earliest letter in the Kendal Archives\(^8\) from the “Rural Industries Organiser for the County of Lancashire” to Denison, is dated 6\(^{th}\) January 1960, and starts “I understand from J.R.Gibson of Cartmel that you have purchased the Cavendish St. Forge at Cartmel, and that you have agreed to continue employing Tim Hewartson as a blacksmith and farrier, and that you would like to have him trained in the art of Electric Arc Welding”.
Apart from the confirmation of when Denison bought the business, this reference to electric arc welding prompted Derek Birch to recall the reason for Timbuck’s retirement, and the consequent closure of the Smithy business on this site. Shortly before the age of 65, Timbuck got a flash from electric arc welding in his eye, and subsequently had to wear dark glasses. This prompted his reluctant retirement; Derek says that Timbuck boasted he had never taken a full week’s holiday off work in his life, (only occasional half days off for special occasions like funerals). Subsequently, Timbuck (James Hewartson) died in December 1979 aged 76.
---
\(^8\) Kendal Archive Ref WDB 109, box 1 of 2 for acquisition # A1812, in a folder marked “Forge – Rural Industries Bureau”
Timbuck’s retirement in 1968 marked the end of the Smithy business in Cavendish Street, Cartmel. Even whilst this Smithy was still running, Denison and his wife had developed their interests in selling paintings by mainly local artists, both in Borrowdale and in a few shows around Cumbria. After Timbuck’s retirement, they developed “The Anvil Gallery”, in the property across the ginnel South of the Smithy in Cavendish Street. The documents in Kendal Archives include many folders of correspondence with artists up to the late 1970s.
As Denison was born in November 1909, the indications from all his deposits in the Kendal Archives are that when he reached the age of 65, he started to wind down his business interests. Little is known of what, if anything, he did with his property in Cartmel during the 1980s, before he sold it in 1992 to Richard Davis and Jonathan Wood, who then had other business interests in local antiques shops. They are believed to have used the premises for some years as storage space, rather than retail purposes. By 2002 they had decided to convert the Smithy premises into a restaurant, and attracted Simon Rogan, who translated the traditional Anvil name into French, and the premises became L’Enclume.
When Richard & Jonathan bought the premises in 1992, they also bought all its original blacksmith’s fixtures and fittings. Some of these are still on display within L’Enclume, like the Anvil and the Hooping Block, (for putting iron rims on wooden wheels). However, some are also out on loan to the National Trust at Speke Hall, Liverpool. A recent enquiry there (March 2018) confirmed that “the Cartmel tools are perfectly safe and secure”. They are located in the Smithy at Home Farm, Speke, but it was not open to the public at the time of the enquiry.
Returning to the Swainson family, oral history recollections mention Myles Swainson, one of William Swainson’s brothers, so it is worth recording what happened to William’s four brothers, who all followed the blacksmith’s trade at some time.
Brother James was the only one to leave the blacksmith’s trade. At the end of 1898, he was married in Yorkshire to Jane Anne Thirkell, who was born in Leeds. In 1901, they and their young daughter, together with Jane’s younger brother, who was born in Carnforth, were all living in Bradford, with James giving his occupation then as a railway engine stoker.
William’s youngest brother John was married in the 1st Qtr of 1901 to Mary Jane Foxcroft, and shortly afterwards, at the time of the 1901 census, John and his bride were living in Holker. They were there again for the 1911 census, with two children, when John gave his occupation as a blacksmith on a nobleman’s estate. Sadly, he seems to have died a few months later, at the age of 35, in the 3rd Qtr of 1911.
William’s brother Myles was also married in the 1st Qtr of 1901 to Eleanor Townson, and shortly afterwards, at the time of the 1901 census, Myles and his bride were both ‘visitors’ at Melbourne House in Cartmel, when Myles’ occupation is given as a blacksmith working from home on his own account. Where that might have been one can only guess. By 1911, the couple were living in Barngarth, specifically recorded as having no children, and Myles was working as an employed blacksmith, presumably with his elder brother William in Cavendish Street. Recent oral history recollections from one of Myles’ neighbours in the late 1920s or early 1930s mention that at that time Myles was stone deaf but used to work at the Cavendish Street Smithy 6 days a week, and on Sunday mornings he used to walk to Field Broughton to work in the Smithy there. His reasons for going to Field Broughton are easily explained below. Myles died at the age of 63 in June 1932.
The fifth of the five Swainson brothers, Edward, born 8 Nov 1866, married Catherine Hill in the 2nd Qtr of 1895. They moved to the old Smithy at the High Dog Kennels in Field Broughton, where their three children were born: twins Edward and Eveline in the 2nd Qtr of 1896, and daughter Mary Agnes in August 1900. This family were at High Dog Kennels at the times of both the 1901 and 1911 censuses, and at the time of the latter, Edward junior had also become a blacksmith. In 1911, that Smithy was probably also employing another blacksmith, namely, 21 year old Robert Benson who lived with his parents at Well Close
Bank in Field Broughton. Edward junior had given up the blacksmith’s trade by the time of the 1939 Register, and is recorded therein as a Postman in Clitheroe. Edward senior was still living and working at the Field Broughton Smithy at the time of the 1939 Register, then aged 72. Hence it is not difficult to guess why and where Myles Swainson was visiting in Field Broughton around 1930. How long this Smithy continued at Field Broughton is currently unknown, but Edward Swainson senior lived to the age of 85 before he died in December 1951.

There had been a Smithy at High Dog Kennels in Field Broughton since at least 1841. In the 1841 and 1851 censuses, the blacksmith there with his family was John Brockbank, who was born c.1799 in Lower Holker. In 1861, the blacksmith there with his family was John Farrer, aged 35 and from Kendal. In the 1871 census, the address High Dog Kennels is not specifically mentioned, nor has any blacksmith been found in Field Broughton at that time. In the 1881 and 1891 censuses, the blacksmith at High Dog Kennels with his family was John Burns, who was born at Bouth c.1848, before Edward Swainson took over that Smithy as described above. Financial problems may have caused John Burns to vacate the Smithy at Field Broughton: he was listed with a Receiving Order under the Bankruptcy Acts in 1893\(^9\), and subsequently moved to Allithwaite, where he can be found in the 1901 and 1911 censuses.
As mentioned earlier, the Cavendish Street Smithy was not the only Smithy in Cartmel, so what do we know about the Aynsome Road Smithy, and the families associated with it?
In 1841, the Smithy on Aynsome Road, in the Lower Allithwaite part of Cartmel, was being worked by the blacksmith Thomas Bradley, who had been born in Cartmel parish in 1803, and had married Mary Kellet in Cartmel on 15 January 1837. At that time, he probably had a young lad working for him, as Richard Sedgwick, aged about 15, who lived nearby at Town End, is recorded as a blacksmith’s apprentice then.
In 1851, Thomas Bradley and his family were probably still there. Their address in that year’s census is given as Barngarth, as it was then for 15 other families, so that address may refer to an area of Cartmel rather than just one street, as there are few street names mentioned at that time. Besides his wife and his four school children, Thomas’s household then included his father, 79 year old William Bradley, a retired blacksmith born in Liverpool, his younger sister Elizabeth aged 35, and a 14 year old apprentice blacksmith named Isaac Newby from Brow Edge. Thomas’s father William was buried at Cartmel aged 84 on 13 January 1854, and Thomas himself was buried at Cartmel on 17 May 1859 at the age of 56.
---
\(^9\) *Huddersfield Chronicle* 26 Aug 1893 lists “John Burns, Allithwaite, Cartmel, Lancashire, blacksmith” under the heading of Bankruptcy Act Receiving Orders.
What happened next at the Aynsome Road Smithy is unclear and cannot be determined with any certainty from census records. At the time of the 1861 census, Thomas Bradley’s widow Mary was still living in that half of Cartmel, but no specific address is given. All her children had either left home or died by then, and her household consisted of herself, her widowed mother, and three ‘boarders’, but no blacksmiths. There was, however, in this half of Cartmel at this time, an unmarried young blacksmith living in lodgings, 23 year old John Barrow from Kirkby Ireleth, and also an apprentice blacksmith named Richard Waterhouse, aged 15, living with his parents. One can only speculate that in the absence of any other candidates, this blacksmith and this apprentice may have been working at the Aynsome Road Smithy.
The census records for 1871 have no blacksmiths at all recorded in the Lower Allithwaite part of Cartmel, so the Aynsome Road Smithy may not have been operating at that time. It was obviously a difficult time for blacksmiths, as shown by the notice quoted earlier after their meeting in Cartmel on 9th March 1872.
From 1881 to 1911, it seems probable that the Aynsome Road Smithy was being operated by Peter Butler, born c.1853 in Flookburgh and married to Isabella Barrow of Cartmel in 1st Qtr of 1878. Certainly he and his family were there in 1901 and 1911, when the census records identify the house adjacent to the Smithy with its current name of Fell View.
However, in 1881, the family address is given only as Cartmel, and in 1891 as Grange Road. In 1881 Peter Butler’s occupation is given as a blacksmith employing one boy, and that boy is likely to have been the 15 year old blacksmith’s apprentice James Beetham, who was living nearby with his parents.
However, in 1891, when Peter Butler’s address is given as Grange Road, (a street name no longer in use), we find the Swainson family of blacksmiths living in Broughton Road, presumably the old name for the road to Field Broughton, now Aynsome Road, in which we find this Smithy. It may be just a coincidence that the Swainson family were living on the same road as this Smithy? If they were actually operating this Smithy in 1891, then one has to explain where Peter Butler was working in 1881 and 1891, as he was certainly living at Fell View in 1901 and 1911. Moreover, there is an article in the *Westmorland Gazette* of 15th Feb 1890, reporting on the bi-monthly meeting in Cartmel of the Rural Sanitary Authority, which mentions a problem “near to Mr Butler’s Smithy”. If the location of Peter Butler’s Smithy wasn’t at Fell View at that time, could there have been a third site for a Smithy in Cartmel then?
Since completing the draft of this article, Jenny Gray has given the CPLHS a hoard of late 19th century documents from Black Beck Hall, Ayside. They are mostly invoices and receipts relating to the Ellwood family’s business there as carriers. The hoard includes 34 invoices from various blacksmiths, including the two Cartmel Blacksmiths in the 1880s, Peter Butler and William Swainson, from whom examples are illustrated below.
As a postscript, perhaps, to the story of the historic Smithies in Cartmel, it might be added that Cartmel still has a Smithy, at Ivy Cottage on Aynsome Road. It trades under the name M.E.L., and its website (in 2018) says that it has been trading there for over 20 years.
APPENDIX
Documents in Cumbria County Archives at Kendal
deposited by Mr W.A.R. Denison
Introduction
There are eight boxes or bundles of business records in the Cumbria County Archives at Kendal which were deposited by Mr W.A.R. Denison. Denison was the owner of Cartmel’s Cavendish Street Smithy from 1960 until it closed in 1968. The records that he deposited in the archives not only relate to this business, but also include records from his other local business interests. They are arranged in the archives with the following references:
**WDB 109** **Cartmel Forge**
2 Boxes from acquisition A1812, and a bundle of books of time sheets from acquisition # A2078.
**WDB 110** **Lakeland Rural Industries, Borrowdale**
3 Boxes, 2 from acquisition A1812, and 1 from acquisitions A2086 & H9262.
**WDB 111** **The Anvil Gallery, Cartmel**
2 Boxes from acquisition A1812.
The dates on which these records were deposited in the archives were: A1812 on 17 March 1992, A2086 on 20 December 1993, and H9262 on 10 September 2009.
The following notes give some information about Denison, and then summarise the contents of these records in the archives.
**W.A.R. Denison**
William Austen Raymond Denison was born on 5 November 1909 in Rochdale, the son of William H. Denison, who, at the time of the 1911 census, was Headmaster of an Elementary School there. By 1939, Austen had qualified as a Chartered Accountant, and was living with his parents in Folkestone, where he was working for the local council. In 1952 he became the Treasurer of Herefordshire County Council. At the age of 46, in the 2nd Qtr of 1956, he married Marjory Button of Hereford, who was born on 25 December 1909.
As well as being a chartered accountant, (F.C.A.), he was also a fellow of the Institute of Municipal Treasurers and Accountants, (F.I.M.T.A.). It seems highly likely that through that Institute, he would have known Alfred Wainwright, (famed for his walking books on the Lakeland Fells), who was the Borough Treasurer of Kendal from 1948 to 1967. One can only speculate on whether a friendship with Wainwright in any way led to Denison’s purchase of the Cartmel Forge in January 1960, at a time when he was still living and working in Hereford.
In May 1962, he resigned his position in Hereford, and later that year he moved to live at what he called Anvil House in Cartmel. His resignation in Hereford was not without controversy, as the Council gave him a substantial Golden Handshake, which caused some protests when it became public knowledge.\(^{10}\)
About May 1962, Denison also bought a second Lake District business, Lakeland Rural Industries in Borrowdale, and he subsequently used that brand name for his business interests in both Borrowdale and Cartmel.
By the mid to late 1970s, as he passed the age of 65, he seems to have been winding down all his local business interests. He never had any children, and his wife Marjory died at the age of 79 in May 1989. He sold Anvil House in 1992, lived to be 100, and died on 24th October 2010, at the Old Vicarage Residential Home, Allithwaite.
\(^{10}\) See articles in *The Birmingham Post* on 19 Nov 1962, 16 Mar 1963, 22 Nov 1963 and 18 Jan 1964.
WDB 109 Cartmel Forge Archives
Box 1 of 2 with this reference contains 5 large folders, viz.:
- Correspondence with Mr & Mrs Clegg of Elder Cottage, Cart Lane, Grange over Sands, during 1960-62, clearly indicating that they were acting as his local agents whilst he was still living in Hereford. One particular letter of 3 September 1962 mentions a general wage increase in the engineering industry, and agrees new rates for Tim of 4½/11d per hour and £10/16½/4d per week.
- Correspondence with the Rural Industries Bureau, of 35 Camp Road, Wimbledon Common, SW19, mainly from Jan to June 1960 after Denison’s purchase of the Smithy, plus pro forma for estimating the cost of wrought iron work, and correspondence re. new equipment in March/April 1964.
- 3 folders of general Customer Correspondence, for 1960-65, 1966 and 1967-68 all containing some invoices and some designs and quotations for wrought iron work.
Box 2 of 2 with this reference contains:
- 4 folders of correspondence with specific customers, viz.: (1) Sign for Swan & Royal Hotel, Clitheroe, 1960; (2) Allithwaite Church Gates, 1963, with photos & press cuttings; (3) Mrs W Howell's Double Gates, 1964; (4) Cartmel Steeple Chases Ltd. 1964-68.
- 2 Order Books, Jan 1960 to May 1967, and June 1967 to end, the last order being dated 26 April 1968.
- Bag of Time Books 1964-68, the last one being labelled "Final" for 1 Nov 1967 to 23 Apr 1968.
- Large Plastic bag labelled "Designs"
There is also a bundle of 14 Time Sheet Books, 1960-64, most of which have 5 columns headed A G W T C for 5 potentially chargeable workmen. The vast majority of entries are under the column headed T, presumed to be Timbuck.
WDB 110 Lakeland Rural Industries, Borrowdale
The products of Lakeland Rural Industries in Borrowdale were all made from stainless steel. The production summaries show that these were divided into trays, bowls, dishes on feet, other tableware, pendants and brooches. In addition, they produced ornaments and vessels for use in churches.
Box 1 of 3 with this reference contains about 20 folders of customer correspondence for ‘Church Work’, plus an envelope of Church Work Enquiries 1965-74, and some photographs of specimen work.
Box 2 of 3 with this reference contains 12 Order Books from Sept 1962 to April 1974; workmen’s time sheets for 1964-66 being one A4 sheet per man per week; production summaries per workman; photographs of products; copies of an illustrated sales brochure, and a folder for Abbey Horn Ware sold at Borrowdale.
Box 3 of 3 with this reference contains more Church Work customer files, several folders illustrating finished stainless steel products; a folder of price lists; and some notes on ‘Methods of using stainless steel from sheet to finished items’.
WDB 111 Anvil Gallery Archives
Box 1 of 2 with this reference contains numerous small folders or envelopes, some of which relate to stainless steel products from Borrowdale, rather than paintings from Cartmel.
There are two folders of correspondence with artists between 1974 and 1978, and one folder marked ‘Artists no longer supplying’. There are also brochures for Cartmel Art Society 9th, 10th & 11th Annual Exhibitions in 1975, 76 & 77.
There are 20 folders relating to Exhibitions, viz., Ambleside, (9 folders for 9 years 1963-1971); Lakeland Rose Show in Grange, (5 folders for 5 years 1964-68); Royal Lancs Show 1964 & 1965; Bristol Building Design Centre, (re Stainless Steel) 1965; Abbot Hall Exhibitions 1965 & 1974; and Brockhole 1969.
There is also a folder with correspondence relating to lectures given by Denison, viz., (1) RICS Conference @ Lancaster Univ., July 1971; (2) Rotary Club of Workington, May 1973; (3) Torver Women’s Institute, May 1974; (4) Lake District National Park Open Day at Seatoller, Sept 1976.
The contents of Box 2 of 2 with this reference certainly all relate to the selling of paintings, as summarised below.
There are 13 envelopes of correspondence with specific artists, viz.: Mr F. McJannet, 1972-76; Patience Arnold, 1972-76; Monica Berry, 1972-76; Mr D.G. Valentine & Mrs J. Valentine, 1972-76; Mr N.J. Hepworth, 1972-73; Miss Jill Aldersley 1973-76; Mrs D.S.Fringer, 1973; Mrs Sonia Walshaw, 1973; Mr N.J. Harper, 1973-74; J.Wilkinson, 1973-78; W. Geldart, Exhib June 1975; C.M. Unwin, 1975-79; M.H. Pickup, 1976-79.
There are lists of pictures on ‘S or R’ (Sale or Return) for 1974-76, a list of ‘Paintings at Borrowdale’ in 1977, and folders for exhibitions of paintings at Sunderland in April 1975 and the Northern Arts Exhibition in Newcastle in 1976.
Welcome to the first issue of *Boonshoft Rounds*, a newsletter that highlights the activities of the Boonshoft School of Medicine (BSOM) students, residents, staff, faculty and alumni. Through this publication, we hope to keep you informed of the latest happenings at BSOM.
**X Marks the Spot**
In White Hall on Wright State’s campus, medical education of students and residents is integral to the Boonshoft School of Medicine’s (BSOM) mission. During the pandemic, keeping everyone safe has become a primary consideration in the educational experience. Since the beginning of this crisis, BSOM faculty and staff have vigilantly executed careful plans for student safety when the need for face-to-face training proves necessary. For BSOM Foundations of Clinical Practice students, most learning activities are virtual; however, the majority of clinical skills training needs to take place in person. Additionally, examinations are held in person.
Staff, faculty, and training volunteers, led by Assistant Dean Gregory Toussaint, M.D., alongside John Needles, Manager of MedOps; Karen Bertke, Manager of the Skills Assessment Training Center; Amanda Bell, M.D., Vice-Chair for the Doctoring Phase; and Teresa Kohlhepp, Assessment Program Coordinator, have put student health and safety first.
What does this mean?
Since June, individual testing of anyone coming into White Hall has been accomplished via temperature taking, signed and dated paper documentation on health-related questions, ensuring everyone wears a mask, and pre-set spacing. When a full class of 120 students needs to enter White Hall, some days are very labor intensive for staff and faculty to ensure safety. Prior to all scheduled activities, MedOps and Medical Education staff carefully go through the scheduling and spacing of every room to ensure everything is safe and ready for the upcoming
curricular activity. Medical students are required to fill out a health disclaimer prior to coming for their staggered-access scheduled classroom activities. Student temperatures are taken, and masks and social distancing are required.
Yes, X does mark the spot in White Hall! Students stand at marked points to enter the building for the mandated health checks and for entering classrooms and study spaces throughout the building. Six-foot spacing is required, although when learning clinical skills, that may not be possible; therefore, extra precautions are taken with gloving and using face shields. Faculty are available to assess students who answer yes to any of the screening questions, determining when students must return home and undergo further evaluation.
“The safety protocols we’ve mandated are working well thus far to keep our students, staff, and faculty healthy throughout this pandemic,” states Brenda Roman, M.D., Interim Dean and Chair of the Department of Medical Education. “The extremely detailed planning and execution of all curricular events takes a great deal of time and manpower. However, the health and safety of our students will continue to be our priority for the future. We appreciate the cooperation of our students in doing their part to stop the spread of COVID-19.”
Above: From checking temperatures, to social distancing and requiring masks, the Boonshoft School of Medicine is taking precautions to keep everyone safe.
DEPARTMENT OF MEDICAL EDUCATION
In early 2020, the Wright State University Board of Trustees approved the creation of the Department of Medical Education, instead of simply having an Office of Medical Education. This change allows the department to offer curricular options for additional degrees and certificate programs, as well as opportunities for faculty status for positions that require terminal degrees.
In addition to the role of Associate Dean for Medical Education, Dr. Roman now serves as Chair of the Department of Medical Education. To mirror the structure of other departments and provide greater role clarity, two new positions were created. Dr. Amanda Bell, who previously served as Director of Biomedical and Clinical Integration, now serves as Vice-Chair for the Doctoring Phase. In this role, she will work closely with clerkship directors in overseeing the Doctoring Phase. “Already, Dr. Bell’s work as Vice-Chair for Doctoring has proved invaluable for students who can reach out to her for any concerns that they may have during this phase,” states Dr. Roman. Dr. Bell remains as Co-Director of Clinical Medicine. Dr. Irina Overman, previously Director of the Foundations Phase, will now serve as Vice-Chair of Foundations of Clinical Practice. Dr. Roman comments, “Dr. Overman has done a phenomenal job in overseeing the Foundations curriculum, especially when she also serves as a director of two modules.”
Three staff positions have been reclassified as faculty positions. Please congratulate the following faculty on becoming assistant professors in the Department of Medical Education:
Colleen Hayden, Ed.D., Director, Medical Education & Accreditation
Amber Todd, Ph.D., Director of Assessment & Evaluation
Jeanette Manger, Ph.D., Assistant Director of Medical Education Research
**FACULTY CURRICULUM COMMITTEE (FCC) 2020 HIGHLIGHTS**
**January 2020:**
Reviewed and approved the revised grade submission policy to ensure students receive feedback on their clinical medicine OSCEs within 6 weeks; allows for feedback and improvement before next OSCE.
**April 2020:**
April 1st emergency meeting
Reviewed and approved changes to emergency medicine and sub-I requirements for Class of 2020.
Reviewed and approved numerous non-clinical and clinical online electives for students to complete due to COVID-19 pandemic.
**May 2020:**
May 5th emergency meeting
Due to Prometric testing site closures, reviewed and approved change to Step 1 requirement to instead require “passing” of CBSE for Class of 2022 students to move into doctoring phase.
Reviewed and approved changes to advanced doctoring requirements for Class of 2021.
Reviewed and approved retroactive changes to clinical medicine doctoring grading schema.
**June 2020:**
Reviewed and approved COVID-19 “Return to Clerkships” policy and procedure.
Reviewed and approved academic progression plan for Class of 2022 who were still delayed in taking Step 1.
Reviewed and approved revised quartile policy due to COVID-19 impacts on curriculum.
Discussion of physician leadership certificate (now pathway) proposal.
**July 2020:**
Approval of physician leadership pathway proposal; to commence AY2020/21.
Discussed contingency plans for clerkship students who are absent due to COVID-19-related illness or quarantine.
**August 2020:**
Reviewed and approved the diversity in medical education curriculum development elective for advanced doctoring credit.
Reviewed and approved the parental leave elective for advanced doctoring.
Discussion of students self-monitoring when coming into White Hall.
Reviewed and approved changes to the foundations absence policy, with specific language about absences from virtual class sessions.
From coordinating with health care providers in Africa and traveling to New Orleans and New York, to providing virtual care for patients and assisting with testing, Boonshoft School of Medicine students, alumni, and faculty are contributing time and effort, with a focus on helping the community, during the COVID-19 pandemic.
**Medical Students Aid Coronavirus Monitoring at Centers for Disease Control and Prevention**
Two fourth-year students at the Wright State University Boonshoft School of Medicine have aided the effort to monitor the coronavirus at the Centers for Disease Control and Prevention (CDC). Rinki Goswami, of Beavercreek, and Vishal Dasari, of Chennai, India, are working in the Emergency Operations Center (EOC) set up to track the spread of the illness. Read the full story at https://bit.ly/32eVAwt.
**Student Volunteers in New Orleans to Fight Coronavirus**
After finishing his second year at the Wright State University Boonshoft School of Medicine, Kyle Henneke was looking forward to beginning rotations. But they were canceled, and he found himself staying at home trying to help flatten the curve. That is, until he joined a few COVID-19 health care provider groups on Facebook and learned about ways to volunteer in areas hit hard by coronavirus. Read the full story at https://bit.ly/35i3NBb.
**Volunteering Services to Help Busy Providers**
Students at the Wright State University Boonshoft School of Medicine started a coordination effort to aid health care providers in Dayton responding to the coronavirus pandemic. The effort paired physicians and health care providers with medical students available to babysit, dog walk, run errands or assist with eldercare. Read the full story at https://bit.ly/3m88KTR.
**Department of Psychiatry Responds to Pandemic with Virtual Efforts**
The Department of Psychiatry at the Wright State University Boonshoft School of Medicine began the use of telepsychiatry with a grant-funded, statewide resource in 2012, entitled Ohio’s Telepsychiatry Project for Intellectual Disability. It was one of the first in the state and provides patient care to underserved and outlying counties with limited infrastructure and resources. Read the full story at https://bit.ly/32frXLq.
**Alumni Work on Coronavirus Contact Tracing**
When the economy in Ohio was preparing to reopen, new focus was being placed on contact tracing. The push is to keep track of people who have come into contact with those who have contracted the coronavirus. Alumni of the Wright State University Master of Public Health Program are busy at work. Read the full story at https://bit.ly/3m2MP0u.
**Serving Patients at Mount Sinai Beth Israel Hospital**
After attending the Boonshoft School of Medicine, Aaron Patterson, M.D., ’09, specialized in psychiatry. He pioneered a program to provide mental health support to patients and hospital staff during COVID-19. Read the full story at https://bit.ly/32eCOFK.
**Students Assist with COVID-19 Testing**
Boonshoft School of Medicine students assisted with COVID-19 testing at pop-up clinics in June. Read the full story at https://bit.ly/3bHEXwv.
**Medical Students Return to Care for Eswatini ‘Stateside’**
Due to the COVID-19 pandemic, Boonshoft School of Medicine students were not able to travel abroad this past spring or summer. Being “stateside” didn’t stop them from serving the people in Eswatini. Read the full story at https://bit.ly/2R6WTHw.
Supporting Student Success
Below is a summary of what the Boonshoft School of Medicine is doing to support student academic success.
M1 students:
The Boonshoft School of Medicine academic support program continues to facilitate weekly review sessions for the M1 students. Each Saturday morning, students meet online to answer practice questions led by peer leaders modeling best practices for learning and review. Most first-year medical students attend the sessions and collegially discuss the material and support each other throughout their coursework.
Upcoming sessions:
| Origins Part 2 | Date |
|----------------|------------------|
| Review 1 | October 17, 2020 |
| Review 2 | November 1, 2020 |
| Review 3 | November 7, 2020 |
| Review 4 | November 14, 2020 |
| Review 5 | November 21, 2020 |
| Review 6 | December 5, 2020 |
| NBME Review | Date TBD |
M2 students:
In early October, the Step 1 Prep series begins. Each week students will focus on one topic and follow up with a review of that topic with peer leaders each Saturday.
Here is the tentative schedule for the Step Prep Series:
| Step 1 Prep Series | Date |
|---------------------------------------------------------|--------------------|
| Electrocardiogram (EKG) | October 10, 2020 |
| Gastrointestinal (GI) | October 24, 2020 |
| Step 1 and Family Medicine | Week of October 28, 2020 |
| Endocrinology and Reproductive | October 31, 2020 |
| Anatomy | November 7, 2020 |
| Biostats | Date TBD - November 2020 |
| Renal | November 14, 2020 |
| Respiratory | November 21, 2020 |
| Biochemistry | December 6, 2020 |
| Practice exam over winter break | |
| Hematology and Oncology | January 12, 2021 |
| Neurology | January 19, 2021 |
| Psychiatry | January 26, 2021 |
M3 students:
Step 2 CK, Informational
October 23, 2020
FOUNDATIONS
Class of 2024
The Class of 2024 has completed the first block of Origins and is almost through Human Architecture 1, our anatomy course. Our anatomy professors worked hard to ensure that dissection still occurred for the students while maintaining safety during these COVID-19 times. The students are learning clinical skills on Fridays and adding to their medical toolbox as they grow into physicians.
Class of 2023
The Class of 2023 has completed Human Architecture 2, the anatomy course for the second year. Due to COVID-19 the course ran entirely online; however, the students worked hard and overcame these obstacles to continue advancing their medical knowledge. They are presently in the middle of Beginning to End, our endocrine, reproduction, and gastrointestinal disorders course. They are applying the knowledge they are learning and advancing their clinical skills every Friday as they tackle Clinical Medicine and work with standardized patients. This class is actively involved in making changes in our curriculum and advancing equality and inclusivity for all people and students.
Help for student mental health needs
Medical school is a time of growth and challenge. It presents unique stressors, and at times, students find their coping strategies are overwhelmed. Students may be in contact with family and friends less often than previously, and the demands of medical school leave some abandoning healthy practices in order to study extra hours. This is rarely effective and may lead to worsening health.
When medical students have mental health needs, there are several options available to them that are low-cost or free of charge. Both the Psychiatry Resident Psychotherapy Clinic and the Wright State University Counseling & Wellness Center provide opportunities for therapy. Both programs are confidential and separate from the student’s academics. The Counseling & Wellness Center offers individual, family, and group therapy; psychological assessments; and a variety of support groups. More information can be found on their website at www.wright.edu or by calling 937-775-3407. They also have a Crisis Line called Raider Cares: call 833-848-1765 or text “LISTEN” to 741-741.
The Resident Psychotherapy Clinic offers individual and family therapy, and a group designed for medical students called Peak Performance. There is also an option for psychiatric assessment and medication management available if needed. To connect to any of these options in the Department of Psychiatry or to discuss next steps, please contact email@example.com or 937-775-8124.
Here are a few tips for wellness in medical school and good rules for life!
DO!
• Stay in touch with family, friends, people who are not in medical school – they will keep you grounded
• Socialize and talk to people who are in medical school – they understand
• Get 7-8 hours of sleep
• Eat healthy meals
• Exercise
• Do things you enjoy
• Use good coping strategies
• Develop a good schedule, lists, calendars or other things that help you stay organized and on track
• Make time for appointments – if you need to go to your primary care physician, a specialist, the dentist, your therapist, etc., it is okay! Let your site coordinator and attending/resident know ahead of time if possible, or let someone know you need to miss class or leave early. You don’t have to tell them the details. Just don’t miss the same class or the same rotation all the time
DON’T!
• Use alcohol, nicotine, cannabis, opiates, or other substances in excess or as coping mechanisms
• Try to work and study more than your brain/body can handle. Medical school is difficult and demanding, but you are not made to study 14 hours per day or to read a medical textbook for hours on end without breaks
• Neglect your relationships
• Neglect your health
• Give up your hobbies
• Procrastinate
• Think you can catch up on all your sleep on the weekends (You can’t!)
Uncovering the veil of night light changes in times of catastrophe
Vincent Schippers\textsuperscript{1} and Wouter Botzen\textsuperscript{1,2}
\textsuperscript{1}Utrecht University School of Economics; Kriekenpitplein 21-22, 3584EC Utrecht, Netherlands
\textsuperscript{2}Institute for Environmental Studies (IVM), VU Amsterdam; De Boelelaan 1111, 1081 HV Amsterdam, Netherlands
Correspondence: Vincent Schippers (firstname.lastname@example.org)
Abstract. Natural disasters have large social and economic consequences. However, adequate economic and social data to study the subnational economic effects of these negative shocks are typically hard to come by, especially in low-income countries. For this reason, the use of night light data is becoming increasingly popular in studies that aim to estimate the impacts of natural disasters on local economic activity. However, it is often unclear what observed changes in night lights represent exactly. In this paper, we examine how changes in night light emissions following a severe hurricane relate to local population, employment, and income statistics. We do so for the case of Hurricane Katrina, which struck the coastlines of Louisiana and Mississippi in August 2005. Hurricane Katrina is an excellent case for this purpose: it is one of the biggest hurricanes in recent history in terms of human and economic impacts, it made landfall in a country with high-quality sub-national socioeconomic data collection, and it is covered extensively in the academic literature. We find that overall night light changes reflect the general pattern of direct impacts of Katrina as well as indirect impacts and subsequent population and economic recovery. Our results suggest that change in light intensity mostly reflects changes in resident population and the total number of employed people within the affected area, and is less strongly, but positively, related to aggregate income and real GDP.
1 Introduction
Natural disasters have large social and economic consequences around the world. Impacts of natural disasters are projected to rise as a result of a combination of climate change increasing the frequency and/or severity of extreme weather events and continued urbanization in disaster-prone areas (IPCC, 2014). Studying these impacts, however, is not trivial. For many areas where natural disasters have large impacts, adequate data on local population and economic activity are not available. For this reason, there is a growing literature that studies the local effects of natural disasters by making use of changes in local night light intensity (see e.g. Bertinelli and Strobl, 2013; Gillespie et al., 2014; Elliott et al., 2015; Zhao et al., 2018; Kocornik-Mina et al., 2020). The idea is attractive as night light data is available at high levels of spatial detail, is available consistently over time for the whole globe, and does not suffer from inadequate data collection and measurement error relating to the capacity of (national) statistical offices to measure the state of the economy. Night light intensity is used in a wide range of applications, such as a proxy for economic activity (e.g. Hodler and Raschky, 2014; Michalopoulos and Papaioannou, 2013), or as a proxy for population and GDP (Elvidge et al., 1997; Sutton and Costanza, 2002; Ebener et al., 2005; Sutton et al., 2007; Ghosh et al., 2010) or GDP growth (Chen and Nordhaus, 2011; Henderson et al., 2012). In other studies night lights are used to study urbanization (Henderson et al., 2003; Zhang and Seto, 2011; Ma et al., 2012), migration in response to flood risk (Mård et al.,
and population displacement due to violent conflict (Li and Li, 2014; Li et al., 2015). However, few studies examine how night lights and economic activity relate to each other in times of shock, and there is relatively poor understanding of what changes in night light intensity reflect exactly, especially when downturns in lights are considered (Bennett and Smith, 2017).
In this paper, we aim to advance our understanding on this issue by studying in detail the effects of Hurricane Katrina on county-level population, employment, and income for the most heavily affected counties in Mississippi and Louisiana, and then relating these to changes in night light intensity. Hurricane Katrina is one of the biggest hurricanes in recent history in terms of human and economic impacts, located in a country with high-quality sub-national data collection. We exploit this high-quality data by relating local changes in economic activity to changes in night light. Our key goal is to assess to what extent it is possible to capture the regional economic dynamics following damages from a big natural disaster by making use of the annual nighttime lights. We show that immediate damages are captured well by reduction in night light; there is a strong and negative correlation between the degree of housing damage and reduction in light intensity at the county-level. Furthermore, we show that recovery of population, employment and income after Katrina takes years for some of the most heavily affected counties. While not related one-to-one, this dynamic is reflected in a relatively quick recovery of night light intensity in these counties. Our results show that the use of night light data for studying the immediate economic impact of a big natural disaster such as Hurricane Katrina is warranted. Using these data in areas where alternative economic statistics at the desired level of geographical aggregation are absent may therefore allow for studying the effects of shocks on regional economies.
Our paper connects to a number of different literatures about natural disasters, climate change, and their economic impact, as well as strands of literature concerned with economic development. First, our study connects with the literature on the economic consequences of floods and other natural disasters that uses night lights or economic indicators to proxy these consequences. For floods specifically, the most closely related work is Kocornik-Mina et al. (2020), who study the urban impact of large-scale floods in a global sample, using nighttime light intensity as a proxy for local economic activity. The authors find a short-lived negative effect of flooding in the year of the flood, suggesting that economic activity recovers to the pre-flood equilibrium rather quickly. In effect, our case study of Katrina is part of their broader analysis, which we examine in more detail by relating the decline and recovery of night light to economic activity. Moreover, we show that observable reductions in light intensity can persist for multiple years after the disaster. Related to Kocornik-Mina et al. (2020) is the work by Elliott et al. (2015), who similarly find a significant but short-run effect of typhoons on economic activity in cities in coastal China, also proxied by nighttime lights, and Gillespie et al. (2014), who study the impact of the 2004 Indian Ocean tsunami on affected communities in Sumatra, Indonesia.\(^1\)
Second, most economic studies use more traditional indicators of economic activity to study disaster impacts instead of night lights. Strobl (2011) assesses the economic growth impact of hurricanes for US counties and reports a decline in GDP growth in the year of impact of 0.5% on average. Notably, this impact is netted out at the state level within a year, implying that effects are local in nature. Closely related to this is work on the economic growth impacts of hurricanes in Central America and the Caribbean (Strobl, 2012), and in a global sample (Hsiang and Jina, 2014; Berlemann and Wenzel, 2018). Heger and Neumayer (2019) study the long-term economic growth impact of the Indian Ocean tsunami of 2004 for Aceh, using both GDP and annual night lights, and find a positive effect that can be explained by the large aid inflow and coordinated reconstruction efforts. Again, no effect on economic growth is observable at the national level. We also relate to a broad literature that studies the impacts of other natural disasters on economic growth (Noy, 2009; Cavallo et al., 2013; Fomby et al., 2013; Felbermayr and Gröschl, 2014).\(^2\) A critique is that many of these studies have used aggregate national GDP indicators to study the impacts of disasters, which often are local events (Felbermayr et al., 2022; Botzen et al., 2019). We contribute to this literature by combining insights on impacts on economic activity in the affected region from conventional economic statistics with an analysis of changes in night light activity, to assess the value of the latter in studying impacts of natural disasters on local economic activity. Third, our work relates closely to studies that have examined the social and economic impacts of Hurricane Katrina, which we discuss in detail in the next section. These studies analyze the effects of Katrina on neighborhoods in New Orleans (Logan, 2006), on the economic welfare of displaced individuals (Paxson and Rouse, 2008; Groen and Polivka, 2008; Deryugina et al., 2018; Groen et al., 2020), on business survival and recovery (Jarmin and Miranda, 2009; Basker and Miranda, 2018), and on its substantial wider effects on the affected regional economies (Vigdor, 2008; Hallegatte, 2008; Xiao and Nilawar, 2013).
\(^1\)This work is part of a growing literature that studies the local economic impacts of hurricanes and other natural disasters, often making use of nighttime lights as a proxy for local economic activity. Related papers on hurricanes are Bertinelli and Strobl (2013) on the local economic impact of hurricanes in the Caribbean, Mohan and Strobl (2017) on the short-term impact of cyclone Pam in the South Pacific, Del Valle et al. (2018) on cyclone impacts in Guangdong, China, Ishizawa et al. (2019) on hurricane impacts in the Dominican Republic, and Miranda et al. (2020) on windstorm impacts in Central America more generally. Night lights have also been used to study earthquake impacts (Kohiyama et al., 2004; Fan et al., 2019; Nguyen and Noy, 2020), and a combination of disaster types globally (Felbermayr et al., 2022) and for Indonesia and Southeast Asia respectively (Skoufias et al., 2020, 2021).
We incorporate and synthesize the existing empirical evidence in the next section, before turning to the analysis of the effects of Hurricane Katrina on night light intensity in the affected region. Fourth, we relate to a growing literature on the use of nighttime light for empirical analysis of economic growth and development, starting with the seminal contributions by Henderson et al. (2012) and Chen and Nordhaus (2011). Most relevant to our work are the studies with a focus on sub-national development patterns by e.g. Michalopoulos and Papaioannou (2013, 2014), Hodler and Raschky (2014), and Henderson et al. (2017). Ghosh et al. (2013) and Donaldson and Storeygard (2016) provide excellent overviews of the various applications of night lights in this literature. Recent tests of the relation between night lights and GDP at the local level using regional, city-level, and prefecture-level data (e.g. Hodler and Raschky, 2014; Storeygard, 2016; Kocornik-Mina et al., 2020) show a promising correspondence with the lights-to-GDP elasticity established by Henderson et al. (2012).\(^3\) However, for our purposes we are interested in the relationship between night lights and economic activity in the context of a natural shock. We contribute to this discussion by presenting new findings about the relationship between night lights and economic activity in times of shock for the detailed case study of Hurricane Katrina. Finally, and more broadly, our study connects with the literature on estimating the costs of climate change, sea level rise, and the increasing risk from hurricanes and flooding that coastal cities face in the near future (Hallegatte et al., 2013; Aerts et al., 2014; de Ruig et al., 2019). We study in this paper one case of a heavily urbanized coastal region that is exposed to the risk of hurricane landfalls. Global warming and sea level rise are expected to aggravate these risks in many parts of the world (IPCC, 2014).
Understanding the consequences of hurricanes for coastal economies is therefore important for risk management and planning. Since adequate data to study local economic impacts are not available in large parts of the (developing) world, we aim to contribute to this discussion by assessing the extent to which remotely sensed night light can be of use in this context.
\(^2\)For reviews of this literature, see Cavallo et al. (2011); Klomp and Valckx (2014); Botzen et al. (2019).
\(^3\)Note that this literature also has critical contributions that show that the lights-to-GDP elasticity is not necessarily equal across the globe and between different regions within countries. See Bickenbach et al. (2016) and Gibson et al. (2020, 2021) for a discussion. We contribute to this discussion by studying one region in detail and explicitly assessing the relation between light intensity and economic indicators in the context of a large natural disaster.
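The lights-to-GDP elasticity discussed above is commonly obtained from a log-log regression of an economic aggregate on light intensity, with the slope read as the elasticity. The sketch below is a stylized illustration of that idea, not the estimation in any of the cited papers: the data are synthetic, generated with an assumed true elasticity of 0.3, and all variable names are invented for illustration.

```python
# Stylized sketch (assumption: a simple cross-sectional OLS, synthetic data)
# of how a lights-to-GDP elasticity is typically estimated: regress log GDP
# on log night light intensity and interpret the slope as the elasticity.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample of 200 regions; true elasticity set to 0.3.
log_lights = rng.uniform(2.0, 6.0, size=200)
log_gdp = 1.5 + 0.3 * log_lights + rng.normal(0.0, 0.05, size=200)

# OLS with an intercept via least squares.
X = np.column_stack([np.ones_like(log_lights), log_lights])
coef, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
elasticity = coef[1]  # recovers a value close to the assumed 0.3
```

With real data, such regressions add fixed effects and controls; the point here is only the log-log functional form behind the elasticity interpretation.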
2 Direct and economic consequences of Hurricane Katrina
We first summarize the immediate impact of Hurricane Katrina and assess its economic impact on the affected region. We then link these impacts to the effects of Katrina visible from space, by assessing the changes in night light intensity in the affected areas and the recovery in light intensity over the subsequent years, before comparing economic impacts with effects on night light intensity.
2.1 Hurricane Katrina: landfall and economic impacts
On 29 August 2005 Hurricane Katrina made landfall close to New Orleans. Although it had been downgraded from a Category 5 to a strong Category 3 hurricane, it was an exceptionally large storm when it approached the shoreline, with wind speeds up to approximately 200 km per hour (Knabb et al., 2005). The storm killed almost 2,000 people and caused substantial damage of $125 billion in total due to winds, extreme precipitation, and major storm surge flooding (National Hurricane Center, 2018). A large part of these damages occurred in New Orleans, which experienced massive flooding of about 80% of its land (Pistrika and Jonkman, 2010). Several levees that were meant to protect the city of New Orleans – which is situated largely below sea level – were overtopped or breached by the storm surge (see Figure 1). Major unanticipated flooding occurred especially in Orleans Parish and St. Bernard Parish. These areas were inundated for a long time, as it took 43 days until all flood waters were removed from the city (Knabb et al., 2005). The distribution of impacts across the City of New Orleans reflects a clear pattern of segregation that was present long before Katrina struck. The parts of the city that proved most vulnerable (see Figure 1) were majority-black, low-income neighborhoods, and recovery was also slowest in these areas (Logan, 2006). Other Parishes were mainly affected by wind and by less severe flooding of shorter duration, for which warnings were issued. As a consequence, more housing units were destroyed in the inner city than in these outer Parishes (Vigdor, 2008). Some areas were never rebuilt. Wider devastating effects were recorded on the south coasts of Louisiana and Mississippi, where in some counties well over half of the residential housing stock was severely or completely damaged.
Hurricane Katrina had large impacts on the population and economic activity in New Orleans that differ between its Parishes and vary over time. Orleans and St. Bernard Parishes, the Parishes that experienced the most severe flooding, saw severe population declines in the roughly 2 years after Katrina. The short-term population decline was even more severe: within a week the population fell from more than 400,000 to almost zero as people evacuated the city. About half of the evacuees had returned 2 years later, after which the population more or less stabilized until mid-2008 (Vigdor, 2008). Deryugina et al. (2018) estimated that a third of the evacuees from New Orleans still had not returned by 2013. Katrina reinforced a trend of an already shrinking population, which may explain why the population has not fully recovered.
Already pre-Katrina the city was experiencing continued out-migration due to a lack of economic opportunities, which especially applied to the central city (Vigdor, 2008). Economic activity further deteriorated after Katrina, which is reflected in lower employment: private sector employment declined by approximately 70,000 jobs in the New Orleans metropolitan area. The most severe decline in employment is observed in services-oriented sectors, which lost part of their customer base due to the population decline. Even though some positive employment growth occurred in the construction sector, this did not offset declines of well over 10 to 20 percent in most other industries, ranging from business and trade to state and local government services (Vigdor, 2008). The overall loss in employment indicates that economic activity declined, but this does not necessarily mean that income declined as well. Perhaps surprisingly, the decline in income is only roughly half that of population and employment, mirroring the unequal effect that the hurricane had on different income groups. The low-lying and predominantly poorer and black neighborhoods of New Orleans were hit hardest (Logan, 2006).\footnote{The worst-affected neighborhoods had substantially higher numbers of renters, households below the poverty line, and unemployed residents compared to undamaged communities (Logan, 2006).} It was the low-income and primarily African American
former residents who in large numbers were unable to return to the city after the disaster (see e.g. Paxson and Rouse, 2008). Groen and Polivka (2008) describe that evacuees suffered substantially in terms of labor market outcomes in the year after Katrina, although on average these effects diminished over time. Moreover, evacuees who did not return to New Orleans had worse labor market outcomes in the short run than those who did return, part of which is explained by individual and family characteristics also discussed by Logan (2006) and Vigdor (2008). The long-run development of household income of those who lived in New Orleans during Katrina has been analyzed by Deryugina et al. (2018) using tax return data. They find that labor income declined by $2,000 shortly after Katrina and by $2,300 in 2006, compared with similar households who lived outside of New Orleans when Katrina occurred, mirroring the findings of Groen and Polivka (2008). However, this income decline disappeared in 2008, when incomes of Katrina victims were $1,300 higher (Deryugina et al., 2018). Explanations for this result are that wages in New Orleans increased in the years after Katrina to compensate for local price rises, especially for housing, which was in short supply, and that evacuees moved to areas with improved job opportunities and higher wages. In addition, a strengthening local labor market with relatively scarce labor supply put further upward pressure on relative wages (Groen et al., 2020). Focusing on business establishments rather than individuals, Basker and Miranda (2018) find very low survival rates for businesses that incurred physical damage from Katrina, especially for smaller and less productive establishments. Xiao and Nilawar (2013) focus on the regional impacts of the disaster and observe positive spillover effects on income and employment growth from heavily affected counties to their surrounding counties.
This pattern suggests the presence of spatial demand shifts away from the core affected area into neighboring less affected counties. All in all, the social and economic impact of Katrina was enormous.
2.1.1 Visible impacts from space
A first analysis shows that the devastating impacts of Katrina are visible even from space. We collect the DMSP annual average stable night light composites provided by the National Oceanic and Atmospheric Administration, and plot average annual night light intensity for the city of New Orleans in Figure 2. The data come at a resolution of 30 arc seconds (roughly 1 km$^2$ at the equator), and intensity is given in digital numbers ranging from DN0 to DN63, reflecting dark to very bright respectively. Even though New Orleans is a densely urbanized location where the brightness of lights is as high as the satellite can record, city lights fell drastically in many parts of the city as a result of the flooding and wind damage caused by Katrina. In the eastern part of the city, as well as in its eastern suburbs (Chalmette), night light intensity almost halved, reflecting the severity of flooding in that part of the city. While some recovery is apparent in 2006, impacts, especially in the eastern part of the city, remain visible even in the raw light data. Next, we zoom out and assess direct impacts along the coastline of Louisiana and Mississippi. We collect damage figures from the U.S. Department of Housing and Urban Development (2006), which reports damage assessments for occupied housing units based on FEMA’s data on Individual Assistance Registrants and Small
\footnote{This is reminiscent of the out-migration of black population after the Great Mississippi Flood in 1927 reported by Hornbeck and Naidu (2014).}
\footnote{Note that while the night light data are provided in a resolution of 30 arc seconds, the sensor resolution is much coarser and represents a ground footprint at nadir of roughly 25 square kilometers (Elvidge et al., 2013). For this reason, we do not focus on pixel-level outcomes in this study, but rather use the total sum of light per year at the county level.}
Business Administration Disaster Loan Applications. Damage to housing units is divided into three categories: minor damage (<$5,200), major damage ($5,200-$30,000), and severe damage (>$30,000). Housing damage of category major and severe as a percentage of total occupied housing units by county is reported below in Figure 3.\(^7\) Damages were extremely high at over 50% in Plaquemines Parish (LA) and Orleans Parish (LA), 70% in Hancock County (MS), and close to 80% in St. Bernard Parish (LA). Four other counties have damages close to or over 20% of their housing stock: Jackson County (MS), Harrison County (MS), St. Tammany Parish (LA), and Jefferson Parish (LA). In our main analysis, we focus on these eight most severely affected counties. Our main interest is the extent to which we can capture the regional economic dynamics following these damages by making use of the annual nighttime lights. To do so, we start with a simple descriptive analysis of the association between housing damage and light intensity between 2004 and 2005. We first plot changes in the total sum of light by county on the same map (see Figure 3 below), and find a pattern that is strikingly similar to that of the housing damage map. Indeed, an (unconditional) correlation plot reveals the same pattern, with a correlation of -0.60 that is significant at 1% (see Figure 4 below). The immediate impact of Hurricane Katrina is thus evidently captured quite well in the changes in night light intensity.
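The unconditional correlation exercise described above can be sketched in a few lines. The county-level figures below are made-up illustrations, not the actual HUD or DMSP values; only the computation, not the numbers, mirrors the paper.

```python
import numpy as np

# Hypothetical inputs for eight counties (NOT the actual HUD/DMSP figures):
# the share of occupied housing units with major/severe damage, and the 2005
# county sum of night light indexed to 2004 = 100.
damage_share = np.array([0.78, 0.70, 0.55, 0.52, 0.33, 0.25, 0.25, 0.20])
light_index_2005 = np.array([70.0, 74.0, 72.0, 78.0, 92.0, 95.0, 97.0, 96.0])

# Unconditional (pairwise Pearson) correlation between housing damage and
# the immediate change in night light intensity, as in the Figure 4 scatter.
r = np.corrcoef(damage_share, light_index_2005)[0, 1]
```

With the illustrative numbers above, heavier damage lines up with larger light reductions, so `r` comes out strongly negative, in the spirit of the -0.60 reported in the text.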
2.2 Regional impacts and recovery in night lights
We can further illustrate the reductions in light intensity by taking a closer look at the night light images for the affected region at large. However, two features of the night light data make comparison over space and across time challenging. The first issue is that the DMSP annual composite data is known for its problematic intertemporal and between-satellite measurement differences, due to varying gain settings of the sensor over time and ageing of the satellites (for a detailed discussion see Elvidge et al., 2009b, 2014). This makes it difficult to compare night light intensity within an area over time. In order to facilitate cross-time comparison, we calibrate the light composites by making use of the Elvidge et al. (2014) invariant area calibration method. The calibration exercise is based on a reference image for an area where true light intensity remains approximately unchanged throughout the study period, which then allows separating true changes in light intensity from pure satellite measurement error. In Appendix B, we discuss this calibration in detail and also propose alternative methods of adjusting the data: notably an alternative calibration by Zhang et al. (2016) and an econometric fixed effects approach more customary in economics (Henderson et al., 2012). Out of these options, the calibration by Elvidge et al. (2014) performs best for our purposes. In all main results that follow, we therefore use calibrated night light images following the methodology of Elvidge et al. (2014). We test our results for robustness with the alternative calibration proposed by Zhang et al. (2016) and by making use of an econometric panel fixed-effects correction proposed by Henderson et al. (2012) in Appendix A. Our main results are very robust to these alternative correction methods. 
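To make the invariant-area logic concrete, the following is a minimal sketch of a second-order calibration in the spirit of Elvidge et al. (2014). The reference-region pixels are synthetic stand-ins, and the exact functional form and reference image used in the paper may differ.

```python
import numpy as np

# Synthetic stand-in for pixels from an invariant area: a region whose true
# light intensity is assumed stable across the study period.
rng = np.random.default_rng(0)
dn_reference = rng.uniform(5, 60, size=500)                  # reference satellite-year
dn_target = 0.9 * dn_reference + 2 + rng.normal(0, 1, 500)   # drifted sensor reading

# Fit DN_ref ~ c0 + c1*DN + c2*DN^2 on the invariant area, so that the
# target satellite-year is mapped onto the reference image's scale.
c2, c1, c0 = np.polyfit(dn_target, dn_reference, deg=2)

# Apply the fitted mapping to a full (here: toy) image, clipping to the
# valid DMSP digital-number range of 0-63.
image = rng.uniform(0, 63, size=(10, 10))
calibrated = np.clip(c0 + c1 * image + c2 * image**2, 0, 63)
```

The key design choice is that the mapping is estimated only on pixels where true brightness is believed unchanged, so any systematic shift there is attributed to the sensor rather than to the economy.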
A second issue is that of top-coding in the DMSP annual night light composites: an upper limit to the DMSP-OLS sensor results in saturation of recorded light intensity at DN63 (Small et al., 2005). This implies that any light intensity above this saturation threshold is not captured in the data. As a result, predominantly bright
\(^7\)The distribution of damages by county is also reported in Figure A1 of Appendix A.
Figure 2. Night lights for the City of New Orleans before and after Katrina. Excerpt of the Dartmouth Flood Observatory flood map reported in Figure 1 as reference area (top left). Night lights as observed from space by DMSP-OLS, raw uncorrected data from satellite F15. Brighter areas are indicated in green, darker areas in purple. Much of the city was at the maximum brightness of DN63 in 2004 (top right) but fell below this threshold in 2005 (bottom left). Note how especially the eastern part of the city dims and has only partly recovered by 2006 (bottom right). As will be discussed below, it took almost a decade for light levels in these neighborhoods to recover to their old intensities.
urban centers are top-coded, as is also the case for the city of New Orleans.\textsuperscript{8} This is problematic for several reasons, but specifically causes problems in our case when assessing decreases in night light intensity as a result of Katrina for a high-income area with bright urban centers such as New Orleans: true decreases in night light intensity that take place above this saturation threshold may be obscured in these pixels.\textsuperscript{9} We therefore investigate the importance of top-coding for our results in this section.
\textsuperscript{8}Bluhm and Krause (2018) propose a method to impute true light values for top-coded pixels by assuming a Pareto distribution on top lights. Although this approach may be of great value to the general literature that studies economic growth and the spatial distribution of economic activity, we cannot make use of any imputed measures as we study a shock.
\textsuperscript{9}The problem is much less severe in low and middle-income countries. There the share of top-coded pixels is close to zero. See Felbermayr et al. (2022) and Kocornik-Mina et al. (2020) for a discussion.
Figure 3. Housing damage and night lights in 2005. Left: Percentage of occupied housing units with major or severe damage from Hurricane Katrina. Own calculations, based on U.S. Department of Housing and Urban Development (2006) data from FEMA Individual Assistance Registrants and Small Business Administration Disaster Loan Applications. Counties with major/severe damage of 15% or more are labelled in the map. Damages for the counties in Western Louisiana (most notably Cameron, Vermillion and Calcasieu) are related to hurricane Rita, which made landfall at the coastline of Texas later in 2005. These are not related to the impact of Katrina, but show a similar pattern of damages and night light intensity reductions. Right: Immediate night light reduction by county (2005 w.r.t. 2004). Based on own calculations. Sum of night light based on calibrated light series using the Elvidge et al. (2014) method, discussed in detail in Section 2.2 below. Colour-coding based on the standard deviation method (see Appendix Figure A2 for the distribution of night light changes)
The distribution of night light intensity values in the study region is presented in Figure 5. Clearly the majority of New Orleans city is top-coded, with only its edges falling below the saturation threshold prior to Katrina.\footnote{There is a third issue with the DMSP data that revolves around overflow, or otherwise referred to as blooming (Bennett and Smith, 2017; Gibson et al., 2020, 2021). Overflow, related to geolocation errors in the DMSP data, results in light intensity being recorded slightly away from its point source, such that urban areas have a larger extent of lit pixels than actual built-up land. This is an issue particularly in studies that use DMSP night light data at high spatial detail, up to the pixel level of the data (e.g. Bertinelli and Strobl, 2013; Kocornik-Mina et al., 2020). Moreover, local economic activity arguably does not reside on square kilometers, but rather in larger economic and administrative (spatial) units. In order to be able to draw a parallel between measured economic activity and night lights, we therefore aggregate night light intensity to the sum of light at the county level. As such, the issue of blooming and geolocation errors is of limited concern in our context.} Note how similar top-coding is present along the urbanized coastlines of Harrison and Jackson County. Of course, this issue is not unique to our particular study area, but is true more generally for high-income countries like the United States. Taking the substantial top-coding in the study area as a matter of fact, we turn to assessing changes in light intensity after Katrina.
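A minimal sketch of how the extent of top-coding can be quantified: the DMSP-OLS sensor saturates at DN63, so any pixel recorded at 63 may hide a higher true intensity. The toy array below stands in for one county's pixels; a real analysis would mask the annual composite with county boundaries before aggregating.

```python
import numpy as np

# Toy digital numbers for a county's pixels (illustrative values only).
county_pixels = np.array([63, 63, 61, 58, 63, 40, 12, 63, 63, 55])

# Share of saturated (top-coded) pixels: these are lower bounds on true light.
top_coded_share = np.mean(county_pixels == 63)

# County-level aggregate used throughout the paper: the total sum of light.
sum_of_light = county_pixels.sum()
```

Here half the pixels sit at the DN63 ceiling, so any further brightening in those pixels would be invisible, while a dimming shock like Katrina can pull them below the threshold and become measurable.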
Two panels of images now follow to compare the changes over time in 2005 and 2006. First, the left panel of Figure 6 gives the distribution of night light intensity across the study region for 2005. The right panel then plots absolute decrease in night light intensity (the digital number for 2004 subtracted from 2005 for each pixel). Figure 7 does the same, but then for 2006 (see below). Focusing first on New Orleans, the immediate effects of Katrina become apparent in the eastern part of the city.
Figure 4. Correlation plot of housing damage and night light reduction in 2005. Housing damage in 2005 for Louisiana and Mississippi from the U.S. Department of Housing and Urban Development (2006). Night time lights for 2005 are calibrated using the Elvidge et al. (2014) method, discussed in detail in Section 2.2 below, indexed to 2004=100. Note that the damage and associated night light reduction for Cameron Parish (Louisiana) is associated with hurricane Rita that made landfall at the coastline of Texas in 2005. We do not focus on this particular example in the remainder of the paper, as our focus is on hurricane Katrina.
and in the suburbs of Chalmette, as we saw before even in the raw data in Figure 2. The reduction in light intensity is most severe in the northeastern tip of the city, with light reductions of 30 up to 50 points, translating into reductions of well over 50 percent. Moreover, notable reductions occur in previously top-coded parts of the city. While we cannot exclude the possibility that the true decrease in light intensity is even stronger, here too reductions run well over 10 percentage points. Note that in the west of the city hardly any change is detected, which is very much in line with the geographical spread of flooding (see Figure 1).
Two other main areas that suffer heavy light reduction can be clearly identified from these figures. First, Plaquemines Parish has a long inhabited strip along the Mississippi River ending at the town of Venice, Louisiana, which suffered enormous damage from Hurricane Katrina. Light reductions are evident along the entire river, with the highest reduction located in Venice. Note that no top-coding was present in this area in 2004. The second area is Bay St. Louis, Mississippi, and the coastline along Harrison County. Major light reductions are visible in all urban zones around the bay, notably in Waveland, Diamondhead, and Pass Christian. Reductions in the order of 10-20 points are also visible further along the coastline in Long Beach and Gulfport. Again, no top-coding was present here in 2004.\footnote{These reductions match closely with the damage maps from FEMA for this area, described in detail in Basker and Miranda (2018). Extensive damage along the coastline is reflected by large drops in light intensity, while milder reductions in light intensity are matched by mild damage from the FEMA maps.}
Figure 5. Night light intensity in the study region prior to Katrina, 2004. Based on own calculations using satellite F15, corrected with the Elvidge et al. (2014) calibration method. Top-coded pixels are indicated in orange. County names in white.
Next we turn to the year 2006, depicted in Figure 7. A first observation is that the worst reductions in night light have largely disappeared from the map: reductions of over 20 points – compared to 2004 – are rare in 2006. However, the eastern part of New Orleans remains depressed, notably also along the Mississippi River near Chalmette. While a substantial part of the city returns to being top-coded, this is clearly not the case for the north-eastern neighborhoods of the city. This is true even for a large strip that was top-coded in 2004. In a similar vein, the strip along the Mississippi River still shows depressed night light values all the way to the town of Venice. There are signs of recovery around St. Louis Bay, but light intensity is still 10 to 20 points lower in many parts of the metropolitan area around the bay. Top-coding thus mainly affects the changes that we observe in the city of New Orleans, thereby affecting the observed changes in light intensity for Orleans, St. Bernard, and Plaquemines Parishes – which, as will be discussed below, are the counties for which we observe permanent reductions in population and employment. This means that we have to interpret our comparison of changes in night light intensity to changes in economic indicators for these areas with care. It is likely that the observed reductions in night lights are an underestimate of the true effect on night light intensity. However, even with this caveat in mind, the overall patterns of direct impact and recovery in terms of night light changes are closely in line with our expectations, based on the geographical spread of flooding and on the impact numbers that we know from previous studies.
This is not the end of the story. Our analyses in the subsequent section focus on the following two issues: (1) the extent to which this reduction in night light intensity corresponds to reductions in economic activity, as captured through county level income, employment, population, and GDP, and (2) the extent to which recovery of night light intensity over time corresponds
As with New Orleans city, we find close correspondence between flood zones, property damage, and night light reduction. For a detailed discussion on the FEMA damage maps, see Jarmin and Miranda (2009).
Figure 6. Absolute change in night light intensity from 2004 to 2005. Based on own calculations using satellite F15, corrected with the Elvidge et al. (2014) calibration method. *Left*: Night light intensity in the study region in the year of Katrina, 2005. Top-coded pixels are indicated in orange. *Right*: Absolute difference of pixel DN value between 2004 and 2005.
Figure 7. Absolute change in night light intensity from 2004 to 2006. Based on own calculations using satellite F15, corrected with the Elvidge et al. (2014) calibration method. *Left*: Night light intensity in the study region in the year after Katrina, 2006. Top-coded pixels are indicated in orange. *Right*: Absolute difference of pixel DN value between 2004 and 2006.
to recovery in these economic indicators. Since the impacts are clearly largest in the defined core group of 8 coastal counties, we collect for these counties annual data on their economic indicators and assess the longer-run impacts of Katrina on their economies. We then compare these developments to changes in night light intensity over time.
3 Relating night light changes to economic indicators
The economic impact of hurricane Katrina on the county economies along the coast becomes evident from the graphs in Figure 8 (see below), which plot population, aggregate employment and income, real GDP, and night light intensity by county for the years 2000-2018.\(^{12}\) To allow comparison of impacts with recovery over time, we standardize the series of each county to their respective levels in 2004 (2004=1). The graphs are sorted by normalized housing damage, expressed as the percentage of total occupied housing units with major or severe damage. Some notes are warranted before discussing the graphs. First, the economic data collected from the Bureau of Economic Analysis are aggregates for calendar years. Hurricane Katrina made landfall in August of 2005 and is therefore only captured in the final months of 2005. The majority of losses from the hurricane are therefore captured in the records for 2006. We stress that this includes any short-term recovery as well, implying that immediate losses in the first weeks after the hurricane may be partly offset by recovery in subsequent months. Second, a similar notion is important when assessing the loss in night light intensity for the counties in 2005 with respect to 2004. As the DMSP night light composites are annual averages, only the months September-December are affected by Katrina’s impact – i.e. only one-third of the year. This implies that the reduction in night light in the months directly after the impact may be considerably larger than the presented figures. Third, population figures come from the Census Bureau midyear population estimates. As these are assessed midyear, the population effect of Katrina is only captured in 2006. As such, all reported figures represent a lower bound of the true short-run effects of Katrina. We now turn to assessing the impact of Katrina on the worst-affected counties. We first point to some general observations.
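The normalization used for Figure 8 can be sketched as follows; the population numbers below are illustrative only, not the Census Bureau estimates.

```python
import pandas as pd

# Illustrative annual county population (NOT actual Census Bureau data).
population = pd.Series({2003: 66_000, 2004: 65_000, 2005: 64_000,
                        2006: 16_000, 2007: 34_000})

# Index each county series to its own 2004 level (2004 = 1), so that the
# size of the impact and the speed of recovery are comparable across
# counties of very different absolute size.
indexed = population / population.loc[2004]
```

The same division by the 2004 value would be applied to employment, income, real GDP, and the county sum of light before plotting them together.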
The general patterns are clear and reassuring: reductions in night light intensity are clearly strongest for the most affected counties of St. Bernard, Hancock, Plaquemines, and Orleans, as was also shown in Figures 6 and 7. All four counties experience major or severe damage to housing units of over 50 percent, which is associated with reductions in light intensity of 20 to 30 percent. These reductions are clearly in line with large losses in population. In contrast, the bottom four counties in Figure 8 experience smaller housing damage of 20 to 35 percent, and experience much smaller population losses. Harrison, Jackson, St. Tammany, and Jefferson experience smaller economic impacts in comparison to the top four counties in Figure 8, and in line with these patterns the reduction in night light intensity is smaller at 3 to 13 percent.
3.1 Population changes and night lights
To guide the discussion, we now separate the counties into three groups, based on population effects. We first discuss the relation between population effects and changes in night light intensity, before turning to the other economic indicators. The
\(^{12}\)The DMSP night light series run up to 2012 only, following the calibration results of Elvidge et al. (2014). Note that the annual stable night light composites from the DMSP-OLS instrument were discontinued after 2013.
three groups are (1) permanent reduction in population, (2) temporary reduction in population, and (3) no substantial change. The first group consists of the three most-affected counties in Louisiana – St. Bernard, Plaquemines, and Orleans Parish – plus the less-affected county of Jefferson. The former three experience population losses in 2006 of 76%, 24%, and 53% respectively. This population loss is recovered in part, but population levels off at 80% of pre-Katrina levels by 2018 for Plaquemines and Orleans, and at 65% in the case of St. Bernard. In tandem with this development, night light intensity drops by 20 to 30 percent in 2005 and remains depressed below pre-Katrina levels thereafter. However, the decline in night light intensity does not appear to be strictly proportional to population loss; while night light intensity does not decline by more than 30 percent, population losses for St. Bernard and Orleans are well over twice their loss in light. Strikingly, the recovery paths of night light and population for these two counties do show remarkable similarity. Growth rates are comparable in the first years following Katrina, which leads St. Bernard to recover its night light intensity to pre-Katrina levels by 2008, leveling off after that, even though St. Bernard is still 60 and 40 percent below its pre-Katrina population level in 2008 and 2012 respectively. In Orleans, by contrast, night light intensity remains permanently depressed at roughly 10 percent below pre-Katrina levels. The third county, Plaquemines, experiences an immediate and also permanent reduction of roughly 20% of its population. Night light intensity declines by roughly 30 percent, but shows fast growth in 2006 and 2007. After that, light levels remain permanently depressed slightly below pre-Katrina levels. The fourth county with a permanent reduction in population – albeit less severe at around 7% – is Jefferson, Louisiana.
Night light intensity falls by roughly 5% in 2005, and remains permanently depressed until the end of the series in 2012.
The second group consists of counties that experience smaller and only temporary reductions in population: Hancock, Harrison, and Jackson. Population reductions are 14%, 11%, and 3%, matched by reductions in night light intensity of 20%, 13%, and 8% respectively. For all three counties, night light intensity recovers to or overshoots pre-Katrina levels by the next year, and remains above pre-Katrina levels in subsequent years.
The third classification applies only to St. Tammany, which experiences no population loss at all, even though 25 percent of its occupied housing units had major or severe damage. Population was growing steadily before Katrina hit in 2005, at an average rate of 2.5% between 2000 and 2005, compared to 2.7% in 2006. However, the growth rate declines substantially in the years after. In line with the lack of any apparent immediate population effect, there is no change in light intensity with respect to 2004. Population growth seems unrelated to light growth before Katrina, while the three years after Katrina are associated with both positive population and light growth. However, while population continues to grow at roughly 1% per year after 2008, night light intensity remains constant at roughly 12 percent above pre-Katrina levels.
As a preliminary conclusion, the effects of Katrina on counties’ night light intensity corresponds with their respective changes in population, although more so qualitatively than quantitatively. Reductions in light intensity are roughly a third at maximum, whereas population losses were over twofold in some counties. However, recovery patterns in population numbers closely match those of recovery in light intensity.
Figure 8. Night light and economic indicators following Katrina for the 8 most-affected counties. Based on own calculations. All variables are indexed with 2004=1. Aggregate employment, income, population, and real GDP data come from the U.S. Bureau of Economic Analysis (2020). Night lights are calibrated using the Zhang et al. (2016) method.
3.2 Other indicators: employment, income, and GDP
We now extend the discussion to include the effects of Katrina on economic activity in the counties, as reflected by aggregate employment, income, and GDP. A first observation is that in the first group of counties the loss in employment is considerably smaller than that of total population. Nonetheless, employment losses overall are roughly proportional to losses in total population, and are therefore also closely related to changes in night light intensity.\(^{13}\) Setting aside the spike in income that all counties experience in 2007, related to the massive federal recovery assistance funds disbursed in that year (Xiao and Nilawar, 2013), aggregate income changes in relation to Katrina are more heterogeneous. For St. Bernard and Orleans Parish, income changes follow declines in population and employment closely. The six other counties instead experience a slight increase in aggregate income of up to 10% relative to 2004. In both Hancock and Plaquemines, aggregate income remains 10 to 15 percent above 2004 levels for the entire duration of the study period, even though both counties lost a substantial proportion of their population. In both cases, the increase in income – combined with substantial growth in GDP in Hancock – may partly explain the fast recovery of total night light intensity. The bottom four counties in Figure 8 show a consistent pattern: no signs of very substantial impact in any of the economic indicators, and corresponding patterns in aggregate income and GDP, with again a shared spike in 2007. Aggregate income and GDP show a strong correlation for Hancock and Orleans as well, but in St. Bernard and Plaquemines GDP is a lot more variable. Both counties show a notable decline in GDP after 2008, which is not explained by either employment or aggregate income.
In summary, the impact of Katrina on the counties’ economies has clearly not been uniform. While the size of economic effects is related to the extent of damages, there is no single coherent explanation that captures economic changes in terms of population and income as a function of damages. Some counties experienced lasting population losses, whereas others – most notably Hancock – recovered fairly quickly and experienced (temporary) booms in income and GDP. In turn, night light intensity does not capture these dynamics perfectly, but performs as expected in qualitative terms: the heaviest-hit counties show the largest declines in night light intensity, and light intensity recovers to pre-disaster levels in the subsequent years. However, recovery of night light intensity towards pre-Katrina levels is much faster than for population, employment, and income in the heaviest-hit counties of St. Bernard and Orleans. Growth in income and employment after Katrina is positively correlated with night light intensity as well. The relation with GDP seems less evident across the 8 counties, compared to the other indicators. However, overall the qualitative patterns are promising: night light intensity can inform us about regional economic downturns in this case study.
3.3 Correlations between night lights and economic indicators
To further structure the discussion, we assess the correlation between the change in total sum of light and the change in economic indicators for the eight affected counties. We distinguish two periods: the period before Katrina (2000-2004), and
\(^{13}\)While employment is proportional to total population, displacement may affect the working population differently than the non-working population. As discussed above in the context of Katrina’s effects on New Orleans, the low-income segment of New Orleans’ population was disproportionally displaced (Logan, 2006).
the period starting in the year of Katrina (2005-2012). Since the population record for 2005 is based on the midyear estimate in July, we limit the second period to 2006-2012 for this indicator only.\textsuperscript{14} Results are reported in Figure 9.
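The two-period correlation exercise can be sketched as below. All values in the toy panel are made up for illustration; county names are used only as labels, and the real analysis runs over eight counties and more years.

```python
import pandas as pd

# Toy county-year panel of indexed night lights and indexed population
# (2004 = 1). Values are illustrative only, NOT the actual data.
df = pd.DataFrame({
    "year":       [2003, 2004, 2005, 2006, 2003, 2004, 2005, 2006],
    "county":     ["Orleans"] * 4 + ["Hancock"] * 4,
    "light":      [1.02, 1.00, 0.72, 0.80, 1.01, 1.00, 0.80, 1.02],
    "population": [1.01, 1.00, 0.47, 0.55, 1.00, 1.00, 0.86, 1.01],
})

# Split into the pre-Katrina period and the period from the year of
# Katrina onward, then correlate lights with the indicator in each period.
pre = df[df["year"] <= 2004]
post = df[df["year"] >= 2005]
r_pre = pre["light"].corr(pre["population"])
r_post = post["light"].corr(post["population"])
```

In the toy data the post-period correlation is positive, mimicking the qualitative pattern reported in the paper; the same split-and-correlate step would be repeated for employment, income, and GDP.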
The results are rather striking. In the period before Katrina, the correlations are weak and predominantly negative (see Figure 9). The correlation with population is strongest – and negative – driven by light levels that are higher in the period prior to 2004 in all 8 counties. This pattern is visible in all of the counties, while population was either growing or stagnant in these years. This is not the case for employment, which instead shows close to no correlation with light intensity before Katrina. For both income and GDP, the correlation is again negative but weaker than with population. Note that this likely has to do with top-coding in the night light data, making light intensity unresponsive to economic changes prior to the negative shock caused by Katrina. For GDP specifically the negative correlation is purely driven by St. Bernard Parish – when excluding St. Bernard, the correlation is weak and positive at 0.22.
In stark contrast, there is a clear positive and substantially stronger correlation in the years after Katrina for all four indicators (see Figure 9). Unconditional pairwise correlations of 0.65 and 0.54 indicate that the change in night light as a result of Katrina is most closely related to population and employment, respectively. The scatterplots clearly reflect that reductions in night light underestimate the reductions in population and employment in some of the counties, as discussed in the previous section. Again, this likely relates to the top-coding issue, and may also be partly explained by the timing of the hurricane in the last part of the calendar year. Still, the correlation between the change in light intensity and population is strong, while it is moderate for employment.
While similarly positive, the correlation between indexed light intensity and income and GDP is weaker at 0.38. The lower correlation can be explained by developments in the counties Hancock, Harrison, Jackson, St. Tammany, and Jefferson, all of which had income and GDP levels (far) exceeding pre-Katrina levels. Instead, growth in light intensity was much lower for these counties. A notable spike that is visible in all counties is the year 2007, in which relief transfers boost both income and GDP. There appears to be no correlation between these transfers and light intensity for this year, further lowering the correlation with income and GDP. In fact, the correlation between the change in income and GDP between 2006 and 2007 with the total sum of light is 0.27 and -0.58 respectively – i.e. for GDP there is even a strongly negative correlation with light intensity for this particular year.
The main findings are then twofold: first, the correlation between night light intensity and the four considered economic indicators is much stronger after Hurricane Katrina struck than before.\textsuperscript{15} The positive and – in the case of population and
\textsuperscript{14} Including the 2005 data for population dramatically reduces the correlation from 0.65 to 0.34, purely as a result of the 2005 population figure being unresponsive to the Katrina shock in 2005 by construction – population estimates are midyear and thus precede the landfall of the hurricane.
\textsuperscript{15} Within the scope of our paper, we cannot answer why the relation between night light and economic activity is rather weak in equilibrium times before the disaster shock, and what can explain the negative correlation with population and income we observe prior to the shock. However, top-coding in the night light data is arguably one important factor. This discussion speaks to a broader literature that uses night light intensity in equilibrium growth regimes to proxy GDP or economic activity more broadly at the subnational level (e.g. Michalopoulos and Papaioannou, 2013; Hodler and Raschky, 2014; Storeygard, 2016). Part of the explanation may be that top-coding in much of the affected areas prior to the landfall of Katrina obscures otherwise meaningful relations between night lights and income and GDP. Future research can aim to answer these questions by focusing on an event that affected urban areas with a lower degree of top-coding. Alternatively, the results presented in this paper may be indicative of a stronger relation between changes in night light intensity and economic indicators in shock times versus equilibrium times in a high-income country like the United States.
employment – strong correlations with economic activity show that changes in night light intensity can be used successfully to capture local effects on economic activity of a large shock such as Hurricane Katrina. Second, and within the limits of our study, our results suggest that change in light intensity is mostly reflective of changes in resident population and the total number of employed people within the affected area, and less so but positively related to aggregate income and real GDP. We test robustness of these findings to the use of the alternative calibration method by Zhang et al. (2016), as well as to the fixed-effects corrected light data. Results are reported in Appendix A. For the Zhang et al. (2016) calibrated data all correlations for the period 2005-2012 are lower than for the baseline results using the Elvidge et al. (2014) calibration. This can be explained by the anomalous year-corrections in 2010 and 2012, discussed in detail in Appendix B. When excluding 2010-2012, we find similar correlations for the two calibration methods (results available upon request). Alternatively using an econometric fixed-effects approach overall yields comparable results (see Appendix B for a discussion on the methodology), but correlations between indexed light and income and real GDP are considerably higher at 0.57 and 0.68 respectively. This puts their correlation in a similar range as population (0.55). However, the correlation with employment is still considerably stronger at 0.75.
4 Discussion
An emerging literature has used night lights to study the local impacts of natural disasters, treating changes in night lights primarily as an indicator of local economic activity. Night light analysis of natural hazard impacts is especially useful in areas that lack local data on population and economic activity. Often, however, it remains an open question what observed night
light changes actually represent, especially in the case of downturns (Bennett and Smith, 2017). In our study we examined changes in night lights following the impacts of Katrina on New Orleans and the coastline of Louisiana and Mississippi. This is a relevant case study for analyzing what changes in night lights represent, since for New Orleans both night light data and local population and economic statistics exist. Moreover, a variety of studies have examined the direct and indirect socioeconomic impacts of Katrina, which allows us to place our insights into the broader picture of the hurricane's various effects. The following main lessons emerge from our study.
The immediate effects observed in night lights closely track the heterogeneous severity of the hurricane's direct impacts across geographical areas. Flooding and direct damage data indicate that the most severely hit parishes were Orleans and St. Bernard, for which severe drops in night lights are also observed shortly after Hurricane Katrina. This suggests that night lights can serve as an indicator of the short-term severity of a natural disaster and reveal the worst-hit areas, echoing findings reported by Gillespie et al. (2014) on the impacts of the 2004 Indian Ocean Tsunami in Sumatra.
Moreover, short-run changes in night lights reflect observed changes in population over time, although the night light approach has some limitations. Population losses in some counties, such as Orleans, were much more severe than the night lights suggest. This may be partly explained by the fact that Katrina made landfall in late August, so the post-landfall months make up only about a third of the annual composite from which mean night light intensity is derived, diluting the observed drop. Population recovery patterns are also broadly visible in the night lights, but the recovery in lights is faster and does not accurately reflect permanent population decline. Economic studies have mainly interpreted changes in night lights as representing changes in economic activity. Our study confirms that there is also a correlation between night lights and income and GDP, although the fit with these indicators is weaker than with population and employment. Here too, the recovery of night lights is more optimistic in hard-hit counties than the actual recovery in income and GDP. Overall, we find that night light changes more strongly reflect population and employment impacts, and less so GDP changes.\(^{16}\)
However, top-coding in urban centers makes part of the change in light invisible to the sensor, and it is thus not captured in the night light data. In future research, the newer VIIRS data could be used to address the issue of top-coding (see Elvidge et al., 2013).\(^{17}\) Recent examples are Zhao et al. (2018) and Gao et al. (2020), who study the effects of hurricanes Irma and Maria on light intensity in Puerto Rico and the 2015 Gorkha earthquake in Nepal, respectively. However, although the time series for the VIIRS data product is steadily expanding, only disasters after 2012 can be studied with these data. Understanding more historical events thus still requires the DMSP data used in the current study. We stress that even though top-coding is an issue in the studied area, we can still observe the impacts of the hurricane quite clearly. Studying areas with a
\(^{16}\)In Appendix A, we assess correlations between the change in the sum of light intensity and the change in income and GDP with the correction approach using fixed effects in a panel regression framework. In this approach, both the night light and economic variables are demeaned with year fixed effects, and then transformed into index numbers. As discussed in Appendix A, the year fixed effects take out the satellite measurement error, but also all other common temporal variation in the panel of all U.S. counties. As such, the correlation results for the fixed effects approach cannot be compared directly to those with the calibration correction. Still, we stress that the correlation between change in total light intensity and aggregate income and aggregate real GDP is 0.57 and 0.68 respectively for the fixed effects method (see Figure A6), compared to 0.55 and 0.75 for population and employment respectively.
\(^{17}\)Although less important in the context of the present study, the newer VIIRS data also address the issue of blooming and the rather coarse native resolution of the DMSP satellite.
lower degree of top-coding, a much smaller problem even within urban areas in developing countries (Kocornik-Mina et al., 2020), may therefore reveal stronger relations between light intensity and economic indicators.
Furthermore, most studies in this field report a negative impact of natural disasters on local night lights only in the year of occurrence (e.g. Bertinelli and Strobl, 2013; Gillespie et al., 2014; Elliott et al., 2015; Kocornik-Mina et al., 2020). First, we show that decreases in night light intensity can extend beyond this period for a disaster of this magnitude. This confirms that changes in night light intensity do not stem merely from temporary power outages, a concern raised in some of these studies. Second, we show that even for this extreme case the recovery of night light intensity is rather quick, on the order of one to a few years, whereas recovery in the economic indicators is much slower. This places conclusions in the literature about fast local recovery, based on the rebound of night lights, in a different light. For example, Kocornik-Mina et al. (2020) find that economic activity within cities does not relocate to less risky areas after the occurrence of a major urban flood, based on the finding that on average no negative effects on light intensity exist beyond the year of the flood. This holds even though the authors limit their study to large-scale urban floods that displaced at least 100,000 people. Our results suggest that night light intensity may only partly reflect reductions in population and economic activity, so that relocation of economic activity and population may in reality have occurred. For the case of Katrina, we show that this indeed happened. We again stress that night lights serve as a means to proxy local economic impacts in areas where no alternative data are available, but that they provide only part of the picture.
In conclusion, even though night lights are able to capture general patterns in population and economic impacts that can be useful in data-scarce regions, they are no substitute for assessments based on economic data if the aim is a thorough understanding of the economic consequences of a natural disaster. In-depth analysis of economic data, such as sectoral impacts and wage development, provides more detailed insights than night light data. For instance, the economic impacts of Hurricane Katrina were a complex combination of disruptions in certain sectors, positive effects for sectors involved in reconstruction, and substitution effects between companies within a sector (Vigdor, 2008; Hallegatte, 2008). Such complexity cannot be disentangled with night light data. Moreover, real wage growth may not follow the GDP and employment patterns that night light data partly capture. For example, Deryugina et al. (2018) show, based on tax return data, that Katrina victims eventually experienced higher wage growth than non-victims. These in-depth analyses of economic data indicate positive long-run economic effects of the hurricane for households that cannot be directly derived from night light data. Combining the insights from these studies on the effects of Katrina is important for understanding the value of night light data in this context, for two reasons. First, night light intensity is spatially explicit and highly detailed, but it reflects an immobile area rather than its mobile residents. Focusing on the impacted counties alone therefore makes the analysis blind to general equilibrium effects and potential spillovers of population and economic activity to neighboring counties. Xiao and Nilawar (2013) provide an example of how such effects occurred in less-affected counties in the case of Katrina, and Felbermayr et al. (2022) apply this framework more generally in a global analysis of disaster impacts on local economic activity.
Second, displaced population results in lower population numbers in the affected areas, and recovery of an area’s economy depends on a combination of return migration, reconstruction, and recovery of and/or new economic activity. Using night light intensity, we can only see the combined derivative of these processes.
Finally, an important consideration when interpreting disaster impacts from night light data is whether population and economic trends move in the same direction. In our case these trends did not have opposite effects on night light activity. However, the interpretation of night light changes is more ambiguous when opposite trends occur. For example, several studies find population growth after disasters (Vigdor, 2008), which, combined with adverse economic impacts, would obscure clear trends and hamper a straightforward interpretation of night light data.
5 Conclusions
The use of night light data is becoming increasingly popular in studies that aim to estimate the impacts of natural disasters on local economic activity. However, it is often unclear what observed changes in night lights exactly represent, since they have been used as a proxy for changes in GDP levels or growth, urbanization, and temporary and permanent population movements. Our study contributes to this emerging literature by providing insights into the interpretation of night light changes. In particular, we examined how these changes following a severe hurricane relate to local population, employment, and income statistics. For this purpose we used Hurricane Katrina as an exemplary case, since both detailed night light data and sub-national economic and population statistics are available for the areas impacted by Katrina. Moreover, various previous studies have analyzed the social and economic consequences of Katrina, which allows us to place our night light findings in the context of this broader evidence on the impacts of this disastrous hurricane.
We find that overall the night light changes reflect the general pattern of direct impacts of Katrina as well as the subsequent recovery. The heaviest-hit counties show the largest declines in night light intensity, and light intensity recovers to pre-disaster levels in the subsequent years. However, recovery of night light intensity towards pre-Katrina levels is much faster than recovery of population, employment, and income in the heaviest-hit counties. Moreover, our results show that change in light intensity is mostly reflective of changes in resident population and the total number of employed people within the affected area, and less so, but still positively, related to aggregate income and real GDP. The correlation between night light intensity and the considered economic indicators is much stronger after Hurricane Katrina struck than before. The positive and, in the case of population and employment, strong correlations with economic activity show that changes in night light intensity can be used successfully to capture local effects on economic activity of a large natural disaster such as Hurricane Katrina.
Based on our main results, we conclude that changes in night light intensity are a valuable proxy for changes in local economic activity following a natural disaster, despite the various shortcomings discussed in this paper. Analyses of disaster impacts using night light data are ideally complemented with detailed analysis of economic data, which provides additional, more in-depth insights into disaster impacts, as we discussed for our case. Nevertheless, in areas where such economic data are unavailable, our results suggest that night light data can be used to approximate the impacts of natural disasters on regional economies. Future research can conduct analyses similar to those in this paper for other natural disasters to improve our understanding of how to interpret night light data for direct impacts and recovery, especially for disaster events of a less extreme nature than Hurricane Katrina.
Code and data availability. Code and data will be made available in a publicly accessible repository.
Appendix A: Figures and Tables
Figure A1. Distribution of damage to occupied housing units. Based on own calculations. Damage figures from the U.S. Department of Housing and Urban Development (2006). Note that the extremely high housing damage figure for Cameron Parish relates to hurricane Rita rather than Katrina, as is the case for the counties Vermilion and Calcasieu.
Figure A2. Histogram of changes in night light intensity between 2004 and 2005 for the affected area. Based on own calculations. Affected area refers to all counties with non-missing housing damage based on the report from the U.S. Department of Housing and Urban Development (2006), i.e. those included in Figure 3. Night time lights are calibrated using the Elvidge et al. (2014) method, and indexed with 2004=100. A kernel density is plotted on top. Given the approximate normality of the distribution, the maps about changes in night lights make use of a standard deviation method for color-coding.
Figure A3. Night lights and economic indicators, including alternative light corrections
Figure A4. Indexed economic indicators and total sum of light by the 8 affected counties after Katrina (starting in 2005 vs 2006). Correlations economic indicators and night lights (varying years). Night lights based on Elvidge et al. (2014) calibration, indexed to 2004 = 1. Right panel: Population data is for 2006-2012 only.
Figure A5. Indexed economic indicators and total sum of light by the 8 affected counties before Katrina. Correlations economic indicators and night lights (Zhang calibration). Night lights based on Zhang et al. (2016) calibration, indexed to 2004 = 1. Right panel: Population data is for 2006-2012 only.
Figure A6. Indexed economic indicators and total sum of light by the 8 affected counties before Katrina. Night lights based on the fixed-effects correction, indexed to 2004 = 1. See Appendix B for methodology. Right panel: Population data is for 2006-2012 only.
Figure A7. Indexed economic indicators and total sum of light by the 8 affected counties before Katrina (raw data). Correlations economic indicators and night lights (raw data). Raw night light data, indexed to 2004 = 1. Right panel: Population data is for 2006-2012 only.
Appendix B: Cleaning of the night light series
The DMSP annual composite data are known for problematic intertemporal and between-satellite measurement differences, which make it difficult to compare night light intensity over an area across time. The problem stems from the lack of systematic recording of changes in the gain settings of the DMSP-OLS sensor, which vary across time and between satellites. The reason is that the satellite's main function was to detect moonlit clouds, rather than artificial night light per se (Elvidge et al., 2009a). As a result, while the annual composites retain a range of DN0 to DN63 (with DN meaning digital number), the true radiance associated with these digital numbers varies between the different satellite-year composites. This makes direct comparison of raw digital numbers across years problematic.
A number of correction approaches have been suggested in the literature, which can be grouped into two main classes. The first, originating in remote sensing, is to calibrate the annual composite images to a reference image: either an area that is assumed to have invariant night light intensity over time (e.g. Elvidge et al., 2009a, 2014; Wu et al., 2013), or one exploiting a globally or regionally consistent bias across images (e.g. Zhang et al., 2016; Li et al., 2013). The basic idea of the invariant-area method is that any difference in this area's night light intensity between yearly images is the result of measurement error, and thus captures the difference in gain settings between the various satellite-year images. By globally calibrating the year-images to this reference area, a 'corrected' time series is produced. A meta-analysis of this approach is provided by Pandey et al. (2017), who find that among the existing calibration studies Zhang et al. (2016) and Elvidge et al. (2014) produce the most consistent calibration results, with only marginal differences between the two when assessing the global images.\(^{18}\)
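The invariant-area idea can be sketched as follows. This is a minimal illustration with synthetic pixel values, not the actual Elvidge et al. (2014) implementation: it assumes we have arrays of digital numbers for the invariant region from a reference image and from a target satellite-year, fits the second-order polynomial mentioned above, and applies it with truncation to the valid DN range.

```python
import numpy as np

def fit_calibration(dn_target, dn_reference):
    """Fit a 2nd-order polynomial mapping a target satellite-year's
    digital numbers onto the reference image, using pixels from an
    area assumed to have time-invariant true light intensity.
    Returns polynomial coefficients (highest degree first)."""
    return np.polyfit(dn_target, dn_reference, deg=2)

def apply_calibration(dn, coeffs):
    """Recalculate DNs with the fitted polynomial and truncate to
    the valid DMSP range [DN0, DN63]."""
    return np.clip(np.polyval(coeffs, dn), 0, 63)

# Toy example: the target sensor reads systematically brighter
# (different gain settings), plus some noise.
rng = np.random.default_rng(0)
dn_ref = rng.uniform(0, 63, 500)
dn_tgt = np.clip(1.15 * dn_ref + 2 + rng.normal(0, 1, 500), 0, 63)

coeffs = fit_calibration(dn_tgt, dn_ref)
dn_corrected = apply_calibration(dn_tgt, coeffs)
```

After calibration, the corrected DNs should lie much closer to the reference image than the raw target DNs do, which is the sense in which the gain-setting differences are "taken out."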
The second approach finds its origin in the economic literature that makes use of night lights, and applies a panel fixed-effects setting to address measurement error in night light intensity over time (e.g. Chen and Nordhaus, 2011; Henderson et al., 2012). The basic idea here is that the gain-setting changes affect the images in a globally consistent manner, such that estimating a dummy coefficient for each year relative to a reference base year effectively takes out any difference in sensitivity to light intensity across satellite-years.\(^{19}\) It is important to note that this correction is applied at the aggregated county level, rather than at the pixel level as in the intercalibration methods. We thus first compute the sum of light intensity by county-year based on the uncorrected images. We know from Strobl (2011) that hurricanes do not affect national GDP growth rates in the US, and moreover that impacts at the county level net out at the state level within a year. It is therefore safe to assume that we can use the universe of U.S. counties to control for common changes in night light intensity that are unrelated to the landfall of Hurricane Katrina. Note that this also takes out all other changes that are common to the entire United States in night light
\(^{18}\)For example, Elvidge et al. (2009a, 2014) propose Sicily as a candidate invariant area. This area is found to have the best spread of night light intensity across the spectrum of DN0 to DN63. Moreover, and most importantly, true light intensity is found to be largely stable over 1992-2013 for this area. Relying on the resulting invariant-area assumption, all images are then calibrated to the image for this area in 1999 (satellite F12) using a second-order polynomial fit. Calibrated digital values that exceed the maximum of DN63 are truncated at DN63. When assessing the global performance of this calibration method, Pandey et al. (2017) also truncate the lower bound of the digital values at DN0. We follow their example here.
\(^{19}\)Chen and Nordhaus (2011) separately control for satellite fixed effects, besides the common year fixed effects. We do not do so here since we make use of single satellite-years rather than taking the average of satellite-years when multiple satellite images are available in a year (see Felbermayr et al. (2022) for a discussion on this issue). We use the following satellite-years: F10 1992-94, F12 1995-98, F14 1999, F15 2000-06, F16 2007-09, F18 2010-13.
intensity, resulting from country-wide economic conditions, technological advances, and energy costs (Henderson et al., 2012).\textsuperscript{20} While commonly accepted in the economic literature, the fixed-effects approach relies on the assumption that taking out the mean of changes across years is sufficient to correct for measurement changes over time, whereas the calibration method allows for a non-linear effect of the gain settings on the range of digital values in the light composites. The two methods thus differ slightly in how they correct for measurement differences across time. While it does not explicitly account for non-linearities, the fixed-effects approach does not rely on an invariant-area assumption.
In this appendix we compare the results of the calibration and fixed-effects corrections to the raw data for the 8 counties most heavily affected by Hurricane Katrina. The calibration series were produced with the coefficients for the second-order polynomial fits reported in Zhang et al. (2016) and Elvidge et al. (2014), respectively. The raw image digital values for each satellite-year composite were then recalculated using the coefficients from the respective studies. In both cases, values were truncated to DN0 at the lower end and DN63 at the upper end before aggregating the images to the sum of light (SOL) for U.S. counties (following Elvidge et al., 2014; Pandey et al., 2017).
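The truncation and aggregation step can be sketched as follows. This is an illustrative example with invented pixel values and county labels, not the actual processing pipeline: it assumes a table of calibrated per-pixel digital numbers with the county each pixel falls in.

```python
import pandas as pd

# Hypothetical per-pixel data for one satellite-year composite:
# calibrated digital numbers and the county each pixel falls in.
pixels = pd.DataFrame({
    "county": ["Orleans", "Orleans", "St. Bernard", "St. Bernard", "Harrison"],
    "dn":     [63.0, 58.2, 4.5, 71.0, 12.3],  # one value exceeds the valid range
})

# Truncate calibrated values to the valid DMSP range [DN0, DN63]
# before aggregating, as described above.
pixels["dn"] = pixels["dn"].clip(lower=0, upper=63)

# Sum of light (SOL) by county for this satellite-year.
sol = pixels.groupby("county")["dn"].sum()
print(sol)
```

Repeating this per satellite-year composite yields the county-by-year SOL panel used throughout the appendix.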
The series corrected with the year fixed-effects approach were constructed by first computing the SOL for all U.S. counties (3,079 in total) based on the raw images, and then adjusting as follows: (1) we estimate a pooled OLS model with SOL from the raw images as the dependent variable and a set of year dummies as the explanatory variables; (2) from this linear model we compute corrected night light intensity by subtracting the estimated year-dummy coefficients from each county-year observation.
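For a balanced panel with only year dummies, the estimated year effects are simply the cross-county mean shifts relative to the base year, so the two steps above reduce to a demeaning operation. The sketch below illustrates this with a tiny hypothetical two-county panel (the paper's panel covers all U.S. counties); the numbers are made up so that a common nationwide swing is visible and removed.

```python
import pandas as pd

# Hypothetical balanced county-year panel of sum of light (SOL).
df = pd.DataFrame({
    "county": ["A", "A", "A", "B", "B", "B"],
    "year":   [2004, 2005, 2006] * 2,
    "sol":    [100.0, 140.0, 95.0, 50.0, 90.0, 45.0],
})

# Step 1: with only year dummies, the OLS year coefficients equal the
# cross-county mean of each year minus the base-year (2004) mean.
year_mean = df.groupby("year")["sol"].transform("mean")
base_mean = df.loc[df["year"] == 2004, "sol"].mean()

# Step 2: subtract the estimated year effects from each observation.
# This removes swings common to all counties (e.g. gain-setting changes).
df["sol_fe"] = df["sol"] - (year_mean - base_mean)
print(df[["county", "year", "sol_fe"]])
```

In this toy panel the raw series share a common +40/-5 swing across years; after the correction each county's series is flat, which is exactly the behavior intended for measurement artifacts shared by all counties.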
We now discuss the results of the various correction methods. In Figures B1 and B2 below, we plot the raw series together with the two calibrated series and the series corrected with year fixed effects. Figure B1 reports the total sum of light by county. The first and most important observation is that the three alternative corrections to the raw night light data show a high degree of similarity. They are more stable over time than the raw series, notably in the period 2002-2007, and, with the exception of St. Bernard, the two classes of correction methods follow each other closely. The dip from 2003-2007 in the raw data is especially evident when compared to the corrected series. This dip is not specific to the affected counties but is a feature shared by the entire panel of U.S. counties, and it is thus taken out in the corrected series.
The case of St. Bernard stands out, since its year fixed-effects correction deviates strongly from the other three series, both in absolute terms and in qualitative behavior. This can be explained by its low level of average light intensity relative to the other counties. In 2004, St. Bernard had an average value of DN4.5, compared to the U.S. mean of DN7.3, while the mean of the other 7 main affected counties was DN14.9. As such, the fixed-effects correction likely under-corrects the digital values for these 7 brighter counties, while it over-corrects the values for St. Bernard. This also explains why we find no such deviations between the fixed-effects correction and the calibration corrections for the other counties. While the fixed-effects method relies on fewer assumptions and may be preferred in regression frameworks focused on causal impact identification (such as Bertinelli and Strobl, 2013; Elliott et al., 2015; Felbermayr et al., 2022), the calibration corrections prove more reliable in producing stable county-specific series for the current application. Since our focus is on absolute light
\textsuperscript{20}This also implies that when we relate changes in fixed-effects corrected night light intensity to the respective economic indicators in Section 2.2, we also demean the economic indicators on year dummies.
Figure B1. Corrected night light data (absolute sum of light). 4 series: raw data (gray), fixed-effect adjustment, invariant area calibration (Elvidge et al., 2014), and global consistent bias calibration (Zhang et al., 2016). Total sum of light by county (SOL).
levels, the excessive measurement error for individual cases, such as the clear over-correction of light changes in low-light counties like St. Bernard, hinders the analysis.
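The over-correction mechanism described above follows from simple arithmetic: an additive year effect estimated on the nationwide mean implies a much larger relative change for a dim county than for a bright one. The toy numbers below reuse the DN levels quoted in the text (4.5 for St. Bernard, 14.9 for the other affected counties), while the size of the common year effect is a hypothetical value chosen for illustration.

```python
# Toy illustration: subtracting the same additive year effect from
# counties at very different light levels distorts relative changes.
common_year_effect = 2.0   # hypothetical nationwide DN shift in some year

bright_county_dn = 14.9    # mean of the other 7 affected counties (2004)
dim_county_dn = 4.5        # St. Bernard's level (2004)

# Relative change implied by the same additive correction:
bright_rel = common_year_effect / bright_county_dn
dim_rel = common_year_effect / dim_county_dn
print(f"bright county: {bright_rel:.0%} of its light level")
print(f"dim county:    {dim_rel:.0%} of its light level")
```

The same additive shift amounts to over three times the relative adjustment for the dim county, which is why the fixed-effects series for St. Bernard misbehaves while those for the brighter counties do not.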
Even though corrected absolute levels help when assessing recovery after Katrina, the changes over time are fairly stable regardless of the correction method. Figure B2 reports changes over time in indexed series, with 2004 = 1. Several observations stand out: (1) the immediate impact of Katrina on the total sum of light in the 8 affected counties is close to identical across the 4 series; that is, while absolute levels may differ, the relative change from 2004 to 2005 is essentially the same; (2) the two calibration methods again show striking similarity; (3) as with the absolute levels in Figure B1, St. Bernard stands out, with its fixed-effects correction clearly not performing as intended. Another important feature of the raw data becomes evident when setting it against the corrected series: while the raw data suggest a relatively quick recovery from Katrina in the subsequent years, both the calibration and fixed-effects correction methods indicate that growth in night light intensity is not specific to these counties (suggesting
Figure B2. Corrected night light data (indexed to 2004 = 1). 4 series: raw data (gray), fixed-effect adjustment, invariant area calibration (Elvidge et al., 2014), and global consistent bias calibration (Zhang et al., 2016). Total sum of light by county, indexed to 2004=1.
a recovery from the negative shock), but is shared by the entire United States. The year 2010 in particular is associated with a massive increase in light intensity, which seems to stem mostly from the switch to a new satellite (F18), and thus a new instrument with different gain settings. Once we correct for this common feature in the data, recovery in fact appears slower, and for a number of counties the sum of light does not return to pre-Katrina levels at all within the available data period.
Although the two calibration methods produce comparable results, the years 2010 and 2012 are important exceptions. Pandey et al. (2017) report that in a global sample the calibration methods of Zhang et al. (2016) and Elvidge et al. (2014) produce only marginally different results. However, for a subset of countries, importantly including the U.S., the Zhang et al. (2016) method performs worse than Elvidge et al. (2014) in smoothing the time series, specifically for the years 2010 and 2012. This is reported in detail in Zhang et al. (2016, pp. 5826-5827). The pattern is clearly visible for the subset of counties considered in this study (see Figures B1 and B2). While the Elvidge calibration produces rather smooth series for the period 2009-2012, the Zhang calibration series clearly show drops in 2010 and 2012, which for e.g. the Mississippi counties are comparable in size
to the declines in night light intensity in 2005 as a result of Katrina. Comparison with county figures for population, income, and GDP indicates no apparent reason for this dip, and no other natural disaster or adverse event can explain the substantial reduction in night light intensity suggested by the Zhang calibration series. This is further supported by the stability of the fixed-effects corrected series for the respective Mississippi counties. As a result, we use only the calibration method of Elvidge et al. (2014) in the main results, and test the robustness of our findings to the fixed-effects correction and the alternative Zhang et al. (2016) calibration in Appendix A. Results prove rather stable.
**Author contributions.** Vincent Schippers: conceptualization, data curation, formal analysis, investigation, methodology, software, visualization, writing - original draft and editing. Wouter Botzen: conceptualization, investigation, methodology, supervision, writing - original draft and editing.
**Competing interests.** The authors declare no competing interests.
**Acknowledgements.** We thank Mark Sanders, Bas van Bavel, and Bram van Besouw for helpful suggestions and feedback. All remaining errors are our own.
References
Aerts, J. C., Botzen, W. W., Emanuel, K., Lin, N., De Moel, H., and Michel-Kerjan, E. O.: Evaluating flood resilience strategies for coastal megacities, Science, 344, 473–475, 2014.
Basker, E. and Miranda, J.: Taken by storm: Business financing and survival in the aftermath of Hurricane Katrina, Journal of Economic Geography, 18, 1285–1313, 2018.
Bennett, M. M. and Smith, L. C.: Advances in using multitemporal night-time lights satellite imagery to detect, estimate, and monitor socioeconomic dynamics, Remote Sensing of Environment, 192, 176–197, 2017.
Berlemann, M. and Wenzel, D.: Hurricanes, economic growth and transmission channels: Empirical evidence for countries on differing levels of development, World Development, 105, 231–247, 2018.
Bertinelli, L. and Strobl, E.: Quantifying the local economic growth impact of hurricane strikes: An analysis from outer space for the Caribbean, Journal of Applied Meteorology and Climatology, 52, 1688–1697, 2013.
Bickenbach, F., Bode, E., Nunnenkamp, P., and Söder, M.: Night Lights and Regional GDP, Review of World Economics, 152, 425–447, 2016.
DISCLAIMER:
Many S&S parts are designed for high performance, closed course, racing applications and are intended for the very experienced rider only. The installation of S&S parts may void or adversely affect your factory warranty. In addition such installation and use may violate certain federal, state, and local laws, rules and ordinances as well as other laws when used on motor vehicles used on public highways. Always check federal, state, and local laws before modifying your vehicle. It is the sole and exclusive responsibility of the user to determine the suitability of the product for his or her use, and the user shall assume all legal, personal injury risk and liability and all other obligations, duties, and risks associated therewith.
FOR CLOSED COURSE COMPETITION USE ONLY. LEGAL IN CALIFORNIA ONLY FOR RACING VEHICLES IN CLOSED COURSE COMPETITION USE. NOT LEGAL FOR SALE OR USE NATIONWIDE ON ANY FEDERAL POLLUTION CONTROLLED MOTOR VEHICLE UNDER THE CLEAN AIR ACT.
IMPORTANT NOTICE:
Statements in this instruction sheet preceded by the following words are of special significance.
⚠️ WARNING
Means there is the possibility of injury to yourself or others.
⚠️ CAUTION
Means there is the possibility of damage to the part or vehicle.
NOTE
Other information of particular importance has been placed in italic type. S&S recommends you take special notice of these items.
SAFE INSTALLATION AND OPERATION RULES:
Before installing your new S&S part, it is your responsibility to read and follow the installation and maintenance procedures in these instructions and follow the basic rules below for your personal safety.
- Gasoline is extremely flammable and explosive under certain conditions and toxic when breathed. Do not smoke. Perform installation in a well ventilated area away from open flames or sparks.
- If vehicle has been running, wait until engine and exhaust pipes have cooled down to avoid getting burned before performing any installation steps.
- Before performing any installation steps, disconnect battery to eliminate potential sparks and inadvertent engagement of starter while working on electrical components.
- Read instructions thoroughly and carefully so all procedures are completely understood before performing any installation steps. Contact S&S with any questions you may have if any steps are unclear or any abnormalities occur during installation or operation of motorcycle with an S&S part on it.
- Consult an appropriate service manual for your vehicle for correct disassembly and reassembly procedures for any parts that need to be removed to facilitate installation.
- Use good judgment when performing installation and operating the vehicle. Good judgment begins with a clear head. Don’t let alcohol, drugs or fatigue impair your judgment. Start installation when you are fresh.
- Be sure all federal, state and local laws are obeyed with the installation.
- For optimum performance and safety and to minimize potential damage to carb or other components, use all mounting hardware that is provided and follow all installation instructions.
- Exhaust fumes are toxic and poisonous and must not be breathed. Run vehicle in a well ventilated area where fumes can dissipate.
WARRANTY:
All S&S parts are guaranteed to the original purchaser to be free of manufacturing defects in materials and workmanship for a period of six (6) months from the date of purchase. Merchandise that fails to conform to these conditions will be repaired or replaced at S&S’s option if the parts are returned to us by the purchaser within the 6 month warranty period or within 10 days thereafter. In the event warranty service is required, the original purchaser must call or write S&S immediately with the problem. Some problems can be rectified by a telephone call and need no further course of action. A part that is suspect of being defective must not be replaced by a Dealer without prior authorization from S&S. If it is deemed necessary for S&S to make an evaluation to determine whether the part was defective, a return authorization number must be obtained from S&S.
The parts must be packaged properly so as to not cause further damage and be returned prepaid to S&S with a copy of the original invoice of purchase and a detailed letter outlining the nature of the problem, how the part was used and the circumstances at the time of failure. If after an evaluation has been made by S&S and the part was found to be defective, repair, replacement or refund will be granted.
ADDITIONAL WARRANTY PROVISIONS:
(1) S&S shall have no obligation in the event an S&S part is modified by any other person or organization.
(2) S&S shall have no obligation if an S&S part becomes defective in whole or in part as a result of improper installation, improper maintenance, improper use, abnormal operation, or any other misuse or mistreatment of the S&S part.
(3) S&S shall not be liable for any consequential or incidental damages resulting from the failure of an S&S part, the breach of any warranties, the failure to deliver, delay in delivery, delivery in non-conforming condition, or for any other breach of contract or duty between S&S and a customer.
(4) High performance engines are highly sensitive to tune (timing, air-fuel ratios, etc.). The S&S Turbo Kit for the KRX is factory tuned for optimal results. Any unauthorized changes to the S&S tune invalidate the warranty.
Before installing your new S&S parts:
- All torque specifications and assembly procedures are critical. Failure to follow the instructions may result in catastrophic failure of the engine or other components.
- Some factory hardware and components will be reused. Retain all hardware until installation is complete and proper operation has been verified.
- Foreign debris such as dust and dirt can cause excessive wear and possible failure of engine components. Thoroughly clean the vehicle before beginning installation.
- Running the vehicle on low fuel can cause air to be pumped through the fuel system and result in engine damage. Ensure there is sufficient fuel in the tank; aggressive cornering or uneven ground requires a higher fuel level to keep the fuel pickup fully submerged.
Special tools:
- 12 mm crowfoot wrench
- 1\(\frac{1}{16}\) in crowfoot wrench
- 5 mm ball-end allen extension
- Piston circlip install tool
NOTES ON ACCESSORIES:
- Rear Window is not compatible with turbo kit
- Kawasaki® 6 point harness, tire rack, alternator kit, polycarbonate roof, and rear bumper are compatible with no modifications
- Kawasaki® Sport Roof is compatible with modification. Mounting bracket is included
Installation Instructions
Pre-Installation
1. Start the car and ensure normal operation. Address any issues prior to installation.
NOTE: Driveability issues must be addressed before installing the turbo kit.
2. Thoroughly clean the vehicle and work area.
Kit Contents
3. Unbox the two large kit boxes and lay out components.
4. Check the quantity of components in the boxes. Use the provided packing parcel list as a reference.
NOTE: There may be loose components in some boxes, be sure to check packaging material for missing parts.
Engine Disassembly
5. Let the engine cool to room temperature and disconnect the battery.
6. Remove bed and upper cowl. The breather is mounted to the bed and must be removed before the bed can be fully removed.
7. Remove the intake air transfer tube.
8. Disconnect the electrical harness mounting clips on the bed frame. Cut the zip ties securing the O2 sensor grommets.
9. Remove the muffler.
10. Crack the coolant vent plug loose. Do not remove it completely or coolant will leak. Remove the vent mounting bolts from the bed frame.
11. Remove bed frame.
12. Remove the stock intake plenum. Set aside the three compression limiters and bolts.
13. Remove the CVT intake and exhaust ducts.
14. Remove the breather.
15. For California models, remove the Evaporative System hoses from the ports on the throttle body.
16. Disconnect fuel rail supply line and remove throttle bodies.
17. Remove exhaust pipe shielding. Do not remove the exhaust pipe from the head at this time.
18. Remove the radiator cap in the front of the vehicle. Drain the coolant. The two drain plugs are located in the middle of the vehicle above the rear plastic skid plate.
19. Remove the coolant vent screw from the engine completely to allow the remainder of the coolant to drain.
20. Drain the oil.
21. Remove the spark plugs.
22. Remove the valve cover.
23. Remove rear tires and shocks.
NOTE: Lowering the rear of the car as much as possible makes kit installation easier.
24. Remove the alternator access cover and plug on the passenger side of the engine. Remove the timing mark plug.
25. Check and record the valve lash. Review measurements versus the Kawasaki® service manual. Needed adjustments should be made later during installation.
26. Using a ratchet and socket, rotate the engine forward (clockwise when viewed from the passenger side of the vehicle) until the 2/T mark on the alternator rotor aligns with the notch in the timing inspection hole, indicating top dead center (TDC) on cylinder 2. The intake and exhaust cam sprocket timing marks should be lined up with the top of the head. If they are 180° off, rotate the engine one additional turn.
27. Rotate the engine 30–40° backwards, until the passenger side exhaust cam lobe is just leaving the valve bucket. At this position all cam lobes should be off of the buckets.
28. Loosen the cam chain slide tensioner spring screw. Remove the two screws retaining the tensioner.
29. Remove the cam caps. Loosen the screws evenly to prevent the caps from becoming crooked if there is any load from the valves. Ensure the ring dowels are retained; there are two per cap.
NOTES: Do not use a ball end allen to remove the socket head screws. Using a straight allen reduces the chance of stripping the head. Cam caps are matched to each head. Do not interchange them between heads.
30. Disconnect the coolant lines from the cylinder and head. There is one line to disconnect from the head at the rear of the engine and two on the cylinder at the front of the engine.
31. Remove the chain slide screw from the head on the passenger side of the engine.
32. Remove the M6 head helper screws. There are two on the passenger side of the head and one on the driver side.
NOTE: Do not use a ball end allen to remove the socket head screws. Using a straight allen reduces the chance of stripping the head.
33. Remove the M11 head bolts in the following order.
![Diagram showing bolt removal sequence]
34. Remove the head with the exhaust pipe still attached. Retain the two locating dowels.
**NOTE:** Do not tip the head upside down or remove the cam buckets.
35. Remove the exhaust pipe from the head.
36. Remove the cylinder. Retain the two locating dowels.
37. Lay clean rags on the case deck to prevent small parts from falling into the case. Using a small screwdriver or pick, remove the wrist pin circlip from the passenger side piston. The wrist pin may need to be gently tapped out using a socket and mallet.
38. Remove the rags. Lifting up firmly on the cam chain, rotate the engine 180° to bring the driver side piston to approximate TDC. Replace the rags and remove the driver side piston circlip and wrist pin.
**Engine Assembly**
39. Install the rings on the new pistons. Using a circlip install tool, insert a circlip in the driver side of the piston. The valve notches are oriented to the rear/intake side.
40. With the clean rags still in place, coat the wrist pin with a small amount of clean engine oil and slide through the piston and connecting rod. Install the passenger side circlip.
**NOTE:** Ensure the valve pockets are oriented to the rear of the engine.
41. Rotate the engine and repeat on the passenger side piston, remove the rags when the pistons are installed.
42. Install a new base gasket. Place the locating dowels in the case.
**NOTE:** M6x12mm bolts may be used to hold down the base gasket during cylinder installation.
43. Rotate the engine so both pistons are level.
44. Orient the piston rings gaps as shown. Ensure the oil ring is completely seated and the spring ring is not overlapping on itself.
45. Apply a small amount of clean engine oil to each of the rings.
46. Place the cylinder over the chain slides and set it on top of the pistons.
47. Compress the rings for each piston with your fingers to insert them into the cylinder. An assistant may be helpful with this process. It may help to start one piston before the other.
**NOTE:** Masking tape may be used on top of the base gasket to protect fingers. If used, be sure to remove all tape before the cylinder is completely seated.
48. Once all piston rings are inserted in the cylinder, wiggle the cylinder back and forth until it sits flush onto the locating dowels.
**WARNING**
Take care to not bend any rings during assembly. If the cylinder does not slide on easily, do not force it!
49. Holding the cylinder down and the cam chain up, rotate the piston on each side to bottom dead center (BDC) and coat each cylinder wall with a small amount of clean engine oil.
**NOTE:** Keep tension on the cam chain to reduce the risk of skipping a tooth on the lower sprocket.
50. Connect coolant hoses on the cylinder. Orient the spring clamps in the same position as they were when removed to reduce the chance of leaks.
51. Install a new head gasket and insert the locating dowels.
52. Before installing the head on the car, place new exhaust port gaskets on the studs and install the turbo onto the head. Tighten the nuts.
**NOTE:** Do not tip the head upside down.
Torque Exhaust Manifold Nuts: 180 in-lbs/20 Nm
53. Start a M6 x 20mm screw in the oil drain port on the turbo closest to the head. Do not fully tighten, and leave at least ½" open.
54. If valve adjustment was needed, adjust the valves to be within spec.
55. Insert the o-ring into the oil drain tube groove. Slide the slotted end of the oil drain tube onto the started bolt. Install the other M6x20mm screw and tighten.
Torque Oil Drain Flange Bolt: 106 in-lb/12 Nm
56. Place the head on the cylinders and locate on the dowels. Coat the head bolt threads, washers, and helper bolt threads in clean engine oil and install.
Torque the M11 and then the M6 bolts in the sequence below.
Torque M11 Cylinder head bolts: First: 22 ft-lbs/30 Nm
Second: 52 ft-lbs/70 Nm
M6 Cylinder head bolts: 106 in-lb/12 Nm
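The torque specs throughout these instructions are given as paired in-lb (or ft-lb) and Nm figures. As an illustrative sanity check only (not part of the S&S procedure), the pairs can be verified against the conversion 1 Nm ≈ 8.851 in-lb:

```python
# Illustrative check that a paired in-lb / Nm torque spec is self-consistent.
# Conversion factor: 1 N·m = 8.8507 in·lbf.
IN_LB_PER_NM = 8.8507

def specs_agree(in_lb: float, nm: float, tol: float = 0.1) -> bool:
    """True if the in-lb figure matches the Nm figure within tol (fractional)."""
    return abs(in_lb / IN_LB_PER_NM - nm) / nm <= tol

# M6 head bolts: 106 in-lb vs 12 Nm
print(specs_agree(106, 12))      # True
# M11 second pass: 52 ft-lb = 624 in-lb vs 70 Nm
print(specs_agree(52 * 12, 70))  # True
```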
57. Install and tighten the cam chain slide bolt on the passenger side of the head. Torque Cam Chain Slide Bolt: 216 in-lb/18 Nm
58. Connect the coolant hose to the head.
59. Clamp the intake cam in a vise with non-marring jaws or using a rag. Remove the two screws and remove the cam sprocket.
**NOTES:** *Intake and Exhaust cams can be identified by the “IN” or “EX” cast into the body.*
*Do not use a ball end allen to remove the socket head screws. Using a straight allen reduces the chance of stripping the head.*
60. Clean the oil from the cam hole and screw threads. Install the Turbo Cam Sprocket so the holes labeled “IN” align with the threaded cam holes. Apply green threadlocker to the screw threads and install. Repeat with the exhaust cam.
Torque Cam Sprocket Bolt: 106 in-lbs/12 Nm
61. Lightly oil the cam journals and valve buckets.
62. With the engine set at TDC on cylinder 2, install the exhaust cam first, and then intake cam so the timing marks align with the top of the head. Do not install the cam cap.
**NOTE:** *Ensure all of the slack in the cam chain is at the rear of the engine.*
63. Rotate the engine 30–40° backwards (CCW) until all cam lobes are unloaded from the buckets. Ensure the cam chain does not jump a tooth.
**NOTE:** *Keep tension on the chain while rotating.*
64. Install both cam caps and tighten screws in the order stamped on the cap. Ensure each cap has two ring dowels.
Torque Cam Cap Screws: 106 in-lbs/12 Nm
65. Remove the cap and spring from the cam chain tensioner. Release the stop and slide the rod into the body.
66. Install the body with the stop facing upwards. Apply blue threadlocker to the bolts and tighten.
Torque Cam Tensioner Mounting Bolts: 89 in-lbs/10 Nm
67. Push the tensioner spring in until it stops clicking, and install the tensioner cap over it. Tighten.
Torque Cam Tensioner Cap: 180 in-lbs/20 Nm
68. Rotate the engine forward to TDC on cylinder 2 and re-check the cam timing. The intake and exhaust marks should be aligned with the top of the head.
**NOTE:** *There should be no slack in the chain between the sprockets.*
69. Rotate the engine by hand two full turns, check for smooth motion throughout. Any hard stops should be addressed.
70. Re-check valve timing and valve lash.
**NOTE:** *Re-checking the valves is very important and must be done!*
71. Lightly oil the cam lobes and timing chain.
72. Install valve cover along with the cover seal and tighten bolts.
Torque Valve Cover Bolts: 89 in-lbs/10 Nm
73. Install the supplied HR 10 spark plugs.
Torque Spark Plugs: 115 in-lbs/13 Nm
74. Connect ignition coils to spark plugs. Make sure the rubber boot is seated.
75. Install the alternator rotor bolt cap, cover, and inspection cap.
Torque Timing Inspection Cap: 180 in-lb/20 Nm
Torque Alternator Rotor Bolt Cap: 180 in-lb/20 Nm
Torque Alternator Rotor Bolt Cap Cover bolts: 89 in-lb/10 Nm
**Flyweight Upgrade**
76. Remove the clutch housing cover and primary clutch carrier.
77. Install the appropriate bolts in the secondary to loosen the belt. Remove the belt.
78. Remove the primary clutch.
79. Evenly loosen the eight bolts holding the primary clutch cover.
80. Remove the snap rings and flyweight pin, and remove flyweights from the clutch assembly.
81. Inspect the flyweights for excessive wear. Worn weights should be replaced prior to installation.
82. Insert a brass slug into each of the inner 3 holes closest to the pivot of the flyweight. Using a punch and hammer, mushroom the slug until tight. Repeat for the additional flyweights.
**NOTES:** *The slugs are designed to be added to OEM flyweights. The flyweights must be between 91–93 grams*
*Use three slugs for 31” or smaller tires, two slugs for tires greater than 31”*
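The slug-count rule from the note above can be written as a small helper for reference (a sketch only; the tire-size cutoff is taken directly from the note):

```python
def slug_count(tire_diameter_in: float) -> int:
    """Brass slugs per flyweight: three for 31-inch or smaller tires, two for larger."""
    return 3 if tire_diameter_in <= 31 else 2

print(slug_count(30))  # 3
print(slug_count(32))  # 2
```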
83. Re-install the flyweights, pins, and snap rings. Ensure the flyweight...
84. Install the cover, aligning the arrows on the cover and movable sheave. Apply blue threadlocker and tighten evenly. Torque Primary Clutch Cover Bolts: 111 in-lbs/5 Nm
85. Install the primary clutch on the shaft. Tighten the nut. Torque Primary Clutch Nut: 170 ft-lbs/230 Nm
86. Install the belt and then the primary clutch carrier. Tighten nuts and bolts.
**NOTE:** The belt is directional; the correct direction is indicated by arrows on the belt.
Torque Primary Clutch Carrier M6 Bolts: 132 in-lbs/15 Nm
Torque Primary Clutch Carrier M8 Nuts: 180 in-lbs/20 Nm
87. Remove the bolts from the secondary and rotate until the belt is at the top of the secondary sheave.
88. Install the clutch housing cover by installing the bolts in the following order.
93. Using a wire cutter, remove the two tabs on the bottom of the pump by the fuel inlet.
94. Install the upgraded fuel filter, orienting it on the pump locating peg.
95. Install the fuel chamber.
96. Place the fuel pump back in the tank. Install the retaining ring and tighten the bolts evenly. Torque Fuel Pump Bolts: 52 in-lbs/5.9 Nm
97. Connect the fuel line and electrical connector. Ensure the locking tab on the fuel connector is fully depressed.
98. Install the seat and rails. Tighten Bolts. Torque Seat Rail Bolts: 180 in-lbs/20 Nm
**Intercooler Installation**
**ACCESSORY NOTES:**
- If an accessory 6-point harness has already been installed, remove the rear brackets and skip the two following steps.
- Minor adjustment of any accessory tire rack may be needed to clear the intercooler.
- If the accessory sport roof has been installed, the seal and seal rail will need to be cleared. A relocation bracket is provided for the mount.
99. Cover the intake ports of the engine, turbo inlet and outlet, and any other exposed internal components.
100. Using a Dremel or similar tool, remove the four cutout areas in the firewall behind the seats. Remove the horizontal section of the firewall directly above the cutout sections.
**NOTE:** The body screws retaining the firewall to the frame may need to be removed for easier cutting.
101. Set the intercooler mounting tab in front of the frame and behind the firewall. If an accessory 6-point seat belt has already been installed, place the intercooler behind the seat belt mounting brackets.
102. Install and tighten the four M8 mounting bolts. Torque Intercooler lower M8 bolts: 144 in-lbs /16.5 Nm
103. Install the upper brace on the intercooler tab. Install and tighten the two M6 screws. Torque Intercooler upper M6 bolts: 106 in-lbs/12 Nm
104. Remove the four thumb screws from the intercooler brace tee clamp. Place the clamp on the upper frame rail and intercooler brace with the thumb screws on the underside.
105. Make sure the intercooler is vertical. Firmly tighten the two lower screws by hand, then the two upper screws.
106. Install the two intercooler fans onto the intercooler shroud with the wiring pointing down. Tighten the M6 x 12mm bolts. Torque Intercooler Fan M6 bolts: 106 in-lbs/12 Nm
107. Feed the Intercooler Fan Harness through the firewall port behind the driver’s seat. The relay, fuse holder, and square terminal should be in the cockpit, and the ground, fan connectors, and bullet terminal in the engine bay/wheel well.
108. Remove the ECM cover and disconnect the ECM from the wiring harness.
109. On the gray connector, remove the gray plug from the corner terminal with a fingernail. **Do NOT use pliers.**
110. Remove the blue wedge block from the face of the connector. Pry out gently with a small screwdriver in the slots to remove.
111. Insert the square terminal into the back of the connector. Ensure the orientation matches the other terminals in the connector. Push gently until the terminal locks into place. Reinstall the wedge block, making sure the new terminal is pushed toward the connector clip.
112. Connect the ring by the fuse holder to the positive terminal on the battery.
113. Mount the relay block with the self tapping sheet metal screw in the ECM bay just below the 60 amp main fuse to the center of the vehicle. Install the relay.
114. Push the extra wire into the engine bay. Route the wire along the main harness past the ground junction in front of the driver’s side rear wheel.
115. Connect the ring terminal to the ground junction.
116. Route the bullet terminal to the accessory connections above and behind the rear wheel. Connect the bullet terminal to the positive terminal. Ensure the terminal is fully protected by the
117. Connect the intercooler fans.
118. Secure the harness with zip ties to avoid contact with moving or hot parts.
**NOTE:** Proper operation of the fans can be checked by disconnecting the intake air temperature sensor, located on the plenum. The fans should turn on and turn off when the sensor is reconnected.
**Intake Plenum, Throttle, Fuel System Installation**
119. Remove the injectors, fuel rail, and MAP sensor from the throttle assembly.
120. Remove the o-ring from the included MAP sensor and install the adapting hose. Install the retaining clamp.
121. Remove the rubber mount from the stock MAP sensor by pulling firmly from the bottom. Pull the mount through the new MAP sensor mounting hole.
122. Install the MAP sensor on the throttle, ensuring the hose is firmly on the throttle port.
123. Apply thread sealant to the threads of the fuel damper and install in the end port of the fuel rail.
Torque Fuel Damper: Finger snug plus 1-2 full rotations.
124. Install the Quick-Connect fitting in the center fuel rail port with an aluminum sealing washer. Tighten.
Torque Fuel Rail Quick Connect Fitting: 180 in-lbs/20 Nm
125. Apply the included lubricant to the high flow injector o-rings. Carefully insert the injectors into the new fuel rail bore using a twisting motion. Ensure an o-ring has not twisted or rolled in the bore. The connectors should be oriented to the top/front of the vehicle.
126. Remove the throttle body injector dust seals from the stock injectors and install them on the high flow injectors. Discard the seals that came installed on the high flow injectors.
127. Cover the inlet port of the throttle body and any other exposed internals.
128. Using a grinder, remove the lugs located on the top of the throttle body as shown below. Make sure to remove any sharp edges.
129. Install the injector/rail/damper assembly on the throttle body. Gently wiggle the injectors back and forth while tightening the screws. Properly installed injectors should be able to rotate with moderate force.
Torque Fuel Rail Screws: 31 in-lbs/3.5 Nm
130. Install the injector wiring adaptors.
131. Install the throttle assembly in the intake boots on the cylinder head. Ensure the throttle is fully seated in the boots and tighten the clamps.
Torque Throttle body boot clamps: 18 in-lbs/2.0 Nm
132. Connect the injector wiring plugs and map sensor plug to the wiring harness.
133. Remove the clamp bracket retaining the fuel line from the transmission case. Reinstall the bolt.
134. Install the upper and lower plenum mounting brackets, and ⅝” NPT drain plug in the plenum. See below for orientation.
Torque M6 Plenum mounting bracket bolts: 106 in-lb/12 Nm
Torque ⅝” NPT Plenum drain plug: finger tight plus one turn
**NOTE:** Do not over-tighten; the brass inserts may strip.
135. Slide 2.5” couplers and clamps fully onto the intake runners.
136. Using the factory bolts and spacers, install the plenum in the three factory mounting locations. Install the fuel line clamping bracket on the front/upper bracket as shown below.
Torque M8 Plenum mounting bracket bolts: 180 in-lb/20 Nm
137. Slide the couplers forward onto the throttle bodies as far as possible. Orient clamps to avoid interference with the fuel line and tighten.
138. Install the temperature sensor in the plenum using the existing hardware and supplied M5x14mm bolt as shown. Torque M5 bolt: 88 in-lb/10 Nm
139. Route fuel line on right/passenger side of intake runners. Clamp the line to the clamping bracket.
140. Push the fuel line onto the fuel rail fitting and lock the retaining tab.
141. Cycle the key to build fuel pressure. Check for fuel leaks around the injectors, fuel rail fitting, and fuel rail damper. Address any leaks present.
**Turbo Oil System Installation**
142. Remove the plug from the end of the oil rifle. A significant amount of torque will be needed to remove it as it is installed with a green threadlocker.
143. Install the nut fully onto the oil return adaptor. Place the o-ring into the groove just below the nut.
144. Remove the factory oil fill cap from the engine case and install on the return adaptor. Thread the adaptor into the case until the o-ring is lightly seated. Back off until the NPT port is oriented toward the oil filter.
145. Install the ⅜” x 5” section of hose as far as possible onto the oil drain tube. The use of dish soap can aid installation. Install two spring clamps over the tube.
146. Install the ½” NPT x ⅜” barbed brass fitting into the hose, then thread it into the adaptor.
Torque Oil drain brass barb fitting: Finger Snug plus 1–2 full turns.
147. Slide the hose down the oil drain onto brass fitting if needed. Place a spring clamp on each end of the tube.
148. Tighten the oil return adapter nut down on the o-ring.
149. Install the ⅜” P-Clamp on the tube. Screw it into the threaded hole by the oil filter and tighten.
Torque Oil Drain P-Clamp Bolt: 106 in-lb/12 Nm
150. Apply thread sealant and install the ½ BSPT x ¼ NPT adaptor in the oil rifle and tighten.
Torque Oil supply adaptor: 30 ft-lbs
151. Install the 1/4 NPT x AN-4 fitting into the adapter. Torque: 16 in-lbs/2 Nm
152. Install the M12 x AN-4 adapter onto the turbo inlet port using a copper sealing washer. Tighten.
**NOTE:** Apply a small amount of clean oil to the supply inlet port to lubricate the turbo.
Torque: M12 x AN-4 240 in-lb/27 Nm
153. Drill a ¼” hole in the frame mount using the supplied template.
154. Install the turbo oil supply line onto AN-4 fittings.
Torque using a ⅜” crow’s foot wrench.
**NOTE:** A wrench may be needed on the previously installed adapters to prevent loosening during install. Do not overtighten AN-4 fittings.
Torque AN-4 Oil fitting: 160 in-lb/18 Nm
155. Install the P-Clamp on the supply line and bolt it to the hole drilled in the frame using an M6x20mm screw and Nylock nut.
**Exhaust System Installation**
156. Install the muffler, using the factory brackets and bolts. Ensure the rubber mounting dampers are in place.
Torque Muffler Bracket M8 Bolt: 180 in-lb/20 Nm
157. Place a muffler gasket and exhaust gasket on each end of the exhaust pipe. Place the exhaust pipe loosely on the muffler studs and slide the V-band clamp over the V-band flanges on the turbo outlet. Start the nuts on the muffler studs.
158. Tighten the V-band flange, then the four nuts on the muffler studs.
**NOTE:** The V-band clamp must be tightened first or the exhaust pipe can be overstressed from misalignment and fail prematurely!
Torque V-band Clamp Nut: 106 in-lb/12 Nm
Torque Muffler Stud Nuts: 26 ft-lb/35 Nm
159. Install M6 clip nuts on the heat shield brackets on the exhaust pipe. Place the heat shield and install M6 bolts into the two clip nuts and the boss on the turbine housing and tighten.
Torque Exhaust Heat Shield Bolts: 106 in-lb/12 Nm
160. Install the stock O2 sensor with the supplied adaptor. Tighten.
Torque Oxygen Sensor: 216 in-lb/25 Nm
**Charge Air System Installation**
161. Install the factory bed frame. Tighten the bolts.
Torque Bed Frame M6 Bolts: 192 in-lb/22 Nm
Torque Bed Frame M12 Nuts: 32ft-lb/48 Nm
162. Install the silicone coupler and stainless charge tubes from the filter box to the turbo inlet. Orient the silicone couplers to avoid the shock. Tighten the clamps.
*NOTE:* Use dish soap or similar to ease installation of the silicone couplers.
Torque T-Bolt Band Clamp: 60 in-lb/6.8 Nm
163. Install the silicone couplers and tube from the plenum to the blow off valve.
164. Drill ¼” holes in the bed frame for the P-clamps.
*NOTE:* The coupler to the plenum will need to be trimmed. Use a straight edge to ensure clean cuts.
165. Install the two P-clamps onto the tube as shown below with provided M6 bolts and lock nuts. Tighten.
166. Cut the holes in the top cowl with the supplied templates using a Dremel or similar tool. Install the cowl but do not fasten it down.
167. Trim one side of the elbow coupler 1.25”. Trimmed side will go on the turbo outlet.
168. Install the intercooler inlet and outlet tubes. Slide the couplers and T-bolt band clamps to the intercooler fully over the tubes. Thread them through the cowl holes, then slide the couplers onto the intercooler.
169. Tighten the T-bolt clamps
Torque T-Bolt Band Clamp: 60 in-lb/6.8 Nm
170. Hold the blow off valve over the silicone couplers in the orientation shown below. Mark the filter box to turbo inlet coupler with a permanent marker where it meets the body of the blow off valve. Cut with a hose cutter.
171. Install the blow off valve as shown. Use a worm clamp on each side of the valve and tighten. Screw the adjuster all the way in then back out 3 full turns.
Torque: 60 in-lb/6.8 Nm
172. Insert the T fitting into the end of the 18” rubber hose from the blow off valve. Route the line to the inside towards the bed frame.
173. Connect the engine vacuum lines from the throttle body to the other end of the T fitting, as shown below.
174. Fasten the line to the bed frame with a cable tie.
175. Run the line from the T fitting to the check valve, and then from the check valve to the purge valve in the cab.
**NOTE:** The orientation of the check valve is important; make sure the rounded side is facing the T fitting.
**Breather Installation**
176. Remove the factory bracket from the breather. Remove the shorter of the two hoses on the bottom of the breather. Remove the long hose from the top of the breather.
177. Install the clip nuts on the breather relocation bracket. Install the breather on the bracket and the bracket onto the bed frame. Torque Breather Bolts: 106 in-lb/12 Nm
178. Cut the long hose that was running from the breather to the intake plenum in the locations below.
179. Route the cut hose from the empty port on the breather to the driver’s side port on the valve cover. Secure with the OEM spring clamps.
180. Insert the plastic barbed fitting into the end of the supplied hose, secure with a spring clamp.
181. Attach the open end of the supplied breather hose to the top breather port and insert the barb fitting into the T fitting on the turbo inlet port coupler.
NOTE: When viewing the coupler from the driver’s side of the vehicle, the T should be around 2 o’clock.
182. Fasten the cowl with the OEM hardware.
183. Fasten the bed with the OEM hardware.
184. Replace the oil drain plug.
185. Add new 15W-40 synthetic oil.
186. Install the coolant drain plug and refill the coolant.
NOTE: Make sure the coolant breather bolt is closed after refilling the coolant.
187. Reinstall any remaining body panels that were removed during assembly.
188. Check the work area for any remaining parts.
**Calibration**
189. Follow the attached instructions for flashing a tune using the PV3.
NOTE: It is recommended to display Intake Air Temperature (F), Manifold Air Pressure (Psi), Engine Speed (RPM), and Throttle Position (%).
**Break In**
190. Start the car and let the engine coolant warm to 140°F before driving.
191. Check for boost leaks around all couplers and sensors. Check for exhaust leaks around gaskets.
192. Address any leaks present.
193. Drive the car conservatively until engine temperature reaches normal operating temperature.
194. Shut off the engine and let it return to room temperature.
195. Repeat the last two steps to complete the engine heat cycles.
**Maintenance Schedule**
Follow Kawasaki®’s periodic maintenance chart as stated in the service manual.
NOTE: Check engine oil before every ride.
| PART # | DESCRIPTION | TORQUE SPEC |
|------------|--------------------------------------------------|------------------------------|
| 500-1485 | Banjo Bolts | 144 in-lbs |
| 500-1493 | Fuel Quick Connect Fitting | 180 in-lbs |
| | Head Bolts (11mm) | First: 22 ft-lbs |
| | | Second: 52 ft-lbs |
| | Head Bolts (6mm) | 106 in-lbs |
| | Cam Chain Slide Bolt | 216 in-lb |
| | Exhaust Port Stud Nuts | 180 in-lbs |
| | Muffler Stud Nuts | 26 ft-lbs |
| LFT-0886 | Intercooler Lower Mounting Bolts | 144 in-lbs |
| LFT-0320 | Intercooler Upper Mounting Bolts | 106 in-lbs |
| | Valve Cover Bolts | 89 in-lbs |
| | Cooling system vent bracket bolts | 78 in-lbs |
| | Cooling system vent plug | |
| | Camshaft Chain Tensioner Cap | 180 in-lbs |
| | Cam Tensioner Mounting Bolt | 89 in-lbs, Blue Threadlocker |
| | Cam Sprocket Bolt | 106 in-lbs, Green Threadlocker|
| | Primary Clutch Cover Bolt | 111 in-lbs, Blue Threadlocker|
| | Clutch Housing Bolt | 80 in-lbs |
| | Primary Clutch Carrier Nut | 180 in-lbs |
| | Primary Clutch Carrier Bolt | 132 in-lbs |
| | Primary Clutch Shaft Nut | 170 ft-lbs |
| | Muffler Bracket Bolts | 180 in-lbs |
| | Seat rail bolts | 180 in-lbs |
| | Fuel Pump Bolts | 52 in-lbs |
| 560-0278 | Fuel Damper | Finger Snug plus 1–2 turns, Thread Sealant |
| | Fuel Rail Screws | 31 in-lbs |
| 500-1477 | T-Bolt Band Clamp | 60 in-lbs |
| 500-1488 | Oil drain brass barb fitting | Finger snug plus 1–2 full turns |
| 500-1489 | Oil Drain P-Clamp Bolt | 106 in-lbs |
| LFT-0320 | Oil Drain Flange Bolts | 106 in-lbs |
| 500-1484 | Oil supply adaptor, ½ BSPT x M12x1.5 | Finger snug plus 1–2 full turns |
| | Oil Supply Bracket M8 Bolt | 180 in-lbs |
| | Oil Supply P-Clamp M6 Bolt | 106 in-lbs |
| | Muffler Bracket M8 Bolt | 180 in-lbs |
| 500-1483 | V-band Clamp Nut | 106 in-lbs |
| | Muffler Stud Nuts | 26 ft-lbs |
| LFT-0320 | Exhaust Heat Shield Bolts | 106 in-lbs |
| LFT-0320 | Intercooler Fan M6 bolts | 106 in-lbs |
| | Oxygen Sensor | 216 in-lbs |
| | Bed Frame M6 Bolts | 192 in-lbs |
| | Bed Frame M12 Nuts | 32 ft-lbs |
| 560-0282 | HR 10 Spark Plug | 115 in-lbs |
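The table above mixes in-lb, ft-lb, and Nm values. As an illustrative cross-check only (not part of the kit instructions), the metric equivalents can be computed with the standard conversion factors 1 in-lb ≈ 0.112985 Nm and 1 ft-lb ≈ 1.35582 Nm; the part names below are taken from the table for example purposes:

```python
# Illustrative sanity check of the torque specs above.
# Conversion factors: 1 in-lb = 0.112985 Nm, 1 ft-lb = 1.35582 Nm.
IN_LB_TO_NM = 0.112985
FT_LB_TO_NM = 1.35582

def in_lb_to_nm(in_lb):
    """Convert inch-pounds to newton-metres."""
    return in_lb * IN_LB_TO_NM

def ft_lb_to_nm(ft_lb):
    """Convert foot-pounds to newton-metres."""
    return ft_lb * FT_LB_TO_NM

# A few specs from the table, printed with their metric equivalents:
for name, in_lb in [("Intercooler Fan M6 bolts", 106),
                    ("Fuel Quick Connect Fitting", 180),
                    ("T-Bolt Band Clamp", 60)]:
    print(f"{name}: {in_lb} in-lb = {in_lb_to_nm(in_lb):.1f} Nm")
```

Running the loop reproduces the Nm figures quoted throughout the steps (12, 20.3, and 6.8 Nm respectively).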
TROUBLESHOOTING:
| SYMPTOM | CAUSE | REMEDY |
|---|---|---|
| Intake temperature too high | • Boost leak<br>• Turbo wastegate actuator hose disconnected<br>• Intercooler clogged | • Check for disconnected charge air tube<br>• Check for disconnected turbo wastegate actuator hose<br>• Unclog intercooler |
| Engine misfires at high throttle/boost | • Injector connection faulty<br>• Fuel damper faulty<br>• Turbo wastegate actuator hose disconnected<br>• Leaking/disconnected MAP sensor hose | • Check for disconnected charge air tube<br>• Check for disconnected turbo wastegate actuator hose<br>• Inspect MAP sensor |
| Engine misfires at mid throttle/boost | • Injector connection faulty<br>• Fuel damper faulty<br>• Leaking/disconnected MAP sensor hose | • Check for disconnected injectors<br>• Inspect MAP sensor |
| Engine overheats | • Low on coolant<br>• Radiator fan malfunctioning<br>• Radiator plugged<br>• Head gasket failure | • Refill coolant<br>• Replace radiator cooling fan<br>• Unclog radiator<br>• Replace head gasket |
| Engine bogs at high RPM | • Clogged fuel filter<br>• Faulty fuel pump<br>• Low fuel pressure | • Replace fuel filter<br>• Replace fuel pump<br>• Repair any fuel leaks |
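For shops that script their diagnostics, the symptom/cause/remedy table above can be expressed as a simple lookup. This is an illustrative sketch only; the symptom keys and remedy strings are transcribed from the table, and nothing here is part of the kit itself:

```python
# Illustrative lookup built from the troubleshooting table above.
# Keys are symptoms (lowercase); values are the remedy checklists.
TROUBLESHOOTING = {
    "intake temperature too high": [
        "Check for disconnected charge air tube",
        "Check for disconnected turbo wastegate actuator hose",
        "Unclog intercooler",
    ],
    "engine overheats": [
        "Refill coolant",
        "Replace radiator cooling fan",
        "Unclog radiator",
        "Replace head gasket",
    ],
    "engine bogs at high rpm": [
        "Replace fuel filter",
        "Replace fuel pump",
        "Repair any fuel leaks",
    ],
}

def remedies(symptom):
    """Return the remedy checklist for a symptom, or an empty list."""
    return TROUBLESHOOTING.get(symptom.lower(), [])

print(remedies("Engine overheats")[0])  # first item to check
```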
**Packaging Parcel List**
A. 560-0322 Performance Package, Turbo Kit, Box A, 2020-up KRX Models
a. 560-0326 Hardware Kit, Turbo Kit, 2020-up KRX Models
i. LFT-0320 Screw, SHC, M6 x 12mm L, 1 mm Pitch QTY 29
ii. 500-1541 Screw, Self Tapping, Sheet Metal, #8 x ½”
iii. 500-1134 Nut, Clip-on, M6, Stainless Steel QTY 9
iv. 500-1486 Washer, Sealing, M12, Aluminum QTY 1
v. 500-1534 Screw, Flanged, Hex Head, M8x 1.25mm x 20mm QTY 5
vi. LFT-0959 Screw, SHCS, M6 x 1.0 x 40mm QTY 2
vii. LFT-0334 Nut, Lock, Flanged, Non Marring, Nylon Insert, Class 8, M6 x 1, Zinc Plated, Steel QTY 8
viii. 500-0272 Washer, Flat, M6, Zinc QTY 4
ix. 50-8331A Plug, Pipe, w/Yellow Sealant Patch, ¾ Taper, ¼-27 NPTF x .188”, Zinc, Steel
x. LFT-1307 Washer, M6x1.5mm, Zinc QTY 6
xi. 500-0499 Screw, SHC, M5x14mm, Stainless Steel
xii. 500-1654 Screw, Hex, M8x12mm, 1mm pitch, Zinc QTY 2
xiii. 50-8124-S Fitting, Hose, Tee, Male, 1.94” x .1875”, White, Nylon
xiv. 500-1271 Screw, SHC, M6 x 20mm, Stainless Steel QTY 3
b. 560-0309 Plenum, Upper, Weldment, 2020-up KRX Models
c. 170-0666 Bracket, Plenum, Lower, 2020-up KRX Models QTY 2
d. 170-0663 Plenum, Intake, Rotomolded, 2020-up KRX Models
e. 550-1044 Muffler, Kit, Power Tune XTO, Race Only, KRX
f. 500-1540 Decal, Turbo, 2020-up KRX Models QTY 2
g. 510-0893 Publication, Instruction Sheet, Turbo Kit Assembly, 2020-up KRX Models
h. 560-0271 Bracket, Breather, Stainless Steel, 2020-up KRX Models
B. 560-0323 Performance Package, Turbo Kit, Box B, 2020-up KRX Models
a. 560-0263 Turbocharger, Assembly, 10psi, 2020-up KRX Models
b. 500-1483 Clamp, V-Band, 3”, Stainless Steel
c. 560-0327 Coupler, Kit, Turbo Kit, 2020-up KRX Models
i. 170-0682 Coupler, 2” ID x 3” L, Black, Silicone QTY 2
ii. 500-1475 Coupler, 90deg, 2” ID, Silicone, Black QTY 2
iii. 170-0683 Coupler, 2.5” ID x 3” L, Black, Silicone QTY 2
iv. 500-1476 Coupler, T-Hose, 2” ID x 1” ID Branch, Silicone, Black QTY 2
v. 170-0656 Hose, Air Filter to Turbo, Silicone, 2020-up KRX Models
d. 560-0272 Hose, Breather, Formed, 2020-up KRX Models
e. 560-0325 Performance Package, Turbo Kit, Electronic, Gasket, Engine Internals, 2020-up KRX Models
i. 551-1699 Gasket, Exhaust Manifold, 2020-up KRX Models QTY 2
ii. 560-0281 Sprocket, Cam, 10 deg offset, 2020-up KRX Models QTY 2
iii. 560-0274 Sensor, MAP, 3 Bar, 2020-up KRX Models
iv. 560-0275 Tube, MAP Sensor, Formed, Black, Epichlorohydrin Polymer, Duro 75, 2020-up KRX Models
v. 560-0273 Harness, Wiring, Intercooler Fan, 2020-up KRX Models
vi. 570-0052 Relay, 12 V, 5 Pin, CM Series
vii. 550-0375 Adapter, Oxygen Sensor, 12mm to 18mm Stainless Steel
viii. 551-1771 Gasket, Muffler, 2020-up KRX Models
ix. 551-1770 Gasket, V-Band, 3”, 2020-up KRX Models
x. 560-0282 Spark Plug, 10mm, HR 10, 2020-up KRX Models QTY 2
xi. 500-1653 Hose, 3/16”ID x 9”L, Buna-N Rubber
xii. 500-1551 Plug, Kit, Brass, 2020-up KRX Models
i. 500-1533 Plug, .230” x .375”, Brass QTY 12
xiii. 50-8124-S Fitting, Hose, Tee, Male, 1.94” x .1875”, White, Nylon
xiv. 560-0317 Hose, 3/8”ID x 18”L, Buna-N Rubber
xv. 500-1542 Check Valve, .1875”, Nylon
xvi. 570-0051 PV3, Calibration, Turbo, 10psi, 2020-up KRX Models
f. 560-0328 Clamp, Kit, Turbo Kit, 2020-up KRX Models
i. 500-1477 Clamp, Hose, Band Style, Lined, 57-65mm ID QTY 13
ii. 500-1478 Clamp, Hose, Band Style, Lined, 83-91mm ID
iii. 500-1479 Clamp, Hose, Worm Drive, Lined, SAE #16 QTY 3
iv. 500-1480 Clamp, P Style, 2” ID QTY 2
v. 560-0318 Clamp, Hose, Band Style, 65-75ID QTY 4
vi. 500-1490 Clamp, P-Style, ¾” ID QTY 2
vii. 500-1553 Clamp, Spring-Band, Constant Tension, 13/16”
viii. 500-0990 Cable, Tie, 11.3” Black QTY 10
ix. 50-8003 Cable, Tie, ¼” x 3”, Black, Nylon QTY 20
x. 500-1555 Clamp, Snap, ¼”, Plastic
g. 560-0329 Tube, Kit, Oil, Turbo, 2020-up KRX Models
i. 160-0235 Adapter, Oil Fill, Aluminum, 2020-up KRX Models
ii. 560-0270 Nut, Oil Fill, Aluminum, 2020-up KRX Models
iii. 500-1489 Fitting, ½” NPT, x ¾” Barb, Brass
iv. 500-1491 Fitting, Barbed, ¼” x 1”, Black, Plastic
v. 500-1487 O-Ring, -126, 1.362”ID x 1.568” OD, Viton
vi. 500-1596 Fitting, 1/2” BSPT x 1/4” NPT, Aluminum
vii. 500-1598 Fitting, M12 x 1.5mm x AN4, Stainless Steel
viii. 500-1597 Fitting, 1/4” NPT x AN4, Zinc Plated, Steel
ix. 500-1599 Washer, Sealing, M12, Copper
x. 500-1629 Hose, High Temperature, 5/8” ID x 5”, Black, Textile Reinforced
xi. 500-1488 O-Ring, -17, .676” ID x .816” OD, Viton
xii. 560-0269 Tube, Oil, Turbo Drain, 2020-up KRX Models
xiii. 160-0236 Tube, Oil, Turbo Supply, 2020-up KRX Models
xiv. 560-0292 Clamp, Spring-Band, Constant Tension, 1½” QTY 2
xv. 500-1556 Clamp, P-Style, Vibration Damping, ¼” ID QTY (1-2)
h. 551-1773 Heat Shield, Exhaust, Turbo, Laser Cut, Stainless Steel, 2020-up KRX Models
i. 560-0324 Performance Package, Turbo Kit, Box C, Engine Internals, Exhaust, Fuel System, Charge Tube, 2020-up KRX Models
i. 560-0321 Intercooler, Assembly, 2020-up KRX Models
1. 170-0658 Intercooler, 8” x 18” x 3.5”, 2020-up KRX Models
2. 560-0312 Shroud, Intercooler, Powdercoated, Denim Black, 2020-up KRX Models
3. 560-0311 Guard, Intercooler, Powdercoated, Denim Black, 2020-up KRX Models
4. 500-1482 Bumper, Adhesive, ¾” OD x ½” H, Polyurethane Rubber QTY 8
5. 560-0320 Bumper, Adhesive, 1⅛” OD x ¾” H, Polyurethane Rubber QTY 6
6. Decal, Caution Hot, 2020-up KRX Models
ii. 920-0157 Piston, Set, Standard, Forged, Gapped, 9.0:1 Compression, 2020-up KRX Models
iii. 170-0702 Valve, Kit, Blow-off, Black, Aluminum
iv. 170-0662 Tube, Intercooler to BOV, Stainless Steel, 2020-up KRX Models
v. 170-0661 Tube, Turbo to Intercooler, Stainless Steel, 2020-up KRX Models
vi. 170-0731 Tube, Air Filter to Turbo, Stainless Steel, 2020-up KRX Models
vii. 170-0732 Tube, BOV to Plenum, Stainless Steel, 2020-up KRX Models
viii. 560-0313 Brace, Intercooler, Powdercoated, Denim Black, 2020-up KRX Models
560-0267 Clamp, Intercooler Brace, 2020-up KRX Models
ix. 560-0310 Bracket, Roof, Powdercoated, Denim Black, 2020-up KRX Models
x. 560-0330 Fuel System, Kit, Turbo Kit, 2020-up KRX Models
1. 560-0276 Fuel Injector, Set, w/ Adapters, 85lb/hr 2020-up KRX Models
2. 560-0277 Fuel Rail, High Flow Injectors, Machined, Aluminum, 2020-up KRX Models
3. 560-0278 Damper, Fuel, ¾” NPT
4. 500-1493 Fitting, M12 x 1.5 x 5/16” Quick Connect
5. 560-0291 Filter, Gas, In-Tank, 2020-up KRX Models
6. 500-1552 Sealant, Thread, Loctite, 567,.2oz
7. 500-1486 Washer, Sealing, M12, Aluminum
8. 510-0870 Decal, S&S® Off Road, 2” x 3”
9. 510-0871 Decal S&S® Off Road, 4” x 6”
10. 510-0895 Publication, Instruction Sheet, Fuel System Kit, 2020-up KRX Models
xi. 560-0316 Fan, Intercooler, Modified, 7.5”, 12 V QTY 2
xii. 560-0279 Gasket, Head, 2020-up KRX Models
xiii. 560-0280 Gasket, Base, 2020-up KRX Models
xiv. 170-0669 Tube, Exhaust, Turbo, Weldment, 2020-up KRX Models
1. Plenum, Intake, Rotomolded, 2020-up KRX Models ............... **170-0663**
2. Bracket, Plenum, Lower, 2020-up KRX Models .................. **170-0666**
3. Plenum, Upper, Weldment, 2020-up KRX Models .............. **560-0309**
4. Coupler, 2.5” ID x 3” Length, Black, Silicone .................. **170-0683**
5. Screw, SHC, M6 x 12mm L, 1mm Pitch .......................... **LFT-0320**
6. Clamp, T-Bolt, Lined, 67–75mm ID ................................. **560-0318**
1. Tube, Intercooler to BOV, Stainless Steel,
2020-up KRX Models ........................................... **170-0662**
2. Tube, BOV to Plenum, Stainless Steel,
2020-up KRX Models ........................................... **170-0732**
3. Coupler, T-Hose, 2.00”ID x 1”ID Branch, Silicone, Black .... **500-1476**
4. Coupler, 2”ID x 3”Length, Black, Silicone. .................. **170-0682**
5. Tube, Turbo to Intercooler, Stainless Steel,
2020-up KRX Models ........................................... **170-0661**
6. Coupler, 90 deg, 2”ID, Silicone, Black ....................... **500-1475**
7. Clamp, P Style, 2” ID ............................................. **500-1480**
8. Washer, Flat, M6, Zinc ........................................... **500-0272**
9. Screw, SHCS, M6 x 1.0 x 40mm ................................. **LFT-0959**
10. Nut, Lock, Flanged, Nonmarring, Nylon Insert, Class 8, M6 x 1,
Zinc Plated, Steel .................................................. **LFT-0334**
11. Clamp, Hose, Band Style, Lined, 57-65mm ID ............... **500-1477**
1. Muffler, Kit, Power Tune XTO, 49 State, KRX, S&S® Turbo ...............550-1039
2. Tube, Exhaust, Weldment, 2020-up KRX Models ..................170-0669
3. Clamp, V-Band, 3”, Stainless Steel ...........................................500-1483
4. Heat Shield, Exhaust, Turbo, Laser Cut, Stainless Steel,
2020-up KRX Models .................................................................551-1773
5. Gasket, Muffler, 2020-up KRX Models .................................551-1771
6. Nut, Clip-On, M6, Stainless Steel .............................................500-1134
7. Screw, SHC, M6 x 12mm L, 1mm Pitch .................................LFT-0320
8. Adapter, Oxygen Sensor, 12mm to 18mm, Stainless Steel ....550-0375
9. Washer, M6 x 1.5mm, Zinc .........................................................LFT-1307
10. Adapter, Oxygen Sensor, 12mm to 18mm, Stainless Steel ..550-0375
1. Tube, Oil, Turbo Drain, 2020-up KRX Models ...............560-0269
2. Adapter, Oil Fill, Aluminum, 2020-up KRX Models ..........160-0235
3. Clamp, P-style, 5/8” ..............................................500-1490
4. Nut, Oil Fill, Aluminum, 2020-up KRX Models .............560-0270
5. Hose, 5/8”ID x 4”L, Black, Textile Reinforced .............560-0290
6. Fitting, 1/2” NPT x 5/8”Barb, Brass ..........................500-1489
7. O-ring, (-126), 1.362”ID x 1.568” OD, Viton® .............500-1487
8. O-ring, (-017), .676”ID x .816” OD, Viton® ................500-1488
9. Clamp, Spring-Band, 15/16”....................................560-0292
1. Hose, Breather, Formed, 2020-up KRX Models ...............560-0272
2. Bracket, Breather, Stainless Steel, 2020-up KRX Models ....560-0271
3. Fitting, Barbed, Black, Plastic.................................500-1491
4. Nut, Clip-On, M6, Stainless Steel............................500-1134
5. Clamp, Spring-Band, Constant Tension, 13/16".............500-1553
1. Fitting, 1/4" NPT x AN4, Zinc Plated, Steel ..................500-1597
2. Fitting, M12 x 1.5mm x AN4, Stainless Steel ...............500-1598
3. Washer, Sealing, M12, Copper .................................500-1599
4. Fitting, 1/2" BSPT x 1/4" NPT, Aluminum ...................500-1596
5. Hose, Oil, Turbo Supply, 2020-up KRX Models .............160-0236A
DISCLAIMER:
S&S parts are designed for high performance applications and are intended for the very experienced rider only. The installation of S&S parts may void or adversely affect your factory warranty. In addition, such installation and use may violate certain federal, state, and local laws, rules, and ordinances, as well as other laws, when used on motor vehicles operated on public highways, especially in states where pollution laws may apply. Always check federal, state, and local laws before modifying your vehicle. It is the sole and exclusive responsibility of the user to determine the suitability of the product for his or her use, and the user shall assume all legal and personal injury risk and liability and all other obligations, duties, and risks associated therewith.
SAFE INSTALLATION AND OPERATION RULES:
Before installing your new S&S parts:
• All torque specifications and assembly procedures are critical. Failure to follow the instructions may result in catastrophic failure of the engine or other components.
• Gasoline is extremely flammable and explosive under certain conditions and toxic when breathed. Do not smoke. Perform installation in a well ventilated area away from open flames or sparks.
• If the vehicle has been running, wait until engine and exhaust pipes have cooled down to avoid getting burned before performing any installation steps.
• Disconnect battery to eliminate potential sparks and inadvertent engagement of starter while working on electrical components.
• Read instructions thoroughly and carefully so all procedures are completely understood before performing any installation steps. Contact S&S with any questions you may have if any steps are unclear or any abnormalities occur during installation or operation of a vehicle with a S&S part on it.
• Consult an appropriate service manual for your vehicle for correct disassembly and reassembly procedures for any parts that need to be removed to facilitate installation.
• Use good judgment when performing installation and operating the vehicle. Good judgment begins with a clear head. Don’t let alcohol, drugs or fatigue impair your judgment. Start installation when you are fresh.
• Be sure all federal, state and local laws are obeyed with the installation. For optimum performance and safety and to minimize potential damage to carb or other components, use all mounting hardware that is provided and follow all installation instructions.
• Exhaust fumes are toxic and poisonous and must not be breathed. Run vehicles in a well ventilated area where fumes can dissipate.
• Some factory hardware and components will be reused. Retain all hardware until installation is complete and proper operation has been verified.
• Foreign debris such as dust and dirt can cause excessive wear and possible failure of engine components. Thoroughly clean the vehicle before beginning installation.
Overview
The included Power Vision 3, (PV3) unit has been preloaded with the Tune File for your turbocharger kit. These instructions will cover the steps to connect the PV3 and flash the correct Tune File into your vehicle’s ECU.
The PV3 has the following features:
• Flashing Tune Files
• Gauges display & Data Logging
• Access to diagnostic data
Controls and connections of the Power Vision are shown in Figure 1.
Installing the PV3
Note- Loading the Tune File will require the vehicle to be keyed on for a few minutes. If the battery is low or weak it is recommended a battery charger be connected during the Tune File loading process.
Note- The PV3 does not need to be mounted to the vehicle.
Caution- If you choose to mount the PV3 make sure the PV3 and included diagnostic cable will not interfere with the operation and/or steering of the vehicle.
Connecting the PV3
Attach the diagnostic cable to the diagnostic port on the PV3 and to the 6 pin diagnostic port on the vehicle. The vehicle diagnostic port can be found inside the cab, below the dash near the gas pedal.
Getting Started
1. Connect the PV3 to the vehicle.
2. Once connected, turn the key on and wait a moment for the PV3 to power up and display the Getting Started screen.
a. Click the Enter button to initiate the ECU read process.
3. Click the Enter button to Continue Read ECU. Note that this may take up to 15 minutes.
4. The PV3 will indicate READ SUCCESSFUL when the ECU read has been completed. Press the Return button.
5. The PV3 will now show the Ready To Flash screen. Click the Enter button.
6. The next screen to appear is the Flash Tune Screen. All S&S Tune files are denoted by a part number followed by a tune file description. Two tunes should be present: one will be the recently read stock file, and the other will be the applicable S&S tune for your vehicle.
7. On the PV3 unit, use the up/down arrows to highlight the correct tune file PN for your application and then hit the Enter button.
8. The PV3 will now ask to pair with your vehicle. Pairing is required to flash the tune, so press the Enter button to pair.
9. Once the tune file is selected, the TUNE INFO screen will appear. This screen gives the full description of the tune file. Ensure the description matches the configuration of your vehicle, then press the Enter button to start the ECU flash.
10. The PV3 will ask if you want to flash with the selected tune file. Press the Enter button to flash.
11. The PV3 will go through the tune writing process and then display the Flash Complete screen.
At this point the tune file has successfully been flashed to the vehicle’s ECU. Turn the ignition key off for at least ten seconds to complete the process.
Accessory Functions in Power Vision
The following steps cover the accessory menus and options available in your Power Vision.
Viewing the Vehicle Information-
This menu allows you to view the device status (paired/not paired), VIN #, Model ID, ECU serial number, tune compat, and checksum compat.
Select Vehicle Tools>Vehicle Information
Viewing Diagnostic Codes-
This menu allows you to read and clear diagnostic trouble codes.
1. To read codes, select Vehicle Tools>Diagnostics>Active Codes
2. To clear codes, select Vehicle Tools>Diagnostics>Clear Codes
Reading the ECU-
This menu allows you to read the ECU. This process takes about fifteen minutes.
Select Vehicle Tools>Read ECU
Restoring the ECU-
This menu allows you to restore the ECU. Use Restore ECU if the device does not complete the flash or if your vehicle will not start.
Select Vehicle Tools>Restore ECU
Configuring Gauges-
This menu allows you to configure up to four different gauge screens. Each gauge screen has four configurable channels.
1. From the Main Menu, select Device Tools>Configure Gauges
2. Select a gauge screen and press Enter
3. Select a channel and press Enter
4. Select a channel from the list and press Enter
5. Select the precision or units for that channel and press Enter
6. Continue setting up the remaining channels as desired
7. Continue configuring the remaining gauge screens as desired
Deleting Data Logs
This menu allows you to delete data logs.
1. Select Vehicle Tools>Delete Data Log
2. Select a specific log to delete or select *Delete All Logs*
Viewing Device Information
This menu allows you to view the device firmware version, serial number, and stock code.
Select Device Tools>Device Information
Logging Data
1. Press the Log button to begin logging. The Power Vision screen will display a bright red banner across the top while logging.
2. Press the Log button again to stop logging.
3. Use the Power Core software to view log files
Reformatting the Disk-
This menu allows you to reformat the disk and erase all data.
Warning: Reformatting will erase all of the preloaded calibrations on your device. DO NOT perform this function unless advised to do so by S&S Cycle technical support.
Select Device Tools>Reformat Disk
Changing the Settings
This menu allows you to rotate the screen, changing the orientation of the Power Vision device, and to adjust the screen brightness.
Select **Device Tools>Settings>Rotate Screen** to flip the screen.
Select **Device Tools>Settings>Brightness** to change the screen brightness.
Updating the Device-
This menu allows you to update the device with the latest firmware.
1. Go to www.dynojet.com/PowerVision.
2. From the top navigation menu, select Support>Downloads.
3. Select Power Vision 3.
4. Download the Power Vision 3 Firmware.
5. Save the file to your device.
6. Select Device Tools>Update Device.
*Warning: Your Power Vision was preloaded with the latest firmware at the time of programming. Do not perform a device update unless advised to do so by S&S Cycle Technical Support.* |
Using Participatory Mapping and a Participatory Geographic Information System in pastoral land use investigation: The impacts of rangeland policy in Botswana
Basupi, L V., Quinn, C H., Dougill, A J
June 2016
No. 97
SRI PAPERS
SRI Papers (Online) ISSN 1753-1330
First published in 2016 by the Sustainability Research Institute (SRI)
Sustainability Research Institute (SRI), School of Earth and Environment,
The University of Leeds, Leeds, LS2 9JT, United Kingdom
Tel: +44 (0)113 3436461
Fax: +44 (0)113 3436716
Email: firstname.lastname@example.org
Web-site: http://www.see.leeds.ac.uk/sri
About the Sustainability Research Institute
The Sustainability Research Institute conducts internationally recognised, academically excellent and problem-oriented interdisciplinary research and teaching on environmental, social and economic aspects of sustainability. Our specialisms include: Business and organisations for sustainable societies; Economics and policy for sustainability; Environmental change and sustainable development; Social and political dimensions of sustainability.
Disclaimer
The opinions presented are those of the author(s) and should not be regarded as the views of SRI or The University of Leeds.
Table of Contents
ABOUT THE AUTHORS
1. INTRODUCTION
2. MATERIALS AND METHODS
2.1. Study area
2.2. Focus group discussions
2.3. Participatory Mapping and PGIS
2.4. Data analysis
3. RESULTS
3.1. Grazing zones before the land use transformation
3.2. Traditional management institutions and access to pasture resources
3.3. Indigenous grazing system and traditional patterns of seasonal mobility
3.4. Spatial comparisons and the impacts of grazing policies
3.5. Access to water resources
3.6. Current land use
4. DISCUSSION
4.1. Indigenous knowledge, rangeland privatisation and spatial mobility
4.2. Participatory mapping, PGIS and government planning
5. CONCLUSION
6. ACKNOWLEDGEMENTS
7. REFERENCES
Abstract
Since the 1980s, the spatial extent of communal grazing lands in Botswana has been diminishing due to rangeland privatisation and fencing linked to animal health policies. Spatial comparisons of pastoral land use transformations are particularly important where accessibility to grazing and water resources remains at the core of sustainable pastoralism policies. Moreover, achieving success in pastoral development research requires a sound understanding of traditional pastoralists’ information systems, including the nature of pastoralists’ indigenous spatial knowledge. This study explores indigenous spatial knowledge through participatory mapping and PGIS to understand and analyse pastoralists’ grazing patterns, spatial mobility and the impacts of subdivision and privatisation policies in Botswana’s Ngami rangelands. The study used focus group discussions, historical analysis through key informant interviews, policy content analysis and participatory mapping exercises, along with community-guided transect walks. The maps produced provide insights into traditional pastoralists’ tenures, patterns of land use and the impacts of rangeland policy on traditional livestock spatial mobility and access to grazing lands. Privatisation and rangeland enclosures have resulted in restricted movement of livestock and overstocking of floodplains and riparian rangelands, with some natural water pans that were critical for wet season grazing now inaccessible to local communities. We conclude that the integration of herders’ spatial knowledge can foster better articulation and understanding of pastoralists’ tenures, which is often lacking in communal land administration systems. Such integration of methods could usefully contribute to sustainable pastoral land management policy toolkits in semi-arid rangeland environments, enabling decision making for Sustainable Land Management.
Keywords: Communal grazing Lands; Indigenous knowledge; Spatial mobility; Privatisation; Sustainable Land Management; Okavango Delta
Submission date 16-03-2016; Publication date 16-06-2016
ABOUT THE AUTHORS
**Basupi, L V.**, is a PhD candidate at the Sustainability Research Institute, University of Leeds, funded by the Government of Botswana and Botswana International University of Science and Technology. He holds an MSc in Environmental Science from the University of Botswana. Before joining SRI in 2014 for his PhD, he was an employee of the Government of Botswana in the Department of Lands for 5 years. His doctoral research involves investigating impacts of subdivisions of communal lands and pastoral system shifts on pastoral land use and livelihoods systems south of the Okavango Delta in Ngamiland District, Botswana. The research combines qualitative participatory methods with quantitative spatial analysis using a *Participatory Geographic Information System* (PGIS) to investigate the local dynamics of land use and pastoral systems.
**Quinn, C H.**, is an Associate Professor in the Sustainability Research Institute, University of Leeds. She is an environmental social scientist with over 15 years of experience working on interdisciplinary projects in Africa and the UK. Her research interests lie at the interface between social and agricultural dimensions of environmental change and sustainability. Specifically, her focus is on the livelihoods, vulnerability and adaptive capacity of farmers, and their relationships with governance, ecosystem services and landscapes, and supply chains.
**Dougill, A J.**, is Professor of Environmental Sustainability and Dean of the Faculty of Environment at the University of Leeds, UK. He is a dryland environmental change researcher who has developed research approaches that integrate a range of disciplines including soil science, ecology, development studies and environmental social sciences. He has expertise in leading the design and implementation of interdisciplinary ‘problem-based’ research projects focused on sustainability issues at range of scales. His work has developed innovative research methodologies for using scientific approaches together with local participation to ensure locally relevant research outputs across dryland Africa.
1. INTRODUCTION
Policies, laws and regulations that govern communal grazing lands have important implications for the livelihoods of pastoral communities (Benjaminsen et al., 2009, Rohde et al., 2006, Chanda et al., 2003). In sub-Saharan Africa, the consequences are particularly significant (Galaty, 2013, Tache, 2013, Mwangi, 2007, Peters, 1994) as many countries have undergone or are undergoing rapid tenure transformations (Ho, 2014, Toulmin, 2009, Lebert and Rohde, 2007). Changes in statutory land tenure systems, such as privatisation, have disrupted pastoralists’ capacity to exercise customary land rights through traditional mobility strategies for coping with eventualities such as drought and disease outbreaks (Kaye-Zwiebel and King, 2014, Lengoiboni et al., 2010).
Pastoralism in arid or semi-arid lands is characterised by substantial spatial heterogeneity in land use, resource access, management regimes and the ways in which pastoralists respond to environmental constraints (Tsegaye et al., 2013). The management system must be responsive to variability and uncertainty. The survival of herds depends on the pastoralists’ ability to move to better areas with remaining fodder availability (Vetter, 2005). Therefore, extensive spatial scales of exploitation become a prerequisite for a successful pastoral production system (Moritz et al., 2013, Notenbaert et al., 2012). As an adaptive strategy, mobility allows pastoralists to guard against temporally variable environmental conditions (Ellis, 1995) and also to access key resources (water and fodder) that are heterogeneously distributed (Kaye-Zwiebel and King, 2014). Pastoral systems are also characterised by a high dependency on local knowledge. The spatial knowledge systems held by herders help them determine what the temporal and spatial distribution of resources might be in any given year and are
central to sustainable pastoral herd mobility (Oba, 2013). The rationale for herd mobility is reinforced by the recognition that dryland systems are non-equilibrium in nature and that resource sustainability is largely a function of spatial and temporal variability in rainfall and/or fire (Dougill et al., 2016, Kakinuma et al., 2014, Dougill et al., 1999).
While pastoral mobility is at the core of many livelihoods in African rangelands, traditional patterns of mobility are increasingly under threat (Oba, 2013). Since the 1970s, development interventions and agricultural policy changes have interfered with indigenous rangeland management institutions, notably the alienation of valuable grazing and water resources, curtailment of mobility, sedentarization of pastoralists, establishment of artificial water points such as boreholes and the imposition of formal administration institutions (Ho, 2014, Homann et al., 2008, Cleaver and Donovan, 1995). The push towards subdivisions and privatisation continue to undermine the nature in which pastoralists’ grazing activities are organised and spatially distributed in communal lands.
In Botswana, the significant policy arrangements that have impacted communal rangelands are the Tribal Grazing Land Policy (TGLP) of 1975 (Magole, 2009, White, 1992, Childers, 1981) and the National Policy on Agricultural Development (NPAD) of 1991 (RoB, 1991). Largely influenced by the tragedy of the commons thesis (Hardin, 1968), both policies viewed traditional pastoral systems as destructive and hence responsible for land degradation and low productivity (Rohde et al., 2006, Cullies and Watson, 2005). The assumption was that the effect of unregulated communal grazing coupled with the perceived increases in livestock numbers was responsible for rangeland degradation and the consequences would, over time, become severe. Livestock needed to be regulated in line with ecological carrying capacity and the only
way this was to be achieved was through privatisation since it was assumed that communal land tenure arrangements fail to regulate pastoralists’ access to resources (APRU, 1976, RoB, 1975). In Ngamiland, fences were introduced from the late 1980s following the first phase of TGLP ranches allocated in 1981. Today, pastoralists find themselves surrounded by private ranches and disease control fences which bisect rangelands and separate communal pastoralists from critical grazing resources.
To date, few studies have integrated pastoralists’ spatial knowledge, spatial comparisons, participatory mapping approaches and PGIS to analyse pastoral management systems and the impacts of the transformations described above. Studies have emphasised the overarching need to generate spatial environmental knowledge regarding pastoralists’ tenures and land use in order to develop the capacity of local communities and help governments reconcile pastoral tenure conflicts and manage resources in dryland areas (Bennett et al., 2013, Lengoiboni et al., 2010, Turner et al., 2014). This study draws on participatory research methods and geospatial technologies to explore local indigenous spatial knowledge in understanding traditional pastoralists’ spatial mobility and the impacts of subdivision and privatisation policies in Botswana’s Ngamiland district. The study provides important spatial information based on pastoralists’ knowledge that could potentially be used to inform planning. Participatory mapping and Participatory Geographic Information System (PGIS) approaches emphasise the involvement of local communities in producing distinctive spatial knowledge of their communities (Smith et al., 2012, Dunn, 2007).
The aim of this study is to explore indigenous spatial knowledge through participatory mapping to understand and analyse pastoralists’ grazing spaces and patterns of spatial mobility prior to the 1975 rangeland policy and after policy intervention. The
study objectives are to: (1) investigate the spatial extent of communal grazing, traditional patterns of transhumance and regulatory mechanisms to access grazing lands before the land tenure transformation to its current situation in Ngamiland District, Botswana; and (2) determine the current land use patterns and spatial impacts of rangeland policies on access to grazing and water resources as per the pastoralists’ spatial knowledge.
2. MATERIALS AND METHODS
Participatory approaches were used to collect primary data in the seven study villages between April and August 2015. The criteria for selecting the study sites were based on proximity to the ranches and/or veterinary cordon fences, cattle numbers and distance from the ranches, so as to determine the impact along a gradient. The sites were categorised as follows, depending on their locations: Toteng/Sehithwa/Bodibeng/Bothatogo (located adjacent to the ranches and Lake Ngami: Lake villages), Kareng (Western sandveld village) and Semboyo/Makakung (Northern sandveld or Setata fence villages) (see Figure 1). Through village leadership meetings, we explained the participatory research methods to be used in the data collection exercise and how the findings would be shared with the community and other relevant authorities.
2.1. Study area
The study area is located on the southern fringe of the Okavango Delta (Figure 1). Ngamiland was chosen because of the number of ranches (over 180) demarcated in
the district (both through the Tribal Grazing Land Policy (TGLP) of 1975 and the National Policy on Agricultural Development (NPAD) of 1991), which makes it relevant to the problem being investigated. In addition, because the Okavango Delta hosts a large diversity of natural resources, including wildlife, diverse vegetation and water resources, land fragmentation through veterinary cordon fences and protection areas that separate wildlife and livestock is prominent. The area is dominated by open low shrub and tree savannahs, vast sand veld, alluvium (along the rivers) and limited hard veld (Burgess, 2004, BRIMP, 2002). The climate is sub-tropical (semi-arid) with distinct hot, wet summers and cold, dry winters. Recorded average rainfall ranges between 450 and 550 mm (DMS, 2013). The distribution of rainfall over space and time is highly variable and is the most important determinant of grazing distribution (DoL, 2009). Field data collection was conducted around Lake Ngami and areas south of the Setata veterinary cordon fence, where the primary livelihood activity is subsistence pastoralism. Selection and use of natural resources, as well as disease pandemics (both human and livestock), have influenced the settlement and migration patterns (including the configuration of kinship networks) of different ethnic groups along the Okavango Delta (Mbaiwa et al., 2008). Settlements have been largely confined to the margins of the permanent swamps. The sandveld area known as Hainaveld, where the privatised ranches have been demarcated, is located to the south of Lake Ngami.
Figure 1: Ngamiland study area, its land uses and study sites Source: Authors
2.2. Focus group discussions
Focus group discussions (Hay, 2010) were conducted in each study village as follows: Semboyo \((n = 9)\), Makakung \((n = 12)\), Bothatogo \((n = 10)\), Bodibeng \((n = 8)\), Toteng \((n = 9)\), Sehithwa \((n = 8)\), Kareng \((n = 6)\). Focus groups targeted different stakeholders and groups in the community, especially pastoralists (cattle herders) with experience in communal areas, members of the communal farmers’ associations and farmers’ committees. Two additional focus groups targeted only women (a mix of female agro-pastoralists selected from the lake villages, \(n = 14\)) and youth (youth groups engaged in pastoral farming and those active in community projects, selected across the study villages, \(n = 14\)) to incorporate divergent views and to avoid situations in which influential male members of a group dictate the mapping and discussion process. Farmers’ committees, village leaderships and village development committees were used to solicit names of people who could participate in the focus groups, since they knew the people and were always available and willing to help. They were first briefed on the specifications for the participants required.
Discussions were structured around a set of questions on traditional mechanisms controlling access to communal lands, institutional forces governing patterns of spatial mobility, major changes in land tenure and pastoral land use arrangements since the early 1980s when fences were introduced, problems experienced in the communal areas, and perspectives on current land tenure and land use. From these discussions, volunteers were identified who guided the transect walks and provided invaluable knowledge in the naming of places and landscape features. A total of 7 transect walks were carried out, and the number of volunteers was as follows: Semboyo \((n = 4)\), Makakung \((n = 6)\), Bodibeng \((n = 2)\), Bothatogo \((n = 6)\), Toteng \((n = 3)\), Sehithwa \((n = 2)\), Kareng \((n = 4)\).
All interviews and discussions were conducted in the Setswana language and tape-recorded.
2.3. Participatory Mapping and PGIS
Using a cognitive mapping process (Chan et al., 2014), we utilised sketch maps drawn by farmers during the focus groups to determine the grazing areas, spatial extent and patterns of seasonal livestock mobility before and after the fences. Participatory mapping can form an important aspect of indigenous spatial knowledge generation (Chapin et al., 2005, Neitschman, 1995) since it allows resource users to convey not only the positions of activities but also the background details concerning the locations and drivers of land use activities (Levine and Feinholz, 2015). The process involves using maps as spatial tools to acquire indigenous knowledge and portraying it spatially through the use of GIS (Dunn, 2007, Talen, 2000). Indigenous spatial knowledge is the unique knowledge held by indigenous communities, acquired through practical experience and developed around specific geographic areas (McCall and Dunn, 2012). Pastoralists’ maps can be fitted into the government cadastral classification to improve awareness of pastoralists’ customary tenures and thus protect indigenous grazing land patterns and transhumance corridors.
Participants were provided with two printed land cover base maps (Figure 2) at a spatial scale of 1:250,000. These maps were produced using data obtained from the Department of Surveys and Mapping in the form of processed Landsat 8 imagery spanning 2013 (dry season; June and August) and 2014 (wet season; December and February). The classification was achieved using the ArcGIS cluster unsupervised classification, whereby pixels are grouped by their reflectance properties. Classification accuracy was improved by combining summer and winter data rather than analysing a single season. The map recorded the following land cover categories: Dense Savanna/Forest, Open Low Shrubland, Cultivated Rainfed Crops, Swamp Vegetation (Bare and Low Herbaceous), Natural Bare Ground, Natural Waterbodies, and Pans. To validate the land cover map, ground truthing was carried out over two weeks of extensive field survey during June 2016 (dry season). The field survey covered most of the accessible areas and landmark features such as natural water bodies or pans, rivers, plains and gravel roads used by pastoral communities in the study area. A Global Positioning System (GPS) receiver was used to record the coordinates of all features visited. Local volunteers assisted in naming landscape features: rivers, roads, pans and plains. The aim was to produce a base map to aid the participatory mapping process.
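The cluster-based classification described above groups pixels by reflectance. As an illustration only (not the ArcGIS implementation), the idea can be sketched as a plain k-means clustering in Python with NumPy; the three-band reflectance values and the two surface types below are hypothetical stand-ins for Landsat 8 pixels.

```python
import numpy as np

def kmeans_classify(pixels, k, iters=50, seed=0):
    """Group pixels into k spectral clusters (plain Lloyd's k-means)."""
    rng = np.random.default_rng(seed)
    # Initialise centres as k randomly chosen pixels
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each pixel to the nearest centre in reflectance space
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

# Toy "scene": two spectrally distinct surface types (values are hypothetical)
water = np.random.default_rng(1).normal([0.05, 0.04, 0.02], 0.01, (100, 3))
shrub = np.random.default_rng(2).normal([0.20, 0.25, 0.30], 0.02, (100, 3))
labels, centres = kmeans_classify(np.vstack([water, shrub]), k=2)
```

With well-separated spectral signatures like these, the two surface types end up in separate clusters; the analyst then assigns land cover names to the resulting clusters, as in the categories listed above.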
District land use data were obtained from various government departments: the Department of Lands, the Ministry of Agriculture, the Department of Tourism and the Tawana Land Board. Each department had a map showing its areas of interest and operation. For example, the Tawana Land Board had general land uses, while the Ministry of Agriculture had a more detailed map of agricultural land uses: cattle crushes, livestock boreholes and commercial ranches. These maps were compared, and only those that provided the greatest detail of land uses were used. The land cover map was geo-referenced and then overlaid with land use data. This allowed land use features such as roads, settlements and boreholes to appear on the land cover map so that participants could identify their grazing spaces around these features. The principal land features on the map that farmers could identify were the Okavango Delta, swamp areas, Lake Ngami, roads, rivers, pans, pastoralists’ settlements and fences. Borehole data obtained from the Tawana Land Board were also used to help focus group participants in identifying grazing lands and cattle posts; borehole numbers were shown on the map, and attribute data about the boreholes, such as the names of owners, were printed on a separate page.
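Geo-referencing a scanned map against known control points, as described above, amounts to fitting a coordinate transform. A minimal sketch, assuming a simple affine model fitted by least squares; the pixel and projected ground coordinates below are hypothetical (GIS packages such as ArcGIS also offer higher-order transforms):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping scanned-map (pixel) coordinates
    to ground coordinates, fitted from paired control points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    # Design matrix rows [x, y, 1]; solve for the six affine parameters
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params  # shape (3, 2): one column of coefficients per output axis

def apply_affine(params, pts):
    """Transform pixel points into ground coordinates."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# Hypothetical control points: sketch-map pixels vs projected ground coordinates
src = [[10, 10], [200, 15], [20, 180], [190, 170]]
dst = [[500100, 7799900], [502000, 7799850], [500200, 7798200], [501900, 7798300]]
params = fit_affine(src, dst)
```

Here the ground coordinates happen to follow X = 500000 + 10x and Y = 7800000 - 10y (a 10 m pixel with image rows running south), so the fit is exact; with real, noisy control points, the least-squares residuals give an indication of geo-referencing quality.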
Mapping sessions were conducted with each focus group. At the beginning, participants were asked to identify their settlements, prominent landscape features and to locate their grazing areas or cattle posts. Secondly, pastoralists were asked to delineate their historical pasture boundaries before the current fences, identifying them according to seasons. Based on their practical knowledge, participants were then asked to describe areas identified as grazing areas in terms of resources and access mechanisms. On a separate map showing the fences and ranches, participants were asked to identify their contemporary grazing spaces, including livestock movement patterns. The placement of a boundary or migratory movement patterns was achieved through consensus among group members. The degree of understanding of the map varied from one focus group to another. While some chose to depict their grazing areas as polygons, others chose to draw lines showing their migration to particular grazing areas. In order to validate features on participatory maps with features on the ground, community guided GPS transect walks were conducted with volunteers from each mapping group.
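Validating digitised participatory features against the community-guided GPS transect walks mentioned above reduces, in projected coordinates, to checking whether a mapped feature lies within some tolerance of the recorded track. A minimal sketch; the coordinates and the 50 m tolerance are hypothetical, for illustration only:

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (planar coordinates)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within_tolerance(feature, track, tol):
    """True if a digitised feature lies within tol of any GPS track segment."""
    return any(point_segment_distance(feature, track[i], track[i + 1]) <= tol
               for i in range(len(track) - 1))

# Hypothetical projected coordinates (metres): a GPS walk and two mapped features
track = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0)]
near = within_tolerance((50.0, 30.0), track, tol=50.0)   # close to the first leg
far = within_tolerance((400.0, 400.0), track, tol=50.0)  # far from the walk
```

Features failing such a check could then be revisited with the mapping group, mirroring the consistency checks described in the text.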
Results from the focus group discussions and participatory maps were checked for consistency by running a series of key informant interviews as well as by visiting cattle posts and conflict-prone zones. The purpose of the key informant interviews was to collect information from a wide range of people with first-hand knowledge and experience of pastoral systems. The selection of key informants was based on purposive/judgemental sampling, which is the deliberate choice of an informant due to the qualities the informant possesses (Tongco, 2007). Members of different committees (farmers’ committees and village development committees) and pastoralists in cattle posts were consulted to provide an initial list of potential respondents. Subsequent informants were identified through a snowball sampling technique (Speelman et al., 2014, Denzin and Lincoln, 2000). Participants were asked if they knew of others who met the selection criteria and could potentially participate in the interviews. A total of 26 informants were interviewed across the study area.
Figure 2: Landcover base Map
Source: Authors
Data Source: Department of Surveys and Mapping, Tawana Land Board, Landsat 8 satellite imagery
2.4. Data analysis
Maps made by local pastoralists were scanned and transformed into digital versions using ArcGIS software. In order to align the coordinates, locations and other topographic features, the participatory sketch maps were geo-referenced against the base maps and district land use maps. These were then digitised into layers of digital polylines or polygons delineating the full extent of the boundaries identified by participants, or pastoralists’ impressions of livestock movement patterns before and after the barrier fences. Maps from the different villages were overlaid to produce a consolidated map. The aim of the mapping exercise was to depict the landscape-scale picture of the pastoral production system in terms of time and space as per the herders’ spatial knowledge. These were then visualised in ArcGIS as PGIS maps. Land use pressure zones were identified using proximity and geographic distribution analysis through spatial statistics: the mean centre and standard distance in ArcGIS (Scott and Janikas, 2010). First, we identified the mean centre (the centre of concentration) of the land use features (cattle posts and arable lands). Standard distance was then used to measure the degree to which these features are concentrated or dispersed around the mean centre, giving a spatial picture of the concentration of land use pressures.
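The two spatial statistics used above have simple closed forms: the mean centre is the average of the feature coordinates, and the standard distance is the root-mean-square distance of features from that centre. A minimal Python sketch, with hypothetical cattle-post coordinates for illustration:

```python
import math

def mean_centre(points):
    """Mean centre: the average x and average y of the feature coordinates."""
    xs, ys = zip(*points)
    return sum(xs) / len(points), sum(ys) / len(points)

def standard_distance(points):
    """Standard distance: RMS distance of features from the mean centre,
    a single radius summarising how concentrated or dispersed they are."""
    cx, cy = mean_centre(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points)
                     / len(points))

# Hypothetical cattle-post coordinates (km east / km north of a local datum)
cattle_posts = [(2.0, 1.0), (4.0, 3.0), (3.0, 2.0), (3.0, 2.0)]
cx, cy = mean_centre(cattle_posts)
radius = standard_distance(cattle_posts)
```

A small standard distance relative to the study area indicates features crowded around the mean centre, i.e. a land use pressure zone; a large one indicates dispersion.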
Qualitative data from focus group discussions and key informant interviews were transcribed and analysed using content analysis in order to identify the main themes and issues emerging from the discussions. The content analysis involved the following steps: (i) identification of major themes emanating from the discussions; (ii) assigning codes to the major themes; (iii) classification of responses under the identified themes; and (iv) writing the research narratives and discussions (Adam et al., 2015).
Policy content analysis of the two policies (TGLP and NPAD) was undertaken to uncover how these policies came about, the intended and unintended benefits and/or impacts, and how the policies are understood by the local pastoral communities. This was analysed along with data from in-depth interviews with key informants.
3. RESULTS
This section presents the results in line with the study objectives. It begins by examining the traditional pastoral systems and grazing zones before the land tenure transformations. Attention is also given to the historical and institutional forces governing patterns of spatial mobility, resource access and use. This formed the basis from which the spatial impacts of the transformations were studied and spatial comparisons of the past and present were made.
3.1. Grazing zones before the land use transformation
In the extensive indigenous grazing lands before the current land tenure and land use transformations, pastoralists identified three distinct grazing zones (Figure 3) according to characteristics of grazing resources, indigenous management systems and seasonal livestock movement patterns. These zones are consistent with the indigenous management system of rotating livestock between main permanent water sources and remote grazing lands in the sand veld areas (Magole, 2009). The identified grazing zones are as follows: (1) Village grazing areas: these formed a radius of about 15 – 20 km around the main settlements. These grazing lands were
reserved for milk cows, smaller calves and some small livestock. The village grazing areas were the most important communal grazing land for those families with small herds of cattle. From these areas they derived not only grazing but also veld products such as thatching grass and firewood, and water for their livestock; (2) Dry season grazing areas: plains around perennial water sources, swamps, lagoons, lakes and river areas. Before the introduction of fencing and rangeland enclosures, the Lake Ngami flood plains and surrounding riverine vegetation had served as dry season grazing reserves. According to information gathered through key informants and through focus group discussions, each herder was expected by the village chief and/or community to take his/her livestock out of these areas immediately after the first rains, when water had collected in the sand veld pans; (3) Wet season grazing areas: central to these rangelands were the traditional natural water ponds and pans spread along the vast sands of the dune system in the sand veld areas. These water sources were surrounded by wet season grazing areas.
Figure 3: Combined pastoralists’ participatory map, showing grazing zones and historical migration patterns before major policy interventions took place.
3.2. Traditional management institutions and access to pasture resources
Information gathered through in-depth interviews and oral narratives reveals that before the rangelands policy interventions, pastoralists' movements were prescribed and regulated through traditional institutional arrangements. Traditional village chiefs were the custodians of the land and determined rules of access, including regulating seasonal livestock movements. Grazing areas were established around seasonal encampments known as cattle posts. Places that contained dry season grazing resources and seasonal water sources were considered critical to the pastoral production system. Clans or kin networks controlled different pans and wells at their cattle posts, including the surrounding rangelands. Each of these rangelands was delimited by physiographic features and defined genealogically. Rights to a cattle post could be inherited or claimed by virtue of customary use. Access by non-clan members was controlled through reciprocal access agreements. Pastoralists moved livestock around the three grazing zones in accordance with orally defined demarcations, rules and regulations. Word of mouth was enough, for example, to restrict access to the delta swamps, located to the north of Lake Ngami, whose forage was reserved for periods of severe forage scarcity or as a last resort during drought.
Moreover, risks imposed by environmental conditions such as livestock diseases, livestock predation, recurrent dry spells and occasional flooding of the Okavango delta demanded flexibility in pastoralists' decision-making. Flexible spatial mobility ensured that pastoralists were able to mitigate risks and avert disasters. Pastoralists asserted that before the privatisation policies, when land was still available, they engaged in an adaptive system of livestock herding and management which involved guiding and
controlling livestock movement, including herd splitting: dividing livestock into separate herds depending on their age, sex or type for increased niche specialisation. As the chairperson of the Kareng Village Farmers' association argued during key informant interviews, '…herd splitting resulted in increased livestock niche specialisation, in reduced competition among livestock for the same vegetation species,…in improved livestock watering practices and in the distribution of grazing pressure as each animal was taken to the pasture land which best suits its characteristics…'
From the early 1960s, some households gained more exclusive use of rangelands by investing in drilling and equipping boreholes. The advent of borehole technology meant that resourceful farmers were able to open up new lands for grazing. Pastoralists sought permission from their chief to establish cattle posts around their boreholes, which enabled them to exert influence over the nearby pasture. Non-borehole owners continued to conduct their seasonal herd movements between perennial water sources including Lake Ngami, riverine floodplains and pans in the sand veld. Some borehole owners also continued the practice of seasonal mobility, mainly to alleviate grazing pressures around their boreholes and to allow underground water to recharge during the wet season. Respondents argued that borehole owners were fortunate because the privatisation policies, especially the NPAD, later gave them preferential treatment in the ranch allocation process.
3.3. Indigenous grazing system and traditional patterns of seasonal mobility
Pastoralists around Lake Ngami reported during the focus groups that immediately after the first rains, herds moved slowly away from Lake Ngami and surrounding riverine rangelands back to the south. The first rains fall in September/October and
livestock had to move to the south to take advantage of renewed pastures and water in the sand veld pans. The move was an attempt to make optimal use of the rain and to lessen pressure on deteriorated dry season pastures. Based on the composition and size of the herd owned and the available fodder, pastoralists pressed on towards the *Khwebe* hills in the current commercial ranches area. Those with the largest herds made the longest moves, while those with fewer cattle moved a shorter distance. In good years, the return might be delayed until late winter (around July or August) because the wells and pans (*macha*) retained water for a longer time. In drought years, for example during the 1965/1966 and 1982 drought periods, this return would commence immediately after arable farmers had harvested (around April/May). Once herds were back, grazing pressure around settlements and water resources increased significantly, so there was a strong incentive to delay the return. The movement was also vital for small-scale arable farmers who utilised the rivers and floodplains for flood recession arable farming. These fields were not fenced, and hence the problem of cattle raiding crops was avoided. Once the harvest was complete and the crops collected, some weaker stock such as lactating cows and calves were returned to feed on crop residues. Pasturing on agricultural fields or village grazing areas was quite brief, lasting about a month, and livestock had to move out at the beginning of winter.
Opportunistic movements in response to the highly variable occurrence, in space and time, of green grass following rainfall and fire events were critical. Riverine and floodplain pastures were strictly conserved for use during the dry season or during periods of drought. Pastoralists indicated that permanent grazing in floodplains exposes livestock to parasites such as liver fluke and roundworms, which develop rapidly under moist conditions. Owing to this risk, grazing on the Okavango Delta swamps and floodplains was limited to the dry seasons when water levels had
subsided. In extreme situations of drought or abnormal fluctuations in environmental conditions, pastoralists became more mobile and sometimes moved outside the core of their territory, negotiating access with other pastoralists where necessary.
3.4. Spatial comparisons and the impacts of grazing policies
Spatial comparison of the current situation, as mapped by the herders, shows that the functional distinction between village grazing areas, dry season grazing areas and wet season grazing areas has been eroded by rangeland policy interventions. Herds are confined around settlements, as commercial ranches have replaced wet season grazing areas to the south of Lake Ngami. Some cattle posts are located only about 2 km from settlements. To the north, these rangelands have been bisected by veterinary fences, significantly reducing the area available for communal pastoralism (Figure 4).
Figure 4: Spatial configuration after the transformations showing all year grazing areas and the directions of livestock movements.
This significant reduction in the amount of communal grazing land available to pastoralists was not accompanied by a reduction in cattle numbers. Pastoralists argued that cattle numbers have continued to increase and are currently very high. This argument is supported by the livestock trend statistics from the Department of Veterinary Services, depicted in Figure 5, which indicate a continuing increase in cattle numbers in the communal areas.
**Figure 5: Cattle numbers, 2000 – 2014**
*Data source:* Department of Veterinary Services (DVS)
**Table 1: A GIS-estimate of pastoralists’ grazing areas before the privatisation policies (square kilometres)**
| Study villages | Village grazing areas | Dry season grazing areas | Wet season grazing areas | Total |
|---------------------------------------|-----------------------|--------------------------|--------------------------|-------|
| Semboyo/Makakung (Setata) | 705 | 2,009 | 2,598 | 5,312 |
| Kareng (Western Sandveld) | 695 | 850 | 4,586 | 6,133 |
| Bothatogo/Bodibeng/Toteng/Sehithwa (lake villages) | 1,863 | 2,942 | 6,131 | 10,935 |
| **Total** | **3,263** | **5,801** | **13,315** | **22,380** |
As pastoralists argued, current rangelands are congested, heavily over-utilised and conflicts are prominent. Table 1 provides a GIS-estimated measure of the areas used by pastoralists before land privatisation and subdivision. The current grazing area between the fences (Figure 4) measures 7,371 km² as an all-season grazing area shared by all villages in the study area, compared to 22,380 km² of wet, dry and drought season grazing before the fences. Approximately two-thirds of the communal lands have been lost to privatisation and subdivisions since 1975. This scenario underscores the impacts of rangelands policies on livestock spatial mobility, traditional grazing patterns and access to rangeland resources.
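The loss fraction implied by these two area totals can be checked directly. A one-line sketch; note that the raw figures give a loss of roughly 67%, so the proportion quoted in the text appears to be rounded.

```python
# Area totals from Table 1 and the text (square kilometres)
before_km2 = 22_380  # wet, dry and drought season grazing before the fences
after_km2 = 7_371    # all-season grazing area now shared between the fences

loss_fraction = (before_km2 - after_km2) / before_km2  # roughly two-thirds
print(f"{loss_fraction:.1%} of the former grazing area lost")
```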
Interviews with key informants focusing on their spatial knowledge revealed that after the introduction of fences and ranches, spatial mobility declined significantly and year-round use of formerly dry-season riverine pastures and village grazing areas increased. This has prompted uncontrolled livestock movements, livestock crop damage, stray livestock and increased human-wildlife conflicts, especially with elephants, as migratory corridors have been bisected by fences. '…we think that when fences were constructed, no due consideration was given to wildlife traditional migratory corridors, fences have diverted wildlife, especially elephants into our cattle posts and arable lands…elephants are everywhere and can no longer follow their traditional routes from the delta to the sand veld…the destructions are so much…' lamented one elderly pastoralist at Bothatogo village during the key informant interviews. Opportunities for wildlife to disperse into the sand veld pastures during the wet season are limited, as most dispersal routes have been foreclosed by fences.
Pastoralists also assert that control of livestock diseases is difficult because of congestion in communal areas. Livestock numbers in communal areas continued to escalate and grazing pressures around watering points were reported to be high.
Opportunistic ranchers with access to privatised land continue to keep large numbers of cattle in communal areas. This allows them to sell when market opportunities arise on either side of the fence. During periods of drought or prolonged dry seasons, they retreat to their own exclusive private ranches. Ngamiland pastoralists today struggle to continue a tradition of transhumance, or temporary migration, that has sustained them for many years, as the land has been dissected by commercial ranches, veterinary fences and wildlife conservation areas. Livestock movement patterns tend to be chaotic and severely limited. Pastoralists follow individualistic strategies to access grazing and water resources with little regard for the old traditions of consensus. Most reported that it is no longer possible to migrate away from Lake Ngami or the surrounding riverine vegetation during the wet season because there is nowhere left to which they can migrate.
3.5. Access to water resources
Competition for water is a major source of land and natural resource use pressure among pastoralists in the study area. Water rights are crucial to the sustainable management of land. Water demand for livestock is ever increasing due to the enclosure of some natural water pans by ranches. Figure 4 indicates some natural water sources that have been either enclosed by commercial ranches or separated by veterinary fences. Pastoralists argued that the decision by the government to allow enclosure of natural water pans by private farms had weakened traditional rangeland management systems, deprived pastoralists of valuable assets and fostered conflict over the remaining water sources, as well as contributing to land degradation caused by livestock congestion around Lake Ngami. During the dry season, the seasonal
rivers dry up. The occurrence of droughts and the irregular timing of the rainy season also worsen the situation. Competition over access to water between and within land use systems, especially between livestock and wildlife, is widespread as most of the natural ponds are now enclosed by private farms. Only 30% of the 26 pastoralists interviewed as key informants indicated that they owned boreholes for their livestock; the rest depended on natural water sources or were tenants of those with boreholes. Privatisation and subdivision have created uncertainty with regard to access to and control over water resources. Pastoralists argued that the creation of private water points in communal areas was used as a strategy by elites to gain access to privatised communal lands, as the NPAD policy later gave preference to those with water points when allocating ranches. Moreover, pastoralists argued that most of the underground water is saline, and some borehole owners, including ranchers, continue to use natural water sources such as ponds, lagoons, rivers and lakes to water their livestock.
3.6. Current land use
The current size of the communal grazing area is much smaller compared to the pre-interventions area. An assessment of land use categories within this area (Figure 4) shows a spatial configuration of cattle posts concentrated around permanent water sources, settlements, and arable fields. The effects of privatisation and subdivision are reflected mostly by the changing patterns of pastoral land use, including the year-round use of critical grazing reserves which were previously used only for a season in a year. Livestock is concentrated near major settlements, roads, rivers and the lake (Figure 6). Herders are now confined to smaller areas with limited access to the
broader range of ecological zones that were traditionally used for managing environmental variability.
Herding practices such as niche specialisation of herds were dismantled as flexible movements were curtailed. '...Hainaveld formed our grazing reserves and wet seasons retreat...these ranches and fences have displaced us from our traditional grazing land and significantly destructed our traditional land use system...our system of pastoral and land management was neither random or irrational, but deliberate and adapted to the conditions of our environment...now the tiny piece of land we have is congested and overgrazed...', argued a focus group participant at Sehithwa village. The distinctions between land use systems, cattle posts, arable lands and settlements are unclear. The area between the lake and the ranches was described by pastoralists as a zone of competition and stocking pressure due to the ever-increasing number of cattle in the area. Pastoralists displaced by the ranches have been encroaching into this zone. The area is decreasing as ranches expand into it, pushing the communal pastoralists further towards the villages. Furthermore, pastoralists reported that they have lost access to their ancestral grazing lands. For indigenous pastoralists, the land and its surrounding environment provided strong spiritual and cultural values: a source of life and a symbol of respect. Privatisation has resulted in the dismantling of customary boundaries and the subdivision of ancestral lands.
Based on land use concentrations and using ArcGIS proximity and geographic distribution analysis, we used land use data (cattle posts and arable lands) obtained from Landsat 8 imagery and GPS-based transect walks to estimate land use pressure zones in the study area. The standard distance, at 25,182.25 m from the centre of concentration (Lake Ngami), represents the highest degree of compactness of land use (severe pressure zone). Beyond this distance, the dispersion increases and hence
land use pressure decreases (moderate land use pressure zone). The types of land use pressures and their associated impacts (Table 2) were identified by pastoralists during focus group discussions. Figure 6 identifies the land use pressure zones. The concentration of land use activities is around Lake Ngami and the ranches, hence these areas suffer the greatest land use and grazing pressure.
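The zoning rule described above reduces to a simple distance threshold around the mean centre. A minimal sketch, with the mean centre placed at the origin of a hypothetical projected coordinate system:

```python
import math

MEAN_CENTRE = (0.0, 0.0)          # centre of concentration (projected coords, metres)
STANDARD_DISTANCE_M = 25_182.25   # standard distance reported in the text

def pressure_zone(feature, centre=MEAN_CENTRE, radius=STANDARD_DISTANCE_M):
    """Classify a land use feature by its distance from the mean centre."""
    return "severe" if math.dist(feature, centre) <= radius else "moderate"

inner = pressure_zone((10_000.0, 5_000.0))   # well inside the standard distance
outer = pressure_zone((40_000.0, 0.0))       # beyond it
```

In the study this thresholding is done by the ArcGIS Standard Distance tool, which also draws the circular zone boundary shown in Figure 6.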
Figure 6: Land use pressure areas (cattle posts concentrations) and other land uses; ranches, arable fields superimposed to identify areas of competing land use using spatial statistics (mean centre and standard distance)
Table 2: Pressures and associated impacts due to fences and growth in livestock numbers in the communal areas
| Land use pressure | Associated Impacts |
|--------------------------------------------------------|------------------------------------------------------------------------------------|
| Fences and expansion of ranches—restricted access | Loss of grazing and water resources, blockage of livestock and wildlife migratory corridors, curtailment of seasonal migrations. |
| Concentration of cattle closer to permanent water sources, e.g. Lake Ngami | Overstocking of floodplains and riparian rangelands, piosphere based rangeland degradation, destruction of ecosystems, difficult to control disease incidences, e.g. Foot and Mouth (FMD). |
| Land use overlaps; arable land, cattle posts and wildlife | Land use competition and conflicts; destruction of crops by livestock and wildlife, predation, human-elephant conflicts |
| Dual grazing – opportunistic stocking strategies | Resource use conflicts, overstocking in communal areas, land use conflicts and strained local social relations between ranchers and communal area pastoralists |
| Borehole based livestock expansion in an area with poor groundwater | Borehole drilling along dry river valleys where shallow ground water exists, rapid development of sacrifice and bush encroachment zones |
As common pastures are enclosed for private use, natural water ponds and trekking routes are blocked and wildlife migratory corridors are either blocked or diverted. Grazing pressure and conflicts both intensify, and communal pastoralists bear the effects of ecosystem deterioration. The research area contains four land use systems. On a transect from south to north, land use categories and management regimes range from commercial farming on privately owned ranches (both livestock and game), through subsistence agro-pastoralism squeezed into the area between the fences where land use and grazing pressures are intense (settlements, arable lands and cattle posts) and the contested wildlife management area to the south-west (NG5), to a network of veterinary fences followed by a purely commercial wildlife management and tourism area in the north-east, where pastoralist production systems are restricted.
4. DISCUSSION
4.1. Indigenous knowledge, rangeland privatisation and spatial mobility
Pastoralists have developed knowledge and skills to cope with environmental variability (Solomon et al., 2007), including comprehensive systems of seasonal migrations and livestock mobility under controlled grazing patterns. The most pertinent challenge faced by pastoralists today is access to sufficient pasture resources and potable water to sustain their livestock through both good and drought years. Pastoralists were particularly wary of problems associated with livestock spatial mobility. Privatisation was supposed to reduce problems of congestion in communal areas, but instead it has exacerbated congestion and significantly curtailed livestock mobility. As elsewhere in sub-Saharan Africa, pastoralists continue to suffer extreme marginalisation due to reduced access to pastureland (Lesorogol, 2008, Bogale and Korf, 2007). Researchers have shown how policy interventions in rangelands have ignored traditional pastoral systems, leading to a widespread loss of rangeland productivity and an increase in pastoral poverty (Taylor, 2012, Bassett, 2009, Rohde et al., 2006). In Ngamiland, pastoralists blamed the government policy interventions for the loss of traditional grazing territories, the erosion of traditional management institutions and the overall rangeland degradation in the communal areas.
The findings of this study show that the inhabitants of Ngami area used to follow a traditional transhumant pattern of pastoralism with seasonal movement to and away from Lake Ngami and surrounding Okavango delta floodplains. The enclosure of formerly wet season pastures and water resources by private ranches in the sand veld
and the curtailment of livestock spatial mobility by veterinary cordon fences undermined the livelihoods of local pastoralists. Our findings suggest that the loss of critical wet season grazing reserves was due to failure to recognise the spatial heterogeneity of Ngamiland pastoral landscape, including diversity within traditional pastoralists’ management strategies. This scenario is compounded by the dual grazing rights problem in which ranchers continue to use loopholes in policies to graze their livestock in the communal areas (Mulale et al., 2014, Magole, 2009, White, 1992). This was reported to be widespread in Ngamiland. The major impact of subdivisions and privatisation is the constriction of livestock spatial mobility, the destruction of traditional grazing patterns and the fragmentation of ecosystems as wildlife habitats and migratory corridors are bisected. Conflicts about access to resources and human-wildlife conflicts have increased as pastoralists argue that wildlife migratory routes have been diverted by the fences.
4.2. Participatory mapping, PGIS and government planning
The study set out to investigate pastoral land use and livestock spatial mobility within the context of herders' spatial knowledge systems, using participatory mapping and PGIS. Conventional land administration systems, which focus mostly on fixed tenure, are often not equipped to capture the dynamism inherent in traditional pastoralists' tenures (Bennett et al., 2013), especially in sub-Saharan African rangelands. This process generated unique spatial knowledge in relation to traditional grazing systems, pasture boundaries and the impacts of rangeland policies. It also facilitated a spatially explicit discussion (Talen, 2000), enabling pastoralists to articulate their viewpoints in geographic terms. In addition to spatial data,
participatory mapping processes provide non-spatial information such as histories, social relations and patterns (Levine and Feinholz, 2015). By collecting evidence from the field through participatory mapping and GPS based transect walks, overlapping claims to pasture boundaries can be identified and mapped as spatial units, for example, conflict-prone areas or land use pressure zones. Such information can inform planning and/or strategies for resolving land use conflicts in the communal areas.
As it stands, most governments’ land use classification systems are inherently rigid (Smith, 2003) and fail to incorporate diversity in indigenous pastoral landscapes. Indigenous pastoral lands have mostly been presented as empty spaces (Smith et al., 2012) by some rangeland policies. For example, Botswana’s TGLP assumed that there was an abundance of empty lands which could be turned into ranches or even reserved for future use (Magole, 2009, Childers, 1981). However, many such ‘unused’ lands were actually rangelands that were critically important to pastoralists for managing routine dry spells or drought cycles, as demonstrated in this paper, or used by nomadic hunter-gatherers. Participatory mapping allows for the full conceptualisation of the dryland pastoral landscape, pastoral land use spatial organisation, and the diverse connection between indigenous pastoralists’ practices, ecosystems and local boundaries. Such an approach allows cognitive geographic knowledge to be formalised, geo-referenced and placed within the frameworks of geospatial technologies to reveal and improve geographic understanding (Smith et al., 2012) of pastoral landscapes. Smith (2003) notes that, when map making remains only with government officials or bureaucratic elites, they inherently neglect features of the landscape that are important and most relevant to indigenous communities. We agree and argue by extension that analysing pastoral land use through local actors’
spatial knowledge allows resource users to depict not only their grazing space but also the interrelationship between the temporal arrangement of resources and its spatial functionality.
Local pastoral communities reported that this was the first time they had been involved in a project in which they drew their own maps and delineated boundaries. One challenge of using participatory mapping and PGIS, however, is that it raises expectations that participants' land use problems will ultimately be addressed. Pasture boundaries, the alienation of productive grazing lands and encroachment by ranches remain a source of disputes between pastoralists, government officials and ranchers. Pastoralists strongly felt that the maps produced would help them present their case to the relevant authorities and make their claims to land heard. Though the study did not aim to resolve pastoralists' issues and problems, nor advocate for the dismantling of existing private rights, it did offer an alternative way of studying pastoralists' issues through participatory mapping and PGIS, producing useful cartographic information and empirical evidence regarding the problems of privatisation and subdivision of communal grazing lands. Such an approach is rare in the pastoral research literature. Future policy directions, and indeed the ongoing privatisation processes, might need to be considered within the context of local spatial narratives, maps and local environmental knowledge to avoid further consequences for the rural poor.
When combined with participatory mapping, the analytical power of GIS offers opportunities to study territorial issues, resolve conflicts, and study and monitor the impacts of land transformations on the pastoral landscape. As a consequence, this research offers possibilities for the use of participatory mapping and GIS-driven methodologies in pastoral management systems and research studies. The empirical evidence and experience, drawn from this research, shows that pastoralists can work
with researchers to transform their cognitive spatial knowledge into forms that can inform policy. The basic spatial relationship between indigenous people and the natural environment in which they make their living is often poorly understood by government planners and/or policy makers (Herlihy, 2003). Yet instead of playing an active role in research agendas, pastoralists are often the subject of research (Vetter, 2005). Their needs, priorities, and environmental and spatial knowledge are often omitted from policies that directly affect them. Participatory mapping and PGIS then become an alternative way of producing environmental and spatial knowledge by decentralising the process (Herlihy and Knapp, 2003) and putting it in the hands of indigenous resource users. This research has documented the spatial extent of livestock mobility and traditional grazing reserve zones, providing a measure of traditional pastoral land use patterns before and after rangeland policies. By creating indigenous spatial maps of pastoralism and making spatial comparisons of the impacts of rangeland policies over time, the study reveals, in a novel way, the spatial impacts of the contested land transformations that have taken place in Ngamiland since 1975.
5. CONCLUSION
This study demonstrates how participatory mapping and GIS can be used to foster better articulation and understanding of pastoralists' tenures and grazing patterns. Pastoralists from all the focus groups lamented the diminishing of communal grazing lands and the constriction of livestock spatial mobility as ranches have taken large tracts of land out of communal ownership. Pastoralists argued that animal health and rangeland policies do not recognise their indigenous resource rights, traditional grazing territories and management systems. Efforts to negotiate with authorities have
been inconclusive, mainly due to the lack of documented spatial information on their grazing territories. Pastoralists saw the value of participatory mapping as a way of gaining empirical evidence and detailed information which they can use to engage the relevant government entities, defend their grazing space against expropriation by the state or opportunistic elites, and manage their resources in a sustainable manner. This study reveals that herders are endowed with a wealth of spatial knowledge about their grazing territories. This knowledge is rarely documented or incorporated in conventional government planning processes. The PGIS approach produces valuable pastoral land use and spatial information vital for the sustainable management of land in dryland environments, where mobility and resource access remain at the core of pastoral sustainability. As communal lands continue to shrink and prospects for sustainable pastoralism become more uncertain, future research will need to focus on pastoralists' adaptations within this constrained environment and on how pastoralist production systems can be made resilient in the face of continued environmental and policy change.
6. ACKNOWLEDGEMENTS
The research was carried out under research permit number EWT 8/36/4 XXX (73) of the Government of Botswana. We are thankful for the research funding provided by the Botswana International University of Science and Technology, Government of Botswana and University of Leeds Sustainable Agriculture Bursaries.
Current Mass Schedule
Saturday: 4:00pm
Sunday: 8:30, 10:30am (Live Stream as well), 6:00pm
Daily Mass: Monday - Friday @ 11:00am (Star of the Sea)
Confessions: Sat. 3:00-3:30pm
Tues. & Thurs. after 11:00am Mass
Eucharistic Adoration:
1st Friday of Each Month
9:00am - 11:00am
1st Saturday of Each Month: Mass @ 9:00am
Church open for Private Prayer:
Mon.-Fri. 10:00am - 5:00pm
Parish Office Hours: Monday through Friday
9:00am - 12:00pm and 1:00pm - 4:00pm
Our Mission...
Swept by the waves of Christ’s love and guided by the Holy Spirit, we welcome all to the Catholic community of Our Lady Star of the Sea in the heart of Cape May. We celebrate God’s Presence in Word, Sacrament, and Service. We seek to be a faith-filled community where you experience the loving embrace of God.
525 Washington Street, Cape May, NJ 08204 • 520 Lafayette Street (Parish Office/Mailing Address)
Phone: 609-884-5312 • Fax: 609-542-9702 • www.ladystarofthesea.org
**LITURGY**
**MASS INTENTIONS**
**SATURDAY, March 11**
4:00p Joseph Catino r/b Frank & Betty Stepniak
Catherine & Walter Barc r/b Their Daughters
**SUNDAY, March 12**
8:30a Rose Haywood 1st Anniv. r/b Terry Quinn
10:30a John P. McGarvey r/b The McGarvey Family
William Manning r/b Maureen Flood
Theodore Melnychuk, Esq. r/b Michael & Denise Doyle
Hank Moller r/b The McHugh Family
Father Thomas McGee r/b Art & Jill Ciancaglini
6:00p Ints. of Katelyn Dessner & Babies
**MONDAY, March 13**
11:00a Emma Tenaglia r/b Joe & Maria Tenaglia
**TUESDAY, March 14**
11:00a Librada Enriquer Penaranda r/b Romy & Regie Ubana & Family
**WEDNESDAY, March 15**
11:00a Jim Cicchitti r/b Tracie Cicchitti
**THURSDAY, March 16**
11:00a Father Thomas McGee r/b Andy & Nancy Terifay
Ints. of Jamie McPoyle r/b Andy & Nancy Terifay
**FRIDAY, March 17**
11:00a Nick Hober r/b The O’Hara Family
Laura Fogarty r/b James McCarrie
John & Mildred Gaynor r/b Carol Gaynor
---
**STAFF DIRECTORY**
Oblates of St. Francis de Sales • www.oblates.org
Rev. David J. Devlin, OSFS - Pastor
firstname.lastname@example.org • Ext. 105
Rev. James T. Dever, OSFS - Parochial Vicar
email@example.com • Ext. 103
Rev. John J. Dolan, OSFS - Parochial Vicar
firstname.lastname@example.org • Ext. 104
Parish Staff
Sr. Maria Metzger, SSJ - Pastoral Associate, DRE
email@example.com • Ext. 109
Mr. Terence McGarvey - Parish Administrator
firstname.lastname@example.org • Ext. 120
Mrs. Marie Turco - Parish Secretary • Ext. 101
Art & Jill Ciancaglini ~ Music Director & Leader of Song
Denise Mount ~ Business Assistant
email@example.com • Ext. 106
Car Raffle Chair ~ Martine Simonis • Ext. 110
Regina Alulis ~ Bulletin Contact • firstname.lastname@example.org
---
**This Week in our Parish**
**Monday, March 13, 9:15AM**
Legion of Mary ~ De Sales Room, Parish Hall
**Monday, March 13, 7:00PM**
“That Man is You” ~ McGivney Room, Parish Hall
**Wednesday, March 15, 7:00PM**
Holy Hour/Benediction - Father John, Church
**Thursday, March 16, 4:00 - 5:30PM**
Survivors & Thrivers Cancer Support Group
De Sales Room, Parish Hall
**Friday, March 17, 7:00PM**
Stations of the Cross, Church
---
“Lent is the autumn of the spiritual life during which we gather fruit to keep us going for the rest of the year.” ~ St. Francis de Sales
---
**Diocesan Eucharistic Congress**
When: Saturday, March 25
(Solemnity of the Annunciation of the Lord)
Time: 8:30AM - 3:00PM
Where: Freedom Mortgage Waterfront Pavilion, Camden
Cost: Free
“A Diocesan Eucharistic Congress is an opportunity for the Bishop to gather the people of his Diocese in one place for a celebration of our Catholic faith in the Eucharist,” said Father Robert Hughes, Vicar General for the Diocese of Camden. “Through prayer, teaching, silence, Adoration and Mass, the faithful can recognize the importance of the Eucharist to our Catholic life.”
For more information on the Diocesan Eucharistic Congress visit: eucharisticrevivalsouthjersey.org or avivamientoeucaristicosj.org.
Lent is a time for us to encounter Jesus in a very personal way so that we deepen our baptismal consecration and commitment. John tells us that he wrote about the encounter of Jesus with the Samaritan woman so that we “may believe that Jesus is the Messiah, the Son of God, and that through this belief we may have life in his name.”
Notice that, during this encounter, Jesus is drawing the woman beyond earthly realities and concerns to the deeper realities of the eternal. Jesus offers her “living water.” He means water that gives life, but her attention is on earthly water. She asks him, rather sarcastically, if he thinks he’s better than Jacob who gave them this well. Jesus refuses to be side-tracked from his goal - to give her water that will permanently end her thirst. All she can see is the convenience of not having to come to the well every day.
Jesus moves her to a new level by getting her to face the truth of her present situation. But he doesn’t tell her to “come back when you’ve straightened out your life.” The grace he offers is meant to help her to change – here and now.
Rather than make a commitment now, she says that she’s waiting for the Messiah to come. Now, Jesus can offer her the opportunity for a personal commitment: “I am he.”
The purpose of the story is to remind us that even committed disciples need to be brought to deeper understanding and conversion.
How great is our thirst during this Lenten season? What is the “food” we are seeking? What is it that Jesus wants us to understand through our Lenten encounters with him? St. Paul reminds us that God has proven his love for us. While we were still sinners, Christ died for us.
Our Lenten journey will be more fruitful if we are sincerely open when we encounter Jesus and are willing to be led by grace into a deeper conversion of mind and heart to the ways of the Lord. Then our hope will be based on the love of God poured out into our hearts.
Rev. Michael S. Murray, OSFS,
Executive Director of the De Sales Spirituality Center
The Sacrament of Penance
What?
The way we embrace Christ’s call to **conversion**.
The step we take to express true **repentance** for our offenses.
The **confession** of our sins.
The reception of **God’s pardon and peace**.
The **restoration** of our relationship with God and with the Church.
After Jesus declared that Peter would be the rock upon which he would build his Church, he said to Peter, “I will give you the keys to the kingdom of heaven. Whatever you bind on earth shall be bound in heaven; and whatever you loose on earth shall be loosed in heaven.” (Mt 16:13-20)
Why?
We were born with the effects of **original sin**: disobedience toward God and a lack of trust in his goodness (Gen 3:1-11).
God the Father sent his Son, **Jesus Christ**, to save us from our sins (Jn 3:16).
Jesus accomplished his saving mission by establishing the **Kingdom of God** and by suffering, dying on the cross, rising from the dead, and ascending into heaven (Mt 5:17).
Jesus established the **Church** to continue his saving mission by the power of the Holy Spirit through the celebration of the sacraments (Mk 1:15, Jn 19:34, Acts 2: 1-4, 2 Cor 5:18).
Through **faith** in the Gospel and **Baptism**, we are first washed clean of all sin (Acts 2:38).
Through the **Sacrament of Penance**, we continue to experience the forgiveness of our sins (Mt 16:13-20, Jn 20:19-23).
After his resurrection, Jesus appeared to his disciples, extended his peace, breathed the holy Spirit on them, and sent them out saying, “Whose sins you forgive are forgiven them, and whose sins you retain are retained.” (Jn 20:19-23)
The Church celebrates the Sacrament of Penance using the *Order of Penance*. The revised rites and formulas of the *Order of Penance* of the Second Vatican Council were promulgated in 1973. A new translation was provided in 2023.
Celebrating the Sacrament of Penance or “going to confession” is a liturgical action that consists of:
1. The Reception of the Penitent
2. The Reading of the Word of God (Optional)
3. The Confession of Sins and The Acceptance of Satisfaction
4. The Prayer of the Penitent and The Absolution
5. The Proclamation of Praise of God and The Dismissal of the Penitent
**The Acts of the Penitent**
1. Contrition (sorrow for sin)
2. Confession (inward examination, outward accusation)
3. Satisfaction (an act of penance)
“Penance requires . . . the sinner to endure all things willingly; be contrite of heart, confess with the lips, and practice complete humility and fruitful satisfaction.”
*(Catechism of the Catholic Church, 1450)*
---
**Going to Confession in Twelve Easy Steps**
1. Examine your conscience, call to mind your sins, and then enter the confessional.
2. The priest will welcome and receive you.
3. Make the sign of the cross and say: “In the name of the Father and of the Son and of the Holy Spirit. Amen.”
4. The priest will invite you to trust in God. He may also read a text of Sacred Scripture.
5. Say these or similar words:
“Bless me, Father, for I have sinned. It has been ______ since my last confession. These are my sins…”
6. Tell the priest your sins. Then say these or similar words: “For these and all of my sins, I am sorry.”
7. Accept your work of penance (prayers, act of kindness, etc.) to help make satisfaction and amend your life.
8. Express your sorrow for your sins (Act of Contrition) in these or similar words:
“O my God, I am sorry and repent with all my heart for all the wrong I have done and for the good I have failed to do,
because by sinning I have offended you,
who are all good and worthy to be loved above all things.
I firmly resolve, with the help of your grace, to do penance, to sin no more,
and to avoid the occasions of sin.
Through the merits of the Passion of our Savior Jesus Christ, Lord, have mercy.”
9. Receive God’s forgiveness as the priest prays the Prayer of Absolution.
10. At the end of the Prayer of Absolution, reply: “Amen.”
11. Give praise to God. When the priest says: “Give thanks to the Lord, for he is good,” conclude: “For his mercy endures forever.”
12. Go in peace and remember to say or do your penance.
FROM MANY...One Body
Our House of Charity Campaign
Your pledge will ensure the vitality of essential Diocesan ministries and programs that sustain the redemptive presence of Jesus Christ through the Diocese.
Our 2022 - 2023 Goal: $121,000.00
Collected to Date: $94,751.00
Percentage of Goal: 78%
Weekly Contributions ~ February 25 - 26, 2023
| Ordinary Income | Ordinary Expenses | 2nd Collection |
|-----------------|-------------------|----------------|
| $10,296.00 | $57,615.34 | $1,927.00 |
The PARISH GIVING Program enables you to have your Sunday Offering automatically debited from your credit card. Make your life easier and check it out on the parish website.
Parish Giving
simple. secure. convenient.
Prayer and Praise
Welcome to our Prayer Group
When: 4th Thursday of every month @ 3:30PM
Where: De Sales Room in the Parish Hall
Contact for additional information:
Donna - email@example.com
call 609-898-4431
Lectors
Saturday - March 18
4:00p G. Kucher
Sunday - March 19
8:30a J. Saquelia
10:30a S. Curtis
6:00p J. Bogle
Extraordinary Ministers
Saturday - March 18
4:00p 1. P. Motz 2. N. Mulvanney 3. C. Kopp 4. M. Schuhl
Sunday - March 19
8:30a 1. C. Crouthamel 2. V. DiRenzo-Adams
3. E. Krause 4. S. Tarratt
10:30a 1. L. Curtis 2. M. Jenkins 3. S. Jenkins
4. E. Hughes
6:00p 1. B. Hodgdon 2. F. Bianco 3. L. Bogle 4. P. Bogle
Come join the men of our Parish on Monday, March 13 for the weekly meeting of “That Man is You” (TMIY). The topic is “The Crowning with Thorns.” The session begins at 7:00PM in the Parish Hall, Father McGivney Room. Mark Hartfiel discusses how the Crowning with Thorns is all about Christ’s Kingship: a line is drawn, and the Kingdom of God and the kingdom of the world are at odds.
Please contact Earle Hughes at 609-351-2780 for further information.
We are the Survivors & Thrivers, a community of women who have experienced or are experiencing any kind of cancer. We meet the 1st and 3rd Thursday of each month from 4:00 - 5:30PM in the De Sales Room of the Parish Hall.
Next meeting: March 16, 2023
Contact: Christina firstname.lastname@example.org, or 610-908-2405
THANKS FOR BEING HERE
If you would like to become an ACTIVE member of Our Lady Star of the Sea Parish (either Full Time or Seasonal) we invite you to complete the Parish Registration form below and place the completed form in the Offertory basket or send to the Parish Office. Someone from the Parish Office will then contact you.
Our Lady Star of the Sea Parish Registration Form
Last Name
Full Name and relationship of all resident family members you wish to register at OLSS:
E-Mail Address:
Mailing Address in Cape May Area*:
Permanent Address (if different):
Preferred Telephone #: _______________________
Are you a:
- Full-time resident
- Seasonal resident
*PLEASE do not use a rental property address.
Place completed form in the Offertory Basket or send to Parish Office.
SCHOOL NEWS
Wildwood Catholic Academy is a premier college preparatory academy based on gospel values and the promotion of academic excellence.
Space is limited & filling up quickly!
SCHEDULE A VISIT OR APPLY ONLINE TODAY!
Call 609-522-6243 or email email@example.com
1500 Central Avenue, North Wildwood, NJ
Call or contact us at 609-522-7257
wildwoodcatholicacademy.org
SACRAMENTS
BAPTISMS
...are celebrated one Sunday each month, except during the Lenten Season. In order to schedule a date for your child’s Baptism: parents should be parish members, they must attend a Baptismal Preparation Session, and all necessary documentation for godparents must be submitted. To begin the process for your child’s Baptism please contact the parish office.
CONFESSIONS
Saturday 3:00 - 3:30 PM
Tuesday and Thursday after the 11:00 AM Mass
VISITATION OF THE SICK
Please call the Parish Office if a parishioner is seriously ill or infirm and would like to receive the Sacrament of the Sick and/or Holy Eucharist.
MARRIAGES
We welcome parishioners and the children of parishioners to be married at Our Lady Star of the Sea.
Please visit the Parish website, www.ladystarofthesea.org, click on the Marriage tab for more information
One does not register in a parish community for the sole purpose of getting married, but to be a full, active participating member of the faith community.
Cremation and Proper Handling of Cremated Remains
Your Choices Don’t Stop with Cremation
While you may have chosen to be cremated, remember to choose your final resting place. Although the Church earnestly recommends that the pious custom of burying the bodies of the dead be observed, it does not, however, forbid cremation unless it has been chosen for reasons contrary to Christian teaching. Cremated remains are the body of the deceased in a changed form. We should honor them as we honor the body. They must be reverently buried or entombed in a place reserved for the burial of the dead.
Visit SouthJerseyCatholicCemeteries.org to learn more or call us at 855-MyPrePlan (855-697-7375).
Parish Ministries
BEREAVEMENT MINISTRY We offer bereavement support groups twice a year in the Spring and Fall. Dates and times are announced and printed in the parish bulletin. Call Suzanne Jenkins any time if you have questions or if you would like to participate in one of the groups 609-675-8630.
GOOD SAMARITAN If you are interested in becoming a volunteer please contact Janet Kerney @ 267-342-2756. Parish volunteers are available to help others in a variety of ways: rides to Sunday Mass, doctor’s appointments, shopping/banking, errands. Also visits: hospitals, nursing homes, your home, brief respite care. Let us know your need and we will try to provide for you. All requests must go through the Captain of the week. Please do not call our volunteers directly. To use our services you must be fully vaccinated. We are unable to transport wheelchairs. This week’s captain is Sr. Nancy Butler. Her number is 609-408-2549.
JUST FRIENDS promotes a spirit of fellowship among adults 50 years and older. During our gatherings, we enjoy lunch and varied programs. (Bring your own brown bag lunch; desserts and beverages are provided.) The group welcomes everyone age 50 and over, regardless of faith or parish affiliation, and meets in Our Lady Star of the Sea Parish Hall on the last Wednesday of the month from 12:00 - 2:00PM, September through May. JUST FRIENDS looks forward to welcoming you at our next meeting! Join us! No membership necessary. Please note the time change. For more information call Carol at 609-224-9662 or Martine Simonis at 609-884-5312 x 102.
The KNIGHTS OF COLUMBUS, St. Mary’s Council #6202 meets on the second Thursday of each month in the McGivney Room of the Parish Hall. Membership is open to Roman Catholic men, 18 years of age and older, who are committed to making their community a better place, while supporting Our Lady Star of the Sea Church. For more information or to request a membership application, please contact Rick Greenfield, Outreach Coordinator, at firstname.lastname@example.org or 609-408-3323.
CATHOLIC RADIO STATION WSMJ 91.9FM has been added to the family of Catholic radio stations broadcast by Domestic Church Media (DCM). For a complete schedule of programming visit www.domesticchurchmedia.com. WSMJ 91.9 remains a true Catholic affiliate of EWTN.
SURVIVORS & THRIVERS is a community whose mission it is to support women who have experienced or are experiencing any kind of cancer. We meet on the first and third Thursdays of every month from 4:00 - 5:30 PM in the Parish Hall, and we welcome all women regardless of faith or parish affiliation. For more information or to join the group, call Christina at 610-908-2405 or email her at email@example.com.
THE LEGION OF MARY meets Mondays at 9:15AM in the DeSales Room in the Parish Hall. All are welcome to join in the service of Mary. For information please call Liz McPherson, President 609-841-6629.
Hospitality Committee We live out our Parish mission statement to “welcome all” …and be for those whom we meet as “the loving embrace of God.” Our group offers refreshment, along with a welcoming atmosphere to all who come to our parish activities. Contact: Donna Wicker 609-408-6646.
Do You Know Someone Who Might Be Interested in Becoming Catholic? Please encourage them to contact Sister Maria (609)884-5312 x109 as soon as possible. We would love to have the chance to tell him/her about the wonderful process for spiritual growth called the Rite of Christian Initiation of Adults. In this process, adults (with other adults) learn about all the Church teaches regarding Jesus, the Bible, the Sacraments, Spirituality, and Discipleship.
Parish Pastoral Council The primary focus of the council is to oversee the Parish Pastoral Plan and to evaluate and recommend new goals each year. The following parishioners are members of the Parish Pastoral Council: Father Dave Devlin, OSFS, Joe Bogle (Chair), Donna Wicker (Vice Chair), Patrick Holden, Janet Kerney, Jeanne McFadden, Michael McGovern, Jim McHugh, Danielle Rechner, Susan Tarrant, and Richard Turco.
Parish Finance Council: Joe Bogle, John Tice, George Catanese, Kathy Parker, Ray Roberts, Kevin O’Neil
RELIGIOUS EDUCATION CLASSES are held Sundays from 9:15AM to 11:30AM, which includes attending the 10:30AM Mass. Contact Sister Maria at 609-884-5312 ext. 109 for more information.
Planning your Estate ~ Have you ever considered leaving part of your estate to the Church? There are several options available that you might find attractive from an income, tax savings, retirement, or inheritance point of view. To learn more about how these programs might work for you and your family, please contact Marie Turco (609)884-5312, ext. 101 for more info.
Pre-Planning: A Sharing of Faith ~ While we focus on living healthy, long lives, the inevitable can’t be ignored. We are all destined to cross over to the eternal life that God has planned for us. By pre-planning your Catholic burial you are sharing your faith with those left behind. Many benefits come with pre-planning: your wishes are known, peace of mind, elimination of stress, & payment plans. Visit SouthJerseyCatholicCemeteries.org or call 855-MyPrePlan (855-697-7375).
Readings for the Week
Monday: 2 Kgs 5:1-15ab; Ps 42:2, 3; 43:3, 4; Lk 4:24-30
Tuesday: Dn 3:25, 34-43; Ps 25:4-5ab, 6, 7bc, 8-9; Mt 18:21-35
Wednesday: Dt 4:1, 5-9; Ps 147:12-13, 15-16, 19-20; Mt 5:17-19
Thursday: Jer 7:23-28; Ps 95:1-2, 6-7, 8-9; Lk 11:14-23
Friday: Hos 14:2-10; Ps 81:6c-8a, 8bc-9, 10-11ab, 14, 17; Mk 12:28-34
Saturday: Hos 6:1-6; Ps 51:3-4, 18-19, 20-21ab; Lk 18:9-14
Young Adult Discernment Group
This is a group for young adult men (ages 16 - 39) who are thinking about a vocation, offering a chance to come together to pray with other young men from around our Diocese.
Where: Our Lady of Guadalupe Parish
135 N. White Horse Pike, Lindenwold
Time: 2:30 - 4:00 PM
When: March 12 ~ Topic: Whose Sins You Forgive are Forgiven Them
May 7 ~ Topic: I Am the Bread of Life
Contact: Father Adam Cichoski,
Director of Vocations
firstname.lastname@example.org or Camdenpriest.org
Prayer for Priestly Vocations
Lord Jesus, we ask you to bless the Diocese of Camden with an increase in vocations to the priesthood. We pray that young men from our parishes and families will hear your call and be both generous and courageous in their response. May more young men serve you as priests who teach the faith, preach the Gospel, celebrate the sacraments and make you present among us through their ministry. Encourage them to embrace the joy-filled and fulfilling life of a diocesan priest. May parents support priestly vocations in their families by prayer and good example. We entrust these prayers through Mary Immaculate, our patroness, hopeful that you will bless our diocese with more priests in the near future, you who live and reign forever and ever. Amen
“Lent is like a long ‘retreat’ during which we can turn back into ourselves and listen to the voice of God, in order to defeat the temptations of the Evil One.”
~ Pope Benedict XVI
Father of goodness and love; hear our prayers for the sick members of our community and for all who are in need.
Baby Harper, Barbara Conner, Jacob & Jaxson Rivera, Roseann Rosenstock, Mary Rothwein, Barbara Sharp, Rose Walsh
Pray For The Deceased
They have gone no further from us than to God, and God is very near.
Father Thomas McGee
Just Friends
When: Wednesday, April 26, 2023
Time: 12:00-2:00PM
Where: Star of the Sea Parish Hall
Brown Bag Lunch
Desserts and beverages provided.
COME!
Learn about the beauty in the darkness with Cape May Astro, KEVIN BEARE
K of C SOCIAL DINNER ~ St. Mary’s Council 6202, Our Lady Star of the Sea
All councils welcome, Knights with spouse or guest & Knights’ widows. Knights may invite male Catholic guests with spouse.
When: Tuesday, April 4
Place: Alfe’s Italian Seafood Restaurant, Wildwood
Time: 6 PM Cocktails (cash bar) followed by dinner.
Entrees: Shrimp Scampi, Linguine and Clams Red, Veal Parmigiani, and Stuffed Shells
Cost: $25 per Person for 4 Course Dinner includes Gratuity. Pay at the table, cash only.
RESERVATIONS are REQUIRED.
Contact: Rick Greenfield at 609-408-3323 or email@example.com
Attention Elementary & High School Students:
Elizabeth B. Hall College Scholarship
Our Lady Star of the Sea Church will award an annual $2000 Elizabeth B. Hall college scholarship to a high school student who falls under the following criteria:
1. Is a registered member of Our Lady Star of the Sea Parish.
2. Is active in Parish ministries.
3. Will be graduating from Wildwood Catholic, Lower Cape May Regional, Cape May Technical, Cape Christian Academy, other private schools or home schooled.
4. Completes the scholarship application showing financial need and academic achievement.
5. Has been accepted at an accredited college or university.
Applications are available in the Parish Office and must be submitted by April 15.
Our Lady Star of the Sea High School Scholarship
Our Lady Star of the Sea Church will award an annual $2000 Our Lady Star of the Sea high school scholarship to a student who falls under the following criteria:
1. Is a registered member of Our Lady Star of the Sea Parish.
2. Is active in Parish ministries.
3. Will be attending Wildwood Catholic High School.
4. Completes the scholarship application showing financial need and academic achievement.
Applications are available in the Parish Office and must be submitted by April 15.
OUTREACH OPPORTUNITIES
Christ Child Society
37th Annual Spring Fling
April 23rd
11:30AM-3PM
Shore Club
1170 Golf Club Rd
CMCH NJ 08210
Tickets $40
Lunch & Music by Bittersweet Duo
Prizes, Auction, Raffles, 50/50
RSVP: KAY
609-536-2865
Sponsored by Sturdy Bank
“I hereby dispense Catholics who are within the territory of the diocese of Camden on St Patrick’s Day, Friday, March 17, 2023 from the Lenten obligation to abstain from eating meat.” Bishop Dennis Sullivan
“In order to maintain the solemn spirit of the Holy season of Lent, I also ask the Catholics within the Diocese who choose to utilize this dispensation from meat abstinence on the Memorial of St. Patrick, to instead either make another appropriate sacrifice on the day itself or to abstain from meat on another weekday during the Third Week of Lent.”
March 12, 2023 - 3rd Sunday in Lent
Cloud Based Disaster Detection & Management System using WSN
M Chaitra¹, Dr. B Sivakumar²
¹ M. Tech Student, Department of Telecommunication Engineering, Dr. Ambedkar Institute of Technology, Bangalore-560056
² Professor, Department of Telecommunication Engineering, Dr. Ambedkar Institute of Technology, Bangalore-560056
Abstract - Natural disasters and armed conflicts typically generate huge volumes of debris and dust. Destruction of homes and public infrastructure as a result of a flood, earthquake, landslide, or conflict contributes to insecurity, loss of life, displacement of populations, and the interruption of public services. Wireless sensor networks play a crucial role in disaster debris detection and management. Debris flows are a type of mass movement that occurs in mountain torrents; they comprise a high concentration of solid material in water that flows as a wave with a steep front. Debris flows may be considered a phenomenon intermediate between landslides, earthquakes, and water floods. They are among the most hazardous natural processes in mountainous regions and may occur under widely different weather conditions. Their danger is attributable to several factors: their capacity for transporting and depositing huge amounts of solid material, which may reach large sizes (boulders of several cubic meters are commonly transported by debris flows); their steep fronts, which may reach several meters in height; and their high velocities. A disaster management system is designed using an N-Mote and N-Gateway such that debris flow can be measured and an alert sent to the public and rescue teams in advance via cloud services, so that loss of life and property is reduced.
Key Words: WSN, Debris flow, N-mote, N-gateway, alert, Cloud services.
1. INTRODUCTION
Wireless sensor systems for debris flow monitoring and warning play an essential role among the non-structural measures intended to reduce debris flow risk. Debris flow warning systems can be subdivided into two main classes, advance warning and event warning, which use different types of sensors. Advance warning systems rely on monitoring causative hydro-meteorological processes (typically precipitation) and aim to issue a warning before a possible debris flow is triggered.
Event warning systems rely on detecting debris flows once these processes are already underway. They have a considerably shorter lead time than advance warning systems, but they are also less prone to false alarms.
Advance warning for debris flows uses sensors and systems that include measuring precipitation by means of rain gauges and weather radar, and monitoring water discharge in headwater streams.
Event warning systems use different types of sensors, including ultrasonic or radar gauges, ground vibration sensors, video cameras, avalanche pendulums, photocells, trip wires, etc. Event warning systems for debris flows have a strong linkage with the debris flow monitoring carried out for research purposes: the same sensors are often used for both monitoring and warning, though warning systems have more stringent robustness requirements than monitoring systems.
Debris flows are a kind of mass movement consisting of highly concentrated dispersions of poorly sorted sediment (from clay- to boulder-sized particles) in water that can move at very high speeds and have great destructive power. Debris flows typically appear as waves (surges) with steep fronts consisting mainly of boulders. Behind the boulder front, the stage height and number of boulders gradually decrease; the surge is then charged with pebble-sized fragments and becomes more and more dilute until it finally appears as muddy water.
Conditions needed for debris flow occurrence include the availability of relevant amounts of loose debris, steep slopes, and sharp water inflows, which may come from intense rainstorms, the collapse of channel obstructions, rapid snowmelt, glacial lake outburst floods, etc. These requirements are met in many mountainous basins under different weather conditions, making debris flows a widespread phenomenon worldwide.
By using various sensors to measure different parameters of debris flow, a system is designed that alerts the public as well as rescue teams and the weather forecast department before and after an event has occurred, so that loss of life and property is reduced.
Debris flows can discharge large quantities of debris (with volumes up to millions of cubic meters) at high velocities. This makes them extremely hazardous phenomena; debris flow hazards result in high risk, particularly when flows encroach on urban areas or transportation routes.
The need to assess debris flow hazards and reduce the associated risk calls for better knowledge of these processes and the implementation of effective management measures. Monitoring and warning systems play a vital role in research on debris flows and, as a non-structural measure, in minimizing risk.
2. LITERATURE REVIEW
i. Neural network-based early warning system for debris flow disaster.
The key techniques for building a real-time forecast model for debris flow disasters using the neural network (NN) technique are explained, including the determination of the neural nodes at the input, output, and hidden layers, the construction of the data source, the initial weight values, and so on.
ii. Reliability and effectiveness of early warning systems for natural hazards: Concept and application to debris flow warning.
System reliability is classically described by the Probability of Detection (POD) and the Probability of False Alarms (PFA). It is demonstrated that EWS effectiveness, which is a measure of risk reduction, can be expressed as a function of POD and PFA. To model the EWS and compute its reliability, a framework based on Bayesian networks is developed, which is further extended to a decision graph, facilitating the optimization of the warning system. In a case study, the framework is applied to the assessment of an existing debris flow EWS.
iii. GIS-based susceptibility mapping and zonation of debris flows caused by Wenchuan Earthquake.
Debris flow susceptibility zonation (DBSZ) is critical for disaster management and for planning development activities in mountainous regions. The results of a pilot study for assessing debris flow hazards using geographic information system (GIS) techniques are given. The debris flows caused by the earthquake in Wenchuan County were identified on the basis of remote sensing image interpretation and field investigation. The debris flow nominal status factor technique was applied for DBSZ mapping in Wenchuan County.
iv. Disaster Management System Using GSM.
In this system, five sensors are used: an angle or tilt sensor, which gives readings of the slope angle if there is any movement due to a landslide and is also used for tidal wave alerting; a rain gauge sensor, used to collect the depth of water in the mountains; a soil drift sensor, used for landslide detection; earthquake sensors, used for earthquake detection; and a temperature sensor, used to collect the temperature.
v. Natural Disasters Alert System Using Wireless Sensor Network.
The sensor nodes are custom-developed float sensors and acceleration sensors with a low-power readout ASIC for long life. The accelerometers are used to measure the seismic response of an earthquake: they detect vibrations during an earthquake event and send data to a remote base station, where data from multiple sensors across the town are collected. An RF module providing a low-power network architecture is implemented over IEEE 802.15.4.
vi. Development of Wireless Sensor Node for Landslide Detection.
This system is based on a wireless sensor network (WSN) composed of sensor nodes, a gateway, and a server system. Sensor nodes comprising sensing and communication parts are implemented to detect ground movement. The sensing part is designed to measure inclination angle and acceleration accurately, and the communication part uses a Bluetooth (IEEE 802.15.1) module to transmit the data to the gateway. To verify the feasibility of this landslide prediction system, a series of experimental studies was performed on a small-scale earth slope equipped with an artificial rainfall device. It was found that sensor nodes installed on the slope can detect the ground motion when the slope starts to move.
3. SYSTEM OVERVIEW
Initially, sites prone to disaster are surveyed, and sensors to measure the different parameters of debris flow are deployed at those sites. These sensors are in turn connected to a microcontroller (N-Mote) which controls and collects data from the sensors.
Communication from the N-Mote to the N-Gateway is done using the ZigBee protocol (IEEE 802.15.4). ZigBee is used for short-range communication, i.e., about 600 m line-of-sight (LOS) and 200 m non-line-of-sight (NLOS).
A gateway (N-Gateway) is used to connect the N-Mote to the cloud; the gateway is connected to the internet through WiFi or Ethernet. When certain thresholds are reached, the data is sent to the respective destination (public/rescue team).
The impact force of debris flows consists of two components: the dynamic pressure of fluid and the collisional force of single boulders.
D. Measurement of Soil moisture.
A soil moisture sensor is used to measure the amount of water content. If there is a large amount of water content in mountainous regions, there is a greater chance of debris flows occurring.
If there is a large amount of water content in the soil, the sensor outputs a low voltage; if the water content is low, the sensor outputs a high voltage.
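The inverse voltage/moisture relation above can be sketched in a few lines of Python. This is an illustration only, not the system's firmware: the 10-bit ADC range, 5 V reference, and 2.0 V saturation threshold are assumed values for the sketch.

```python
# Hypothetical helper illustrating the inverse voltage/moisture relation:
# wetter soil yields a LOWER sensor voltage. The ADC range and the 2.0 V
# alert threshold are assumptions, not values from the paper.

ADC_MAX = 1023          # 10-bit ADC (e.g. on an ATmega-class controller)
V_REF = 5.0             # reference voltage in volts
WET_THRESHOLD_V = 2.0   # below this voltage we treat the soil as saturated

def adc_to_voltage(raw):
    """Convert a raw ADC count to a voltage."""
    return raw * V_REF / ADC_MAX

def soil_is_saturated(raw):
    """Low voltage means high water content, so debris flow risk rises."""
    return adc_to_voltage(raw) < WET_THRESHOLD_V

# A low reading (wet soil) trips the alert; a high reading does not.
print(soil_is_saturated(200))   # wet soil -> True
print(soil_is_saturated(900))   # dry soil -> False
```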
E. N-Mote Section
Sensors: Used by wireless sensor nodes to capture data from their environment. They are hardware devices that produce a measurable response to a change in a physical condition such as temperature or pressure. Sensors measure physical data of the parameter to be monitored and have specific characteristics such as accuracy and sensitivity.
Controller (ATmega128): The controller performs tasks, processes data, and controls the functionality of the other components in the sensor node. All the data coming from the sensors is extracted and stored in the microcontroller's memory and then sent to the ZigBee transceiver inside the N-Mote module.
ZigBee transceiver: A ZigBee module inside the N-Mote receives the data coming from the sensors and transmits it wirelessly to the ZigBee module in the N-Gateway. This is set up using the XCTU software, in which the sender and receiver are configured separately so that the two ZigBee modules can communicate wirelessly.
Fig 2: N-Mote section Flow chart.
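The controller-to-ZigBee hand-off described above amounts to framing sensor readings into a serial packet that the radio forwards transparently. A minimal Python sketch of such framing is shown below; the field order, node id, and checksum scheme are illustrative assumptions, not the system's actual packet format.

```python
# Illustrative sketch (not the paper's actual firmware): the N-Mote's
# controller gathers sensor readings and frames them as one ASCII line
# for the ZigBee module to transmit over its serial link.

def frame_packet(node_id, soil_moisture, accel_xyz, distance_cm):
    """Build a comma-separated packet with a trailing checksum byte."""
    payload = "{},{},{:.2f},{:.2f},{:.2f},{}".format(
        node_id, soil_moisture, *accel_xyz, distance_cm)
    checksum = sum(payload.encode()) % 256
    return "{}*{:02X}\n".format(payload, checksum)

def parse_packet(line):
    """Verify the checksum and split the packet back into fields."""
    payload, checksum = line.strip().rsplit("*", 1)
    if sum(payload.encode()) % 256 != int(checksum, 16):
        raise ValueError("corrupted packet")
    return payload.split(",")

pkt = frame_packet("mote-01", 512, (0.01, -0.02, 0.98), 37)
print(parse_packet(pkt))  # ['mote-01', '512', '0.01', '-0.02', '0.98', '37']
```

The checksum lets the receiving side discard packets corrupted on the radio link instead of pushing bad readings to the cloud.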
F. N-Gateway Section.
The data sent from the ZigBee module in the N-Mote is acquired by the ZigBee receiver in the gateway. The N-Gateway contains an N-Mote plus an additional processor, an Orange Pi running the Armbian operating system, which processes the data coming from the ZigBee receiver and sends it to the cloud. All the data coming from the ZigBee module can be viewed on a terminal by running a Python script under the Linux operating system. The received serial data is sent to the cloud via Ethernet or WiFi through the gateway.

**Fig 3: N-Gateway section Flow chart.**
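A minimal sketch of the gateway-side Python script described above: read framed lines from the ZigBee receiver's serial port and forward them to the cloud as JSON over HTTP. The device path, URL, token, and field names are placeholders, and the `pyserial` package (imported as `serial`) is assumed to be installed on the Orange Pi.

```python
# Gateway-side sketch: parse one CSV line from the N-Mote and POST it to
# the cloud. SERIAL_PORT, CLOUD_URL and TOKEN are placeholders, not the
# system's real endpoints.

import json
import urllib.request

SERIAL_PORT = "/dev/ttyUSB0"                                   # placeholder
CLOUD_URL = "https://example.invalid/api/v1.6/devices/n-mote"  # placeholder
TOKEN = "YOUR-CLOUD-TOKEN"                                     # placeholder

def parse_line(line):
    """Turn one CSV line from the N-Mote into a variable dictionary."""
    node, soil, ax, ay, az, dist = line.strip().split(",")
    return {"soil_moisture": int(soil), "accel_x": float(ax),
            "accel_y": float(ay), "accel_z": float(az),
            "distance": float(dist)}

def push_to_cloud(payload):
    """POST the readings as JSON; the cloud stores them in its database."""
    req = urllib.request.Request(
        CLOUD_URL, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN})
    return urllib.request.urlopen(req)

# In the real script the loop would read from the serial port, e.g.:
#   import serial
#   with serial.Serial(SERIAL_PORT, 9600) as port:
#       for raw in port:
#           push_to_cloud(parse_line(raw.decode()))
print(parse_line("mote-01,512,0.01,-0.02,0.98,37.0"))
```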
G. Cloud Section
An N-Gateway is used to connect devices which use different protocols; here it connects the N-Mote to the cloud. The sensor data coming from the N-Mote is sent to the N-Gateway, and from the N-Gateway it is sent to the cloud using Python scripting.
The data sent to the cloud is stored in a cloud database. An initial threshold value is set; if the threshold is reached, i.e., if a disaster is likely to occur, the cloud sends messages to the user's mobile phone, Gmail account, and the weather forecast department.
A reverse process is also set up: if the threshold value is reached, a reverse signal from the cloud is sent back to the sensor node, where an alarm is triggered to alert the public in case of emergency.

**Fig 4: Cloud section Flow chart.**
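The two-way threshold logic of this section can be sketched as below: when any reading crosses its limit, the system both notifies subscribers and sends a reverse signal back toward the sensor node to sound the local alarm. The threshold values and variable names are illustrative assumptions.

```python
# Sketch of the cloud-side threshold check and two-way alerting described
# above. The limits below are made-up illustrative values.

THRESHOLDS = {"soil_moisture": 700, "vibration_g": 1.5, "flow_depth_cm": 50}

def check_thresholds(reading):
    """Return the list of variables that exceeded their limits."""
    return [name for name, limit in THRESHOLDS.items()
            if reading.get(name, 0) >= limit]

def dispatch(reading, notify, signal_node):
    """Fan out alerts: messages to people, a reverse signal to the node."""
    breached = check_thresholds(reading)
    if breached:
        notify("Debris flow warning: {} over limit".format(", ".join(breached)))
        signal_node("ALARM_ON")   # reverse path: cloud -> gateway -> mote
    return breached

sent = []
dispatch({"soil_moisture": 800, "vibration_g": 0.2},
         notify=sent.append, signal_node=sent.append)
print(sent)   # ['Debris flow warning: soil_moisture over limit', 'ALARM_ON']
```

Passing `notify` and `signal_node` as callables keeps the threshold logic separate from the delivery channels (SMS, e-mail, ZigBee downlink), which can then be swapped without touching the check itself.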
4. WORKFLOW LANGUAGES
**Embedded C** is a set of language extensions for the C programming language issued by the C Standards Committee to address commonality issues that exist between C extensions for different embedded systems. Historically, embedded C programming required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
In 2008, the C Standards Committee extended the C language to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
Embedded C uses most of the syntax and semantics of standard C, e.g., `main()` function, variable definition, datatype declaration, conditional statements (if, switch case), loops (while, for), functions, arrays and strings, structures and union, bit operations, macros, etc.
**Python** is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. It is an interpreted language.
Python has a large standard library, commonly cited as one of Python's greatest strengths, providing tools suited to many tasks. This is deliberate and has been described as Python's "batteries included" philosophy. For internet-facing applications, many standard formats and protocols (such as MIME and HTTP) are supported.
Modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary precision decimals, manipulating regular expressions, and doing unit testing are also included. Some parts of the standard library are covered by specifications (for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most modules are not. They are specified by their code, internal documentation, and test suites (if supplied).
However, because most of the standard library is cross-platform Python code, only a few modules need altering or rewriting for variant implementations.
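A few of those standard-library "batteries" in action, usable without installing anything beyond the interpreter (the log line and values are made up for illustration):

```python
# Standard-library examples: regular expressions, arbitrary-precision
# decimal arithmetic, and the built-in unit-testing framework.

import re
import unittest
from decimal import Decimal

# Regular expressions: pull a sensor id out of a (hypothetical) log line.
match = re.search(r"mote-(\d+)", "reading from mote-07 at 12:00")
print(match.group(1))                     # '07'

# Decimal avoids binary floating-point surprises like 0.1 + 0.2 != 0.3.
print(Decimal("0.1") + Decimal("0.2"))    # Decimal('0.3')

# unittest ships with the interpreter as well:
class TestDecimal(unittest.TestCase):
    def test_exact_sum(self):
        self.assertEqual(Decimal("0.1") + Decimal("0.2"), Decimal("0.3"))
```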
5. RESULTS
The various data collected from the sensors at different time intervals and stored in the Ubidots cloud are shown below.
Fig 5: Soil Moisture.
Fig 6: Accelerometer X-axis.
Fig 7: Accelerometer Y-axis
Fig 8: Accelerometer Z-axis.
Fig 9: Distance.
6. CONCLUSION AND DISCUSSION
An overview of the sensors and systems most commonly employed for debris flow monitoring and warning has been presented. Debris flow monitoring has made important contributions to the understanding of these hazardous natural processes.
Finally, the need for a careful management and maintenance of debris flow warning systems must be stressed. The presence of a debris flow warning system could actually induce a feeling of safety, which is justified only if proper and continuous operation of the system is ensured.
[99mTc]Tc-Galacto-RGD2 integrin αvβ3-targeted imaging as a surrogate for molecular phenotyping in lung cancer: real-world data
Jingjing Fu1†, Yan Xie1†, Tong Fu2†, Fan Qiu1, Fei Yu1, Wei Qu1, Xiaochen Yao1, Aiping Zhang3, Zhenhua Yang4, Guoqiang Shao1, Qingle Meng1, Xiumin Shi1, Yue Huang5*, Wei Gu4* and Feng Wang1*
Abstract
Background: Epidermal growth factor receptor tyrosine kinase inhibitors (TKIs) are beneficial in patients with lung cancer. We explored the clinical value of [99mTc]Tc-Galacto-RGD2 single-photon emission computed tomography (SPECT/CT) in patients with lung cancer; integrin αvβ3 expression and neovascularization in lung cancer subtypes were also addressed.
Methods: A total of 185 patients with lung cancer and 25 patients with benign lung diseases were enrolled in this prospective study from January 2013 to December 2016. All patients underwent [99mTc]Tc-Galacto-RGD2 imaging. The region of interest was drawn around each primary lesion, and tumour uptake of [99mTc]Tc-Galacto-RGD2 was expressed as the tumour/normal tissue ratio (T/N). The diagnostic efficacy was evaluated by receiver operating characteristic curve analysis. Tumour specimens were obtained from 66 patients with malignant diseases and 7 with benign disease. Tumour expression levels of αvβ3, CD31, Ki-67, and CXCR4 were further analysed for the evaluation of biological behaviours.
Results: The lung cancer patients included 22 cases of small cell lung cancer (SCLC), 48 of squamous cell carcinoma (LSC), 97 of adenocarcinoma (LAC), and 18 other types of lung cancer. The sensitivity, specificity, and accuracy of [99mTc]Tc-Galacto-RGD2 SPECT/CT using a T/N ratio cut-off value of 2.5 were 91.89%, 48.0%, and 86.67%, respectively. Integrin αvβ3 expression was higher in non-SCLC than in SCLC, while LSC showed denser neovascularization and higher integrin αvβ3 expression. Integrin αvβ3 expression levels were significantly higher in advanced (III, IV) than in early (I, II) stages. However, there was no significant correlation between tumour uptake and αvβ3 expression.
Conclusions: [99mTc]Tc-Galacto-RGD2 SPECT/CT has high sensitivity but limited specificity for detecting primary lung cancer; integrin expression in tumour vessels and on the tumour cell membrane contributes to tumour uptake.
*Correspondence: firstname.lastname@example.org; email@example.com; firstname.lastname@example.org; email@example.com
†Jingjing Fu, Yan Xie and Tong Fu have contributed equally to this work.
Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China
Department of Respiratory, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China
Department of Pathology, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China
**Background**
Lung cancer is the leading cause of cancer mortality worldwide [1, 2]. The incidence and mortality of lung cancer in China have increased rapidly in the last three decades, associated with increases in air pollution and tobacco consumption [3, 4]. However, new clinical treatment strategies, such as antiangiogenic agents, epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs), and immunotherapy, have significantly improved the outcomes of patients with lung cancer in the last decade [5]. TKIs have a cytostatic effect on tumour cells by slowing their growth and preventing the development of distant metastases [6, 7]. Multiplex genetic sequencing has been used to select appropriate treatment, based on the recommendation of the American Society of Clinical Oncology (ASCO); however, this requires obtaining enough tumour tissue by biopsy or surgery. Unfortunately, suitable tumour specimens are unavailable for some patients due to tumour heterogeneity or an undetermined primary lesion.
Angiogenesis plays important roles in tumour initiation, development, and metastasis [8]. Integrins are a diverse family of glycoproteins that form heterodimeric receptors for extracellular matrix molecules [9–11], of which integrin $\alpha_v\beta_3$, with an exposed arginine-glycine-aspartate (RGD) tripeptide sequence, is the most extensively studied [11]. Integrin $\alpha_v\beta_3$ is highly expressed in the neovasculature in solid tumours, including neuroblastoma, osteosarcoma, glioblastoma, breast cancer, and prostate cancer [12–20]. The highly restricted expression of integrin $\alpha_v\beta_3$ in normal tissues compared with its overexpression in tumour cells suggests that it may provide an interesting molecular target for the early detection of malignant tumours [12]. Overexpression of integrin $\alpha_v\beta_3$ was also correlated with tumour invasiveness in breast cancer, indicating a possible role in evaluating metastatic potential [19].
Radiolabelled RGD peptides as target ligands for angiogenesis imaging have been well documented in preclinical and clinical studies [12, 21, 22]. In a previous multicentre study, we showed that $^{99m}$Tc-labelled RGD dimers, such as [$^{99m}$Tc]Tc-3PRGD$_2$, had high sensitivity for the detection of lung cancer, including primary and metastatic tumours [21, 23, 24]. [$^{99m}$Tc]Tc-Galacto-RGD$_2$, with higher affinity for $\alpha_v\beta_3$ and a favourable biodistribution, has been synthesized and utilized for the quantitative evaluation of $\alpha_v\beta_3$ expression and of tumour angiogenesis [25].
Clinically, multiple lymphadenopathies and remote metastases develop rapidly in highly aggressive lung cancer even after radical resection and comprehensive treatment; we suppose that some key molecules mediate tumour development and metastasis. Therefore, we conducted a longitudinal study to evaluate the clinical role of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT in a large population of patients with lung neoplasms. We also explored the expression of integrin $\alpha_v\beta_3$ protein in tumour cells and in the neovasculature, and determined the capability of the technique to detect lymphadenopathy and bone metastasis in patients with advanced lung cancer. Herein, we investigated the value of RGD-based imaging as a surrogate for molecular phenotyping in lung cancer.
**Methods**
**Patients**
This prospective, single-centre study enrolled patients referred to our centre with suspected lung neoplasms from January 2013 to December 2016. [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT was performed in all patients; the final diagnosis was confirmed by histopathology based on needle biopsy or surgery. A total of 210 consecutive patients (147 male, 63 female; mean age 63.80 ± 10.51 years, range 21–85 years) were enrolled and analysed. Of the 210 patients, 185 were confirmed to have lung cancer; the other 25 patients had benign pulmonary diseases and served as the control. Patients who had undergone perioperative chemotherapy or radiotherapy were excluded from this study; the schema of the study is shown in Fig. 1.
**[$^{99m}$Tc]Tc-Galacto-RGD$_2$ radiolabelling and quality control**
[$^{99m}$Tc]Tc-Galacto-RGD$_2$ labelling was carried out as described previously [25]. The Galacto-RGD$_2$ was kindly provided by the School of Health Sciences, Purdue University (Indiana, USA). Chemicals were purchased from Sigma-Aldrich (St. Louis, MO). Na[$^{99m}$Tc]TcO$_4$ was obtained from DongCheng Pharmaceutical Company (Nanjing, China). Briefly, radiolabelling was performed with a lyophilized kit formulation containing 20 μg Galacto-RGD$_2$, 7 mg TPPTS (trisodium triphenylphosphine-3,3′,3″-trisulfonate), 6.5 mg tricine, 40 mg mannitol, 38.5 mg disodium succinate hexahydrate, and 12.7 mg succinic acid. $^{99m}$Tc-labelling was accomplished by adding 1–1.5 mL of Na[$^{99m}$Tc]TcO$_4$ solution (1,110–1,850 MBq). The reconstituted vial was heated at 100 °C for 30 min and the resulting solution was analysed by
radio-high-performance liquid chromatography using a Lab Alliance system equipped with a β-Ram IN/US detector and Zorbax C18 column (4.6 mm × 250 mm, 300 Å pore size, Waters Xbridge C18, Milford, MA). The flow rate was 1 mL/min; the mobile phase was isocratic with 90% solvent A (25 mM NH$_4$OAc buffer, pH 6.8) and 10% solvent B (acetonitrile) at 0–5 min, followed by a gradient from 10% B at 5 min to 40% B at 20 min. The radiochemical purity was > 95% for all imaging.
**[$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging and interpretation**
The radiochemical purity was 95.1% ± 2.9%. [$^{99m}$Tc]Tc-Galacto-RGD$_2$ was administered at 555–740 MBq (15–20 mCi) and whole-body images were acquired at 1 h post-injection. Chest images, including the upper abdomen and adrenal glands, were acquired using a combined transmission and emission device with an x-ray tube and detector. An all-purpose collimator was used, centred on the 140-keV energy peak with a 20% symmetrical energy window. Thirty projection images were acquired over a 180° arc at 6° intervals for each SPECT head, with an acquisition time of 30 s per projection. The transaxial data were reconstructed using ordered-subset expectation maximization (2 iterations, 8 subsets; Symbia T6 SPECT/CT, Siemens AG, Germany). Anatomic CT images were used for attenuation correction and tumour localization. If unexpected lesions were detected on whole-body imaging, additional abdominal or pelvic images were also acquired.
All images were interpreted independently on the computer monitor in three orthogonal planes by nuclear medicine physicians and a radiologist who were unaware of the clinical information and other imaging examinations. Significantly greater local uptake of [$^{99m}$Tc] Tc-Galacto-RGD$_2$ compared with the adjacent surrounding lung was interpreted as demonstrating a malignant lesion, and uptake less than or equal to the adjacent or surrounding lung was interpreted as a benign lesion. Focal activity in the hilum and mediastinum greater than the surrounding mediastinal activity was interpreted as...
lymphadenopathy. Regions of interest (ROIs) were drawn around the primary lesion and over contralateral normal lung tissue, and [$^{99m}$Tc]Tc-Galacto-RGD$_2$ uptake was measured and expressed as the tumour/normal tissue ratio (T/N).
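The semi-quantitative T/N ratio amounts to dividing the mean counts in the tumour ROI by the mean counts in the contralateral lung ROI. A minimal sketch is shown below; the count values are synthetic, for illustration only, not patient data.

```python
# Illustrative T/N computation: mean counts in the tumour ROI divided by
# mean counts in a mirrored ROI over contralateral normal lung.
# The count lists below are made up for the example.

def tn_ratio(tumour_counts, normal_counts):
    """Tumour-to-normal uptake ratio from per-voxel ROI counts."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(tumour_counts) / mean(normal_counts)

tumour_roi = [420, 455, 430, 445]   # counts in the lesion ROI (synthetic)
normal_roi = [150, 140, 160, 150]   # contralateral lung ROI (synthetic)

ratio = tn_ratio(tumour_roi, normal_roi)
print(round(ratio, 2))              # 2.92 -> above the 2.5 cut-off
```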
**Composite reference standard**
All available cytologic, histologic, follow-up, and imaging findings were used as a composite reference standard for the presence of tumour lesions. This is considered the optimal gold standard because cytologic or histologic verification of every lesion was not feasible or justifiable in these patients. Whenever possible, new findings on [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT were verified by additional investigations.
**Immunohistochemistry (IHC) analysis**
Tumour specimens were obtained from patients who underwent complete resection or biopsy. Sections were fixed in formalin, embedded in paraffin, cut at 3 µm, dewaxed in xylene, and rehydrated in graded ethanol. Integrin $\alpha_v\beta_3$, Ki-67, CXCR4, and CD31 expression were analysed by IHC to evaluate biological tumour behaviour. Integrin $\alpha_v\beta_3$ and CXCR4 expression, microvessel density (CD31), and tumour cell proliferation (Ki-67) were detected by incubating the slides overnight with monoclonal antibodies against human integrin $\alpha_v\beta_3$ (1:200, sc-7312; Santa Cruz Biotechnology, Santa Cruz, California, US), CXCR4 (1:100, ab227767; Abcam, Massachusetts, US), Ki-67 (1:100, ab270650; Abcam), or CD31 (1:50, ab28364; Abcam), followed by horseradish peroxidase-conjugated anti-mouse IgG (1:1000, EarthOx, Millbrae, California, US) with 3,3′-diaminobenzidine as the chromogen. Haematoxylin and eosin (H&E) staining was also performed. All images were obtained at 100× magnification with the same exposure time, and brightness and contrast were adjusted identically in all images. Integrin $\alpha_v\beta_3$ and CXCR4 expression levels were quantified as the optical density (OD) after immunostaining.
**Statistical analysis**
All statistical analyses were carried out using R software (version 3.6.1) and graphs were constructed using GraphPad Prism (version 7). Continuous variables with a non-normal distribution are expressed as median (interquartile range). Differences in T/N and protein expression levels among groups were compared using Wilcoxon's rank-sum or Kruskal–Wallis tests. The sensitivity, specificity, area under the curve (AUC), and cut-off value of T/N were evaluated by receiver operating characteristic (ROC) curve analysis. Correlations between non-normally distributed continuous variables were evaluated by Spearman's rank correlation analysis. Bonferroni's correction was applied for multiple comparisons. Statistical significance was set at $p < 0.05$.
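For readers who want to reproduce this style of ROC analysis, the cut-off selection can be sketched in a few lines of Python with scikit-learn (the study itself used R). The data below are synthetic, loosely mimicking the reported group medians and sample sizes; they are not the study data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic T/N ratios: malignant centred near 6.8 (n = 185),
# benign centred near 2.5 (n = 25); illustrative only.
malignant = rng.lognormal(mean=np.log(6.8), sigma=0.5, size=185)
benign = rng.lognormal(mean=np.log(2.5), sigma=0.5, size=25)

y_true = np.concatenate([np.ones(malignant.size), np.zeros(benign.size)])
scores = np.concatenate([malignant, benign])

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)

# Youden's J statistic picks the cut-off maximizing sensitivity + specificity - 1
best_cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {auc:.2f}, optimal T/N cut-off = {best_cutoff:.2f}")
```

At any candidate cut-off, sensitivity is the corresponding `tpr` value and specificity is `1 - fpr`, which is how the paired sensitivity/specificity figures in the Results can be read off the curve.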
**Results**
**Patient characteristics**
The clinical characteristics of the patients are shown in Table 1. Of the 210 consecutive patients enrolled in this study, 185 (88.1%) had malignant neoplasms identified by histopathology, including 22 patients with small cell lung cancer (SCLC), 97 with adenocarcinoma (LAC), 48 with squamous cell carcinoma (LSC), and 18 with other malignant lung tumours. Tumour tissues were obtained during thoracic surgery ($n = 118$), fine-needle aspiration ($n = 35$), or bronchoscopy ($n = 32$). Of the 25 patients with benign respiratory diseases, the benign nature of the lesion was confirmed during clinical follow-up in 12 patients, by histopathology in 7, and at imaging follow-up in 6. According to the 8th edition of the Tumour, Node, and Metastasis (TNM) classification of lung cancer [26], 37 patients were diagnosed with stage I (20.00%), 13 with stage II (7.03%), 40 with stage III (21.62%), and 95 with stage IV (51.35%) disease. The volume of the primary tumour (median (interquartile range): 28.01 (12.30, 76.33) mm$^3$) was significantly higher in patients with malignant than with benign disease (10.89 (8.66, 15.77) mm$^3$) (Wilcoxon's rank-sum test, $p < 0.01$).
| Variable | Lung cancer | Benign disease | $P$ |
|----------|-------------|----------------|-----|
| **General** | | | |
| Age (years) | 64.17 ± 10.15 | 61.04 ± 12.77 | 0.25 |
| Sex | | | |
| Male | 133 (71.89%) | 14 (56.00%) | 0.16 |
| Female | 52 (28.11%) | 11 (44.00%) | |
| **Cancer type** | | | |
| LAC | 97 (52.43%) | | |
| LSC | 48 (25.95%) | | |
| SCLC | 22 (11.89%) | | |
| Other | 18 (9.73%) | | |
| **Stage** | | | |
| I | 37 (20.00%) | | |
| II | 13 (7.03%) | | |
| III | 40 (21.62%) | | |
| IV | 95 (51.35%) | | |
*LAC* adenocarcinoma, *LSC* squamous cell carcinoma, *SCLC* small cell lung cancer
**[99mTc]Tc-Galacto-RGD$_2$ imaging and interpretation**
High-contrast images acquired 1 h after injection of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ showed high focal uptake in malignant primary tumours and metastatic lymph nodes (Fig. 2) and significantly lower uptake in benign lesions. The T/N ratio (median (interquartile range)) was 6.84 (4.62, 9.86) in malignant disease versus 2.53 (1.24, 3.91) in benign disease, $p < 0.01$. We also compared uptake among lung cancer subtypes (Fig. 3). [$^{99m}$Tc]Tc-Galacto-RGD$_2$ uptake was highest in LSC (T/N: 8.53 (6.75, 10.99)), followed by LAC (T/N: 6.84 (4.64, 9.07)) and SCLC (T/N: 4.73 (2.47, 5.85)). Other types of lung cancer (T/N: 5.23 (3.32, 11.50)) showed moderate uptake in the primary tumour, with no significant difference from LSC, LAC, or SCLC.
**Fig. 2** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging showed RGD-avid uptake in the primary tumour, lymphadenopathy, and remote metastases in a patient with suspected multiple myeloma. The final diagnosis was lung adenocarcinoma confirmed by bronchoscopic biopsy. **a** Primary lesion presented in the lung window. **b**, **c** Enhanced CT showed the primary tumour and lymphadenopathy in the right hilum. **d**, **e** Bone scan showed a lytic lesion in the right rib and sclerotic lesions in the pelvis. **f**, **g** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ images showed avid lesions in the right lung and the right hilum. **h**–**j** IHC staining: **h** $\alpha_v\beta_3$ expression; **i** CD31 expression in the neovasculature; **j** CXCR4 expression in tumour tissue
**Fig. 3** Ratio of primary tumour to normal tissue (T/N) in different groups. *$p<0.05$, **$p<0.01$, ***$p<0.001$
We also compared uptake by the primary tumour between locoregional and advanced stages. T/N was significantly lower in stages I–II (5.78 (3.62, 7.95)) than in advanced stages (III–IV; 7.28 (5.43, 10.34)), $p < 0.01$. However, there was overlap with inflammatory pseudotumours and tuberculosis. RGD avidity was also found in one case each of pulmonary sequestration and thymoma, owing to a higher density of microvessels (Figs. 4, 5). ROC analysis indicated that the sensitivity, specificity, and accuracy of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ were 91.89%, 48.0%, and 86.67%, respectively, using a T/N cut-off value of 2.5. With a T/N cut-off value of 3.94, the AUC was 0.83 and the sensitivity and specificity were 82.7% and 76.0%, respectively.
**Histopathology and IHC**
Of the 210 patients with suspected lung cancer, immunohistochemistry (IHC) was performed in 66 patients with lung cancer and seven patients with benign diseases. Expression levels (median (interquartile range)) of integrin $\alpha_v\beta_3$ were significantly higher in lung cancer (OD: 15,020.5 (4482.6, 44,455.2)) than in benign diseases (OD: 1797.8 (794.0, 2943.6); $p < 0.01$) (Table 2). CD31 levels were also elevated in lung cancer (MVD: 21.9 (13.75, 34.35) vs 9.00 (8.90, 11.50); $p < 0.01$). Higher levels
**Fig. 4** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging showed focal uptake in a patient with pulmonary sequestration. **a**, **b** Enhanced CT showed that the blood supply of the lesion originated from the abdominal aorta. **c**, **d** RGD imaging showed focal uptake in the tissue, with necrosis in its central zone.
**Fig. 5** Female patient with an anterior mediastinal lesion; the final diagnosis was thymoma. **a**–**d** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging showed high focal uptake. **e** The tumour surface showed many new vessels. **f**, **g** IHC showed a high density of neovascularization with high expression of CD31, but no significant $\alpha_v\beta_3$ expression on the tumour cell membrane. **h** H&E staining.
**Table 2** Immunohistochemical results of lung cancer and benign disease
| Variable          | Lung cancer (n = 66) | Benign disease (n = 7) | p |
|-------------------|----------------------|------------------------|-------|
| Integrin $\alpha_v\beta_3$ (OD) | 15,020.5 (4482.6, 44,455.2) | 1797.8 (794.0, 2943.6) | 1.08E−03 |
| CXCR4 (OD) | 5120.0 (1978.0, 18,460.0) | 538.6 (300.0, 7101.7) | 0.08 |
| CD31 (MVD) | 21.9 (13.75, 34.35) | 9.00 (8.90, 11.50) | 5.56E−03 |
| Ki-67 (%) | 20.00 (7.46,40.00) | | |
Bold values indicate statistically significant results
*OD* optical density, *MVD* microvessel density. Non-normally distributed data are shown as median (interquartile range)
of integrin $\alpha_v\beta_3$ were expressed in advanced tumours (OD: 19,729.00 (6445.40, 45,288.30)) than in locoregional tumours (5914.40 (1461.60, 17,658.20)), $p < 0.05$. Integrin $\alpha_v\beta_3$ was highly expressed not only in endothelial cells of the neovasculature, reflected by CD31 expression, but also in tumour cells (Fig. 6), and primary tumours with a denser neovasculature showed higher integrin $\alpha_v\beta_3$ expression. Integrin $\alpha_v\beta_3$ was also significantly correlated with CD31 expression in lung cancer ($r = 0.30$, $p = 0.016$). However, there was no correlation between tumour uptake of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ and integrin $\alpha_v\beta_3$ expression in the primary tumour in this study (Fig. 7). Squamous lung cancer usually showed higher $\alpha_v\beta_3$ levels in tumour cells and a higher microvessel density, consistent with the RGD imaging shown in Fig. 8. Aggressive LAC tended to express higher levels of integrin $\alpha_v\beta_3$ in tumour cells and to have denser microvessels, with focal uptake on RGD imaging as shown in Fig. 9. Neovascularization varied in benign respiratory diseases and was associated with higher integrin $\alpha_v\beta_3$ expression. In the current study, integrin $\alpha_v\beta_3$ correlated with CD31 expression in the neovessels, suggesting that integrin $\alpha_v\beta_3$ mediates angiogenesis, which contributes to tumour development and metastasis. We also examined CXCR4 expression. CXCR4 was highly expressed in lung cancer, as demonstrated by IHC, and CXCR4 expression levels tended to be positively correlated with integrin $\alpha_v\beta_3$ levels in lung cancer specimens ($r = 0.22$, $p > 0.05$). In addition, the proliferation index (Ki-67, median (interquartile range)) in LSC and SCLC (27.45 (11.88, 42.00) and 70.00 (55.13, 73.48), respectively) was significantly higher than in LAC (10.15 (2.98, 27.89)) (Table 3).
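The Spearman correlations quoted above can be sketched as follows. The paired values are synthetic and the variable names hypothetical; the study's actual analysis was done in R:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# Hypothetical paired measurements for 66 tumours: integrin OD and
# microvessel density (CD31), with a weak monotonic association built in.
integrin_od = rng.lognormal(mean=np.log(15000), sigma=1.0, size=66)
mvd = 20 + 0.0004 * integrin_od + rng.normal(0, 10, size=66)

# spearmanr ranks both variables, so it is robust to the skewed OD scale
r, p = spearmanr(integrin_od, mvd)
print(f"Spearman r = {r:.2f}, p = {p:.3f}")
```

Spearman's rank correlation is the appropriate choice here because both OD and MVD are non-normally distributed, as stated in the statistical analysis section.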
**Lymphadenopathy and distant metastasis**
Of the 185 patients with lung cancer, 116 patients had lymphadenopathy, 87 had remote metastasis, 17 had multiple lung tumours including pleural invasion, and 70 patients had bone metastasis. The metastatic lymph
**Fig. 6** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT detected the primary tumour and multiple lymph node metastases in advanced adenocarcinoma. **a**–**d** [$^{99m}$Tc]Tc-Galacto-RGD$_2$ images: **a** whole-body image; **b**–**d** SPECT/CT. **e**–**g** IHC showed higher expression of $\alpha_v\beta_3$ in tumour cells and neovasculature, a higher microvessel density with CD31 expression in the tumour tissue, and a higher Ki-67 index.
nodes and remote metastases showed high focal uptake of [$^{99m}$Tc]Tc-Galacto-RGD$_2$. However, although lymphadenopathy was evaluated by imaging follow-up, the final diagnosis was not confirmed, and we were therefore unable to evaluate the diagnostic value of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging for lymphadenopathy and remote metastasis in this study.
**Discussion**
We previously validated the ability of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ to identify iodine-refractory status in patients with thyroid cancer [27]. In a rare case of a solitary fibrous tumour located in the main pulmonary artery, [$^{99m}$Tc]Tc-Galacto-RGD$_2$ imaging played an important role in detecting the primary tumour and predicting its metastatic potential [22]. In the current study, we evaluated [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT for the detection of lung cancer. We also explored the expression of integrin $\alpha_v\beta_3$ and CXCR4 in different lung cancer subtypes and compared the neovasculature among these subtypes. The correlations between tumour uptake of [$^{99m}$Tc]Tc-Galacto-RGD$_2$, integrin $\alpha_v\beta_3$ expression, and neovascularization were also explored.
High-contrast [$^{99m}$Tc]Tc-Galacto-RGD$_2$ images showed a significantly higher T/N ratio in malignant than in benign lung lesions: malignant primary tumours and metastatic lymph nodes showed high focal uptake, while benign lesions showed significantly lower uptake. [$^{99m}$Tc]Tc-Galacto-RGD$_2$ SPECT/CT showed high sensitivity for primary tumours and remote metastases; ROC analysis yielded a sensitivity of 91.89% and an accuracy of 86.67% at a T/N cut-off of 2.5. However, the specificity for differentiating malignant from benign disease was limited, possibly because integrin $\alpha_v\beta_3$ is involved in various benign diseases. Overlap typically occurs with tuberculosis and inflammatory pseudotumours, which usually show higher uptake of [$^{99m}$Tc]Tc-Galacto-RGD$_2$ than other benign diseases such as pneumonia [12].
In the current study, IHC showed that $\alpha_v\beta_3$ levels were higher in advanced lung cancer, and the proliferation index, represented by Ki-67, was significantly increased in advanced stages and in SCLC, consistent with metastatic potential [12, 19, 28]. Patients with lung cancer, even in the early stages, may develop multiple metastases several months after thorough tumour resection, possibly reflecting specific tumour types with higher metastatic potential [19]. In the current study, CXCR4 expression levels were higher in lung cancer than in benign disease, though the difference was not significant. CXCR4 expression was correlated with both integrin $\alpha_v\beta_3$ and CD31 expression in primary lung tumours, and integrin $\alpha_v\beta_3$ was in turn correlated with CD31. These findings support our hypothesis that lymphadenopathy and remote metastasis are mediated by specific biological molecules: integrin $\alpha_v\beta_3$ and CXCR4 may mediate angiogenesis, which may further promote lymph node and remote metastases. Integrin $\alpha_v\beta_3$-targeted imaging thus improves our understanding of the interactions between cancer cells and their microenvironment, a necessary prerequisite for the development of treatment strategies. This study showed that higher levels of integrin $\alpha_v\beta_3$ were expressed in advanced tumours and that integrin $\alpha_v\beta_3$ was highly expressed not only in endothelial cells of the neovasculature but also in tumour cells; higher uptake was found in primary tumours with a denser neovasculature and higher $\alpha_v\beta_3$ expression, which was associated with multiple lymphadenopathy and remote metastasis. These findings confirm integrin $\alpha_v\beta_3$ overexpression as an important component of the tumour microenvironment, related to tumorigenic and aggressive behaviour in lung cancer. CXCR4 has been implicated in the chemotactic migration of cancer cells [10].
CXCR4 and integrins might synergistically promote lymphatic metastasis in lung cancer and act as clinical predictors of lymph node metastasis in non-small cell lung cancer [29, 30]. High expression levels of chemokines are related to poor prognosis and poor chemotherapy tolerance in cancer patients [31–34]. CXCR4 is a chemokine receptor that plays a critical role in lymphocyte homing to lymphatic vessels and secondary lymphoid organs, including the lymph nodes [35].
Integrin αᵥβ₃ was expressed not only in tumour cells but also in the endothelium, although there was no correlation between tumour uptake of [⁹⁹ᵐTc]Tc-Galacto-RGD₂ and integrin αᵥβ₃ expression, possibly owing to the heterogeneity of lung cancer. Tumour uptake of [⁹⁹ᵐTc]Tc-Galacto-RGD₂ appeared related to neovascularization and tumour stage, and integrin αᵥβ₃ expression in tumour cells may promote lymphatic and distant metastases (Fig. 2). However, benign diseases showed variable degrees of angiogenesis, also associated with higher expression of integrin αᵥβ₃, as shown in one patient with thymoma and another with pulmonary sequestration (Figs. 4, 5). We hypothesized that tumour uptake of [⁹⁹ᵐTc]Tc-Galacto-RGD₂ depends on the neovasculature and on integrin αᵥβ₃ expression in tumour cells, so that focal uptake on RGD-targeted imaging would be higher in primary tumours with more neovasculature and higher membrane integrin αᵥβ₃ expression. Regarding the lung cancer subtypes, LSC had more neovascularization and higher integrin αᵥβ₃ expression, followed by LAC, while SCLC showed less neovascularization but a higher proliferation index. The highest T/N ratio was therefore found in LSC and was significantly higher than that in LAC and SCLC. RGD-targeted imaging may thus serve as a useful tool for the phenotyping of lung cancer.
[⁶⁸Ga]Ga- and [¹⁸F]F-labelled RGD tracers have been used in preclinical and clinical studies, and [⁶⁸Ga]Ga-NODAGA-RGD provides a spatial distribution different from that of 2-[¹⁸F]FDG. Notably, [¹⁸F]F-Galacto-RGD can be used to assess αᵥβ₃ expression not only in the tumour neovasculature but also in human atherosclerotic carotid plaques, where uptake correlates with αᵥβ₃ expression [36]. Compared with PET RGD tracers, [⁹⁹ᵐTc]Tc-Galacto-RGD₂ SPECT imaging has the disadvantage of lower spatial resolution. In terms of cost, however, [⁹⁹ᵐTc]Tc-Galacto-RGD₂ has a significant advantage, and its single-vial kit formulation makes synthesis more convenient; both factors favour the clinical translation and application of [⁹⁹ᵐTc]Tc-Galacto-RGD₂.
However, this study has some limitations. First, quantitation of integrin αᵥβ₃ expression by immunohistochemistry could be influenced by specimen sampling and by the selection of the fields of view. Second, tumour specimens were obtained from only 66 patients rather than from all suspected patients, which might have influenced the data analysis.
**Conclusions**
This was the first extensive longitudinal study to investigate the expression of integrin αᵥβ₃ in lung cancer. [⁹⁹ᵐTc]Tc-Galacto-RGD₂ imaging showed high sensitivity but limited specificity for the detection of primary lung cancer. [⁹⁹ᵐTc]Tc-Galacto-RGD₂ uptake in the primary tumour was attributed to integrin αᵥβ₃ expression in endothelial and tumour cells, and focal uptake occurred in primary lung cancers with more neovascularization and high levels of αᵥβ₃ in tumour cells. LSC had the highest density of neovessels and the highest αᵥβ₃ expression, followed by LAC and then SCLC, and advanced lung cancer showed higher levels of integrin αᵥβ₃ than early-stage disease. These findings suggest that RGD-based imaging might be a useful tool for lung cancer phenotyping and for evaluating tumour biological behaviour. Further studies are warranted to validate these findings.
Abbreviations
TKIs: Tyrosine kinase inhibitors; SPECT/CT: Single-photon emission computed tomography/computed tomography; T/N: Tumour/normal tissue ratio; SCLC: Small cell lung cancer; LSC: Squamous cell carcinoma; LAC: Adenocarcinoma; EGFR-TKI: Epidermal growth factor receptor tyrosine kinase inhibitor; CSCO: Chinese Society of Clinical Oncology; PET/CT: Positron emission tomography/computed tomography; RGD: Arginine-glycine-aspartate; ROI: Region of interest; H&E: Haematoxylin and eosin; OD: Optical density; MVD: Microvessel density; AUC: Area under the curve; ROC: Receiver operating characteristic; IHC: Immunohistochemistry.
Acknowledgements
We are grateful to our colleagues for clinical data collection and analysis. We thank Prof. Shuang Liu from the Health College of Purdue University for radiotracer synthesis and study design, and Susan Furness, PhD, from Liwen Bianji, Edanz Editing China (www.liwenbianji.cn/ac), for editing the English text of a draft of this manuscript.
Authors’ contributions
FW performed manuscript preparation and submission. QM was responsible for imaging acquisition. AZ was responsible for thoracic surgery. WG was responsible for clinical management. YF performed histopathology and immunohistochemistry analysis. NG, WG, and FW were responsible for study design. JF was responsible for clinical data collection and MOT performed manuscript writing. ZY was responsible for clinical data collection. WQ participated in the clinical trial. FY performed statistical analysis. YX performed clinical trial preparation. YX and GS were responsible for imaging interpretation. TF and GS performed radiolabelling. TF was responsible for quality control. XS and FQ performed tumour specimen collection and flow cytometry analysis. XY performed [⁹⁹ᵐTc]Tc-Galacto-RGD₂ imaging and interpretation. All authors read and approved the final manuscript.
Funding
This research was supported by grants from the National Natural Science Foundation of China (81602104, 82001952), Jiangsu Provincial Key Research and Development Special Fund (BE2017612), Nanjing Medical Foundation (ZKX17027), Health Commission of Jiangsu Province (H2019098), Nanjing Medical and Health International Joint Research and Development Project (201911042), The Second Round Fund of Nanjing Clinical Medical Center “Nanjing Nuclear Medicine Centre”.
Availability of data and materials
The datasets used and analysed during the current study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
All procedures performed in this study involving human participants were carried out in accordance with the ethical standards of the Nanjing Medical University and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all patients.
Consent for publication
All patients signed written consent prior to inclusion and consent for publication.
Competing interests
All authors declare no potential conflicts of interest.
Author details
1Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China. 2Department of Imaging, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China. 3Department of Thoracic Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China. 4Department of Respiratory, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China. 5Department of Pathology, Nanjing First Hospital, Nanjing Medical University, 68 Changle Road, Nanjing 210006, China.
Received: 23 February 2021 Accepted: 9 June 2021
Published online: 13 June 2021
References
1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394–424. https://doi.org/10.3322/caac.21497.
2. Chen W, Zheng R, Baade PD, Zhang S, Zeng H, Bray F, et al. Cancer statistics in China, 2015. CA Cancer J Clin. 2016;66:115–32. https://doi.org/10.3322/caac.21338.
3. Yang D, Liu B, Bai C, Wang X, Powell CA. Epidemiology of lung cancer and lung cancer screening programs in China and the United States. Cancer Lett. 2020;488:8–27. https://doi.org/10.1016/j.canlet.2019.10.009.
4. Zhang YC. The impact of tobacco on lung cancer. Chin J Lung Dis. Respir Care. 2008;17–7. https://doi.org/10.1007/BF02918403.2008x3.
5. Arbcour KC, Riely GJ. Systemic therapy for locally advanced and metastatic non-small cell lung cancer: a review. JAMA. 2019;322:764–74. https://doi.org/10.1001/jama.2019.1058.
6. Travis WD, Brambilla E, Nicholson AG, Yatabe Y, Austin JHM, Beasley MB, et al. The 2015 world health organization classification of lung tumors: implications for clinical and radiologic advances since the 2004 classification. J Thorac Oncol. 2015;10:1243–60. https://doi.org/10.1097/JTO.0000000000000630.
7. Ferguson FM, Gray NS. Kinase inhibitors: the road ahead. Nat Rev Drug Discov. 2018;17:353–77. https://doi.org/10.1038/nrd.2018.21.
8. Baeriswyl V, Christofori G. The angiogenic switch in carcinogenesis. Semin Cancer Biol. 2009;19:329–37. https://doi.org/10.1016/j.semcancer.2009.05.003.
9. Zaidel-Bar R. Job-splitting among integrins. Nat Cell Biol. 2013;15:575–7. https://doi.org/10.1038/ncb2770.
10. Wu D, Yu Y, Ding T, Zu Y, Yang C, Yu L. Pairing of integrins with ECM proteins determines migrasome formation. Cell Res. 2017;27:1397–400. https://doi.org/10.1038/cr.2017.108.
11. Ruoslahti E, Pierschbacher MD. New perspectives in cell adhesion: RGD and integrins. Science. 1987;238:491–7. https://doi.org/10.1126/science.2821619.
12. Niu G, Chen X. Why integrin is a primary target for imaging and therapy. Theranostics. 2011;1:30–47. https://doi.org/10.7150/thno.v0i1p030.
13. Demircioglu F, Hodivala-Dilke K. AlphaVbeta3 Integrin and tumour Blood vessels-learning from the past to shape the future. Curr Opin Cell Biol. 2016;42:121–7. https://doi.org/10.1016/j.jcobi.2016.07.008.
14. Sun X, Ma T, Liu H, Yu X, Wu Y, Shi J, et al. Longitudinal monitoring of tumor antiangiogenic therapy with near-infrared fluorephore-labeled agents targeting to human alphaVbeta3 and vascular endothelial growth factor. Int J Nucl Med Mol Imaging. 2014;4:1428–39. https://doi.org/10.1007/s12659-014-2702-3.
15. Joseph JM, Gross N, Lassau N, Vuillorff V, Opolon P, Laudani L, et al. In vivo echographic evidence of tumoral vascularization and microenvironment interactions in metastatic orthotopic human neuroblastoma xenografts. Int J Cancer. 2005;113:881–90. https://doi.org/10.1002/ijc.20681.
16. Zou W, Guo Y, Li Y, et al. Absence of Dap12 and the alphavbeta3 integrin causes severe osteopetrosis. J Cell Biol. 2015;208:125–36. https://doi.org/10.1083/jcb.201410123.
17. Zhang L, Meng X, Shan X, Gu T, Zhang J, Feng S, et al. Integrin alphav-beta3-specific hydroxyurea for cooperative targeting of glioblastoma with high sensitivity and specificity. Anal Chem. 2019;91:1287–95. https://doi.org/10.1021/acs.analchem.8b04700.
18. Wan L, Li G, Wang D, Li F, Men D, Hu T, et al. Quantitative profiling of integrin alphaVbeta3 on single cells with quantum dot labeling to reveal the phenotypic heterogeneity of glioblastoma. Nanoscale. 2019;11:2814–23. https://doi.org/10.1039/C9nr01105f.
19. Wu FH, Luo LJ, Liu Y, Zhao QX, Luo C, Luo J, et al. Cyclin D1b splice variant promotes alphavbeta3-mediated adhesion and invasive migration of breast cancer cells. Cancer Lett. 2018;435:159–67. https://doi.org/10.1016/j.canlet.2018.08.018.
20. Krishna SR, Singh A, Bowler N, Duffy AN, Friedman A, Fedele C, et al. Prostate cancer sheds the alphavbeta3 integrin in vivo through exosomes. Matrix Biol. 2019;77:41–57. https://doi.org/10.1016/j.matbio.2018.08.004.
21. Yan B, Qiu F, Ren L, Dai H, Fang W, Zhu H, et al. (99m)Tc-3PRGD2 molecular imaging of alphavbeta3 integrins in head and neck squamous cancer xenograft. J Radionucl Nucl Chem. 2015;304:1171–7. https://doi.org/10.1007/s12601-015-3038-2.
22. Luo R, Liang Y, Huang Y, Chen X, Wang F. Longitudinal observation of solitary fibrous tumor translation into malignant pulmonary artery intimal sarcoma. J Cardiol Thorac Surg. 2020;15:233. https://doi.org/10.1186/s13019-020-01131-3.
23. Zhou B, Fu T, Liu Y, Wei W, Dai H, Fang W, et al. 99mTc-3PRGD2 single-photon emission computed tomography/computed tomography for the diagnosis of choroidal melanoma: a preliminary STROBE-compliant observational study. Medicine (Baltimore). 2018;97:e12441. https://doi.org/10.1097/MD.00000000000012441.
24. Fu T, Qiu W, Qiu F, Li Y, Shao G, Tian W, et al. (99m)Tc-3PRGD2 micro-single-photon emission computed tomography/computed tomography provides a rational basis for integrin alphavbeta3-targeted therapy. Cancer Biother Radiopharm. 2014;29:351–8. https://doi.org/10.1089/cbr.2014.1622.
25. Ji S, Czerwinski A, Zhou Y, Shao G, Valenzuela F, Sowinski P, et al. (99m)Tc-Galacto-RGD2: a novel 99mTc-labeled cyclic RGD peptide dimer useful for tumor imaging. Mol Pharm. 2013;10:3304–14. https://doi.org/10.1021/mp400085d.
26. Kay FU, Kandathil A, Batra K, Saboo SS, Abbasa R, Rajain P. Revisions to the tumor, node, metastasis staging of lung cancer (8th edition): rationale, radiologic findings and clinical implications. World J Radiol. 2017;9:269–79. https://doi.org/10.4329/wjr.v9.i6.269.
27. Xu Q, Lu X, Wang J, Huang Y, Li Z, Zhang L, et al. Role of (99m)Tc-Galacto-RGD2 SPECT/CT in detecting metastatic differentiated thyroid carcinoma after thyroidectomy and radioactive iodine therapy. Nucl Med Biol. 2020;88:89–94.e3. https://doi.org/10.1016/j.nucmedbio.2020.06.006.
28. Hoster E, Rosenwald A, Berger F, Bernd HW, Hartmann S, Loddenkemper C, et al. Prognostic value of Ki-67 index, cytology, and growth pattern in mantle-cell lymphoma: results of randomized trials of the european mantle-cell lymphoma network. J Clin Oncol. 2016;34:1366–94. https://doi.org/10.1200/JCO.2015.61.8387.
29. Chen FH, Fu SY, Yang WC, Wang CC, Chiang CS, Hong JH. Combination of vessel-targeting agents and fractionated radiation therapy: the role of the SDF-1/CXCR4 pathway. Int J Radiat Oncol Biol Phys. 2013;86:777–84. https://doi.org/10.1016/j.ijrobp.2013.02.036.
30. Iwakiri S, Morita M, Takahashi T, Sodeoka M, Nagai S, Okubo K, et al. Higher expression of chemokines receptor CXCR4 predicts early and metastatic recurrence in pathological stage I non-small cell lung cancer. Cancer. 2009;115:2580–93. https://doi.org/10.1002/cncr.24281.
31. Demir IE, Mota RC. Chemokines: the (un)usual suspects in pancreatic cancer neural invasion. Nat Rev Gastroenterol Hepatol. 2020. https://doi.org/10.1038/s41575-020-0329-1.
32. Peng D, Kryczek I, Nagarsheth N, Zhao L, Wei S, Wang W, et al. Epigenetic reprogramming of TH1-type chemokines shapes tumour immunity and immunotherapy. Nature. 2015;527:49–53. https://doi.org/10.1038/nature15520.
33. Lau S, Fetzinger A, Venkiteswaran G, Wang J, Lewellis SW, Koplinski CA, et al. A negative-feedback loop maintains elevated chemokine concentrations during immune migration. Nat Cell Biol. 2020;22:268–73. https://doi.org/10.1038/s41556-020-0346-7.
34. Kwon ID, Lozada J, Zhang Z, Zeisler J, Poon R, Zhang C, et al. High-contrast CXCR4-targeted (18)F-PET imaging using a potent and selective antagonist. Mol Pharm. 2021;18:187–97. https://doi.org/10.1021/acs.molpharmaceut.0c00785.
35. Caboglu N, Yazici MS, Arun B, Broglio KR, Hortobagyi GN, Price JE, et al. CCR7 and CXCR4 as novel biomarkers predicting axillary lymph node metastasis in T1 breast cancer. Clin Cancer Res. 2005;11:5686–93. https://doi.org/10.1158/1078-0432.CCR-05-1447.
36. Beer AJ, Peeters J, Heider P, Saraste A, Reeps C, Metz S, et al. PET/CT imaging of integrin alpha(v)beta3 expression in human carotid atherosclerosis. JACC Cardiovasc Imaging. 2014;7:178–87. https://doi.org/10.1016/j.jcmg.2013.12.003.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
“Quality, food safety, a short and clean label. But also innovation, creativity and inspiration. These are the Pedon rules for good, well-made food.”
Gloria Buzzola
Quality Assurance Manager
2.1 Sustainable Innovation and Well-Being
pag. 28
2.2 Quality and Food Safety
pag. 30
2.3 Responsible Communication
pag. 36
HIGHLIGHTS
218 NEW R&D PROJECTS over the reporting period
27 EXTERNAL AUDITS for certifications and standards agreed with customers (average over the last 3 years)
SMAU INNOVATION AWARD 2023
5,350 STUDENTS INVOLVED in the educational project “On the Road with Pedon”
MATERIAL TOPICS
Sustainable Innovation
Quality and Food Safety
Responsible Communication
SUSTAINABLE DEVELOPMENT GOALS
3 GOOD HEALTH AND WELL-BEING
4 QUALITY EDUCATION
9 INDUSTRY INNOVATION AND INFRASTRUCTURE
2.1 Sustainable Innovation and Well-Being
For Pedon, food is about more than just eating. It is an essential vehicle for culture and sustainability, and an opportunity to promote a healthy diet and lifestyle.
Given the recognised nutritional value of our raw materials, innovation at Pedon is focused on making legumes and grains more accessible and tasty, adapting them to modern needs without compromising on quality and natural goodness.
Our approach draws inspiration from the flexitarian diet, driven by ethical and health concerns, which shifts the dietary balance towards plant-sourced proteins, and from the plant-based diet. These goals and assumptions provide the framework for our Research & Development Division in designing innovation processes, shaped by constant market research and monitoring. Using tools for trend analysis and the direct observation of markets and segments of inspiration, the division translates the cues it finds into new product ideas following guidelines for nutrition, taste and easy use.
The projects developed over the reporting period focused primarily on expanding the new market segment of ready meals. These efforts generated an average of 86 new items per year, mainly under our Private Label and for the development of international markets. New branded products were targeted at growth in new segments, in particular ready soups.
| NUMBER OF R&D PROJECTS DEVELOPED | FY 2021/2022 | FY 2022/2023 | FY 2023/2024 |
|----------------------------------|--------------|--------------|--------------|
| Italy | 74 | 87 | 97 |
| World | 60 | 59 | 58 |
| NPD | 65 | 79 | 74 |
| Products improvement/extensions | 9 | 8 | 23 |
The work of our Research & Development Department also involves the study of new ingredients and new varieties, both to improve the taste and nutritional value of products and as an essential part of developing new segments, in particular ready meals whose recipes are based on vegetables and spices. Over the three-year reporting period, a total of 74 new raw materials were adopted after in-depth study.
PEDON’S INNOVATIVE BREAKTHROUGHS HAVE WON NUMEROUS AWARDS, REFLECTING HOW STAKEHOLDERS APPRECIATE THE CREATIVE EFFORTS OF THE COMPANY TO TRANSFORM THE MARKET AND GENERATE VALUE FOR CONSUMERS AND THE WHOLE COMMUNITY.
**2022 AWARDS**
- Best Product Innovation 2021
Legume- and Nut-based Snacks, for “Legumi Fatti a Snack”, Grocery e Consumi Awards
- Best Product Innovation 2021
Legumes and Grains, for “Mix Pronti con Verdure”, Grocery e Consumi Awards
- Mark-up e GDO Week award
for “Legumi Fatti a Snack”
**2023 AWARDS**
- Best Product Innovation
for “Le Zuppe I Pronti Pedon”, Grocery e Consumi Awards
- Smau Innovation Award 2023
Italian Excellence Innovation Model for Businesses and Public Administrations
2.2 Quality and Food Safety
Innovation goes hand in hand with a central focus on quality and food safety management, an overriding commitment and objective for the company in its business. Quality and food safety are interconnected concepts. They also tie in closely with health, combining to form the broader concept of “food integrity”: products that are healthy, nutritious, safe, tasty, authentic, traceable and environmentally friendly.
Certifications
One of the ways in which Pedon pursues continuous improvement is through product and system certifications, obtained on both mandatory and voluntary bases. These certifications are an assurance for consumers and retailers that our products comply with food safety and quality standards. Specifically, Pedon has obtained BRCGS Food Safety and IFS Food Safety process certifications, two international schemes that are global standards for food safety, quality and legality. For both certifications, audits are carried out unannounced.
BRC STANDARD
The Brand Reputation through Compliance Global Standard (BRCGS) for Food Safety assures the quality and safety of food products through the application of quality/product management systems, HACCP, hygiene control and good manufacturing practices. Pedon has been graded AA+, the highest grade envisaged.
IFS STANDARD
The International Featured Standards (IFS) Food Standard assesses products and production processes to assure that food producers guarantee safety, authenticity and quality, in accordance with legal requirements and customer specifications. Pedon was awarded a “higher level” score.
Product certifications serve similar purposes.
- Certification of organic production and labelling of organic products pursuant to EU Regulation 2018/848, as amended.
- Various standards adopted by Pedon for the certification of gluten-free products in Italy and North America.
- Products certified as kosher for Jewish consumption by the U.S. Orthodox Union.
- Compliance with Naturland standards for organic production and processing, with social responsibility requirements at all levels.
- V-Label Vegan certification, one of the most popular ethical standards for the certification of vegetarian and vegan products.
- Products compliant with the Rainforest Alliance’s Sustainable Agriculture Standard.
- PGI (Protected Geographical Indication) designation for Castelluccio di Norcia Lentils.
The upgrading of the quality system and the numerous audits carried out provide ongoing insights for continuous improvement.
In the three-year reporting period, 44 audits were carried out by certification bodies to renew the certifications which Pedon has chosen to adopt. An additional 39 audits were carried out to verify compliance with standards agreed with customers. On top of all this is our comprehensive internal audit system. Internal audits are carried out regularly to check compliance with the System and requirements, with findings reported to company departments in order to improve the company production process and raise awareness and attention on compliance issues.
| | FY 2021/2022 | FY 2022/2023 | FY 2023/2024 |
|------------------------|--------------|--------------|--------------|
| NUMBER OF CERTIFICATION AUDITS | 14 | 14 | 16 |
| NUMBER OF CUSTOMER AUDITS | 14 | 15 | 10 |
| NUMBER OF INTERNAL AUDITS | 30 | 48 | 47 |
Quality Controls
Pedon carries out a quality control procedure on incoming raw materials and products in accordance with established standards and methods.
Quality controls are divided into four types: chemical, microbiological, organoleptic and physical. They are carried out internally by the Quality Control Department and by specialized external partners. Internal sensory analyses are also carried out on a regular basis to check that products retain their organoleptic properties over time, especially for ready meals.
Specifically in FY 2023/2024, 5,627 internal controls were carried out on incoming raw materials to check their physical properties, such as humidity levels, physical defects and the presence of any foreign bodies.
All batches of products are also subject to quality control before they are released on the market, with checks on organoleptic properties and other physical parameters.
External controls carried out by accredited partners consist of microbiological analyses, to identify any pathogenic microorganisms and their toxins or organisms such as yeasts and moulds, and chemical analyses to identify any contaminants and food residues, in particular as concerns allergens, given that some “allergen-free” products are packaged in our facilities.
In the last financial year, the following external controls were carried out:
- 3,000 Analyses on Raw Materials
- 1,000 Analyses on Products
Technologies to Improve Quality Standards
With the aim of improving standards in terms of effectiveness and efficiency, Pedon invested €400,000 over the three-year reporting period in technological solutions for upgrading quality. These investments targeted new washing systems, the digitisation of quality processes and packaging control, and new state-of-the-art laboratory instrumentation. One major improvement project involved the installation of an X-ray system for the selection and cleaning of raw materials. This advanced technology delivers outstanding removal rates for metals, stones, glass and other foreign bodies.
Quality and Food Safety Awareness Plan
A fundamental driver of continuous improvement lies in the promotion of a strong quality mindset in the company. Accordingly, Pedon has set out a Quality and Food Safety Awareness Plan, a roadmap to help build the essential mindset and skills needed to ensure the proper functioning of the processes and their key outcomes, quality and safety. The plan is a tangible expression of the organisation’s commitment to reaching standards of true excellence in food production and distribution, to guarantee the greatest quality and safety for consumers.
The plan identifies the roles involved, the frequency of assessments and the grading system used. It additionally involves interdepartmental meetings aimed at promoting cooperation and the exchange of knowledge between the different functional areas. This synergy is essential for addressing complex challenges and ensuring integrated quality and safety management. A key aspect of the plan concerns the updating and training of our people on behavioural rules and allergen risk management.
2.3 Responsible Communication
Product Labelling
Pedon protects consumers by complying strictly with European labelling requirements and rules on the environmental labelling of packaging. The company strives to ensure that every product label tells a story of clarity, integrity and all-round transparency, offering detailed information on the products and guaranteeing that all marketing communications are accurate and comprehensive.
Labelling checks involve various Departments and the following activities are carried out:
- Verification and validation of nutritional and health claims through product analysis activities
- Checks on the nutritional values reported on labels
- Cross-checks with external legal advisers on the information provided, to ensure there is no room for interpretation and ambiguity
In the reporting period, there were no recorded instances of non-compliance with product information and labelling requirements, in particular as concerns the process of issuing and processing labels, demonstrating Pedon’s constant commitment to accuracy and transparency in labelling.
In the same period, there were no recorded non-conformities in marketing communications, including advertising, promotion and sponsorships.
Customer care
Our relationship with consumers is a fundamental priority for Pedon. The company has an extensive customer care system in place, including a toll-free contact number, a website and social media channels through which reports, clarifications and complaints can be made. Efforts to reduce response times and raise the quality of customer care continued throughout the reporting period.
The analysis shows that in the last year the number of complaints as a percentage of the number of items sold dropped, confirming the effectiveness of our approach to the constant improvement of products and processes. At the same time, the number of requests for information similarly shows a falling trend, thanks to communication initiatives on the Pedon website concerning product availability and deliveries.
| | FY 2021/2022 | FY 2022/2023 | FY 2023/2024 |
|------------------------|--------------|--------------|--------------|
| INFORMATION REQUESTS | 1,163 | 701 | 576 |
| RECALLS (% of total quantities sold) | 0.009% | 0.005% | 0.004% |
In support of innovation, and to complement the information provided on packaging, Pedon has chosen to develop educational programmes for its stakeholders. These initiatives aim to raise awareness among consumers and employees of the importance of a sustainable and healthy diet, promoting greater knowledge of the benefits of plant foods and encouraging a healthy and responsible lifestyle.
Children are naturally curious, showing great imagination and enthusiasm in learning. That’s why Pedon, which has made curiosity one of its foundational values, targets children in promoting a healthy, balanced diet.
**On the Road with Pedon**
“On the Road with Pedon” is a company initiative for young children to explore and learn about nutrition. The project has proved a resounding success, attracting great approval and enthusiasm. The food education workshop is designed for local primary schools and engages kids in fun learning activities that provide an opportunity for them to learn more about the world of grains, legumes and seeds and discover their incredible and surprising properties. A survey of school principals and teachers found that 91.7% rated the experience as excellent and the learning content as very good/excellent.
| | FY 2021/2022 | FY 2022/2023 | FY 2023/2024* |
|---------------------------|--------------|--------------|---------------|
| TOTAL PUPILS INVOLVED | 1,250 | 1,800 | 2,300 |
* Of the total workshops planned, 80% have been delivered, with the remaining number scheduled for the first term of the 2024/2025 school year.
**World Legumes Day at the Children’s Museum**
On 10 February, Pedon celebrated World Legumes Day with a special event at the Children’s Museum in Verona, telling the story of legumes in a fun and engaging way for kids and their families. This unique museum is designed to encourage children to explore the world through experiments, practical tests and tactile activities.
**The “Good to Know” Blog**
“Good to Know” is a section of the Pedon website dedicated to food education and the promotion of a healthy lifestyle. The blog is a channel for Pedon to share information about the benefits and nutritional properties of legumes, grains and seeds. It features articles explaining how these foods can contribute to daily wellbeing and practical tips and recipes to incorporate the ingredients easily into our daily diet.
**In-House Nutritionist**
As part of its commitment to promoting a healthy, balanced diet among employees as well, Pedon has engaged a professional nutritionist, who regularly visits the organisation to talk about nutrition issues and healthy eating habits.
Suicide can be prevented. Most suicidal people do not want to die, they just do not want to live with the pain they are feeling. Helping a suicidal person talk about their thoughts and feelings can help save a life. Do not underestimate your abilities to help a suicidal person, even to save a life.
How can I tell if someone is feeling suicidal?
A suicidal person may not ask for help directly, but they are likely to show certain warning signs. It is really important that you are able to recognise some of the warning signs for suicide.
Signs a person might be suicidal
A person may show a big change in mood, behaviour or appearance, for example:
• Expressing, in words or actions:
- hopelessness or feeling that their life is worthless
- having no reason to live or no purpose in life
- no interest in or plans for the future
- fear of being involuntarily removed or returned to home country, especially if there is a risk of torture or death
- strong sense of feeling alone and cut off, even if surrounded by family or friends
- distress about intrusive memories of past traumatic events
- feeling that their life has been a failure and they would have been better off in their home country
- feelings of guilt or shame, or belief of being a burden to others (e.g. saying “others will be better off without me”).
• Withdrawing from friends, family or the community.
• Suddenly becoming very sad, or a person who is already sad becoming much more depressed.
A person may threaten to hurt or kill themselves, or say that they wish to die, verbally (speaking) or in writing. This may be direct but sometimes is subtle and not obvious. Watch for:
- Talking or writing about death, dying or suicide (including making unexpected jokes about these topics).
- Looking for a way to kill themselves (e.g. trying to get pills or poisons, weapons or other means), including asking for information about possible suicide methods (e.g. ‘will a bottle of this medicine kill me?’). Be aware that people may use different methods to carry out suicide, so pay attention to the presence of any sort of things that could be used for suicide (e.g. sharp objects, poisons - such as pesticides and seeds, and kerosene).
- Saying that they want to disappear.
- Expressing in words or actions:
- they feel trapped, there is no way out or that suicide is the only solution to their problems
- the desire or hope they will die (including praying that God may take their life)
- they feel that death is an honourable solution to their situation.
A person may behave in ways that are life-threatening or dangerous, for example:
- Harming themselves by cutting, taking poison, hitting their head against the wall or other method.
- Stopping life-saving medical treatments or medications.
A person may try to set their affairs and relationships in order, for example:
- Giving away valued possessions.
- Asking others to take on responsibility for the care of people or pets.
- Contacting people (e.g. family members or other people they have not spoken to in a long time) to say goodbye, make peace or ask for forgiveness.
People may show one or many of these signs, and some may not show any signs on this list. Warning signs for suicide may also be different among cultures or their expressions might vary.
If you have noticed some of these warning signs and you are concerned a person may be at risk of suicide, you need to talk to them about your concerns. If you are not sure whether what you have noticed is a reason to be alarmed, you could ask someone who knows the person better than you whether they are worried too.
Getting ready to approach the person
Be aware of your own attitudes, think about how you feel about suicide and how this will impact on your ability to help (e.g. belief that suicide is wrong or that it is an acceptable option). If the person is from a different cultural or religious background to you, remember that they might have beliefs and attitudes about suicide that are different from your own. So it may help to learn more about the common traditional, religious or spiritual beliefs about suicide amongst the people who you have frequent interactions with.
Choose a private place to talk with the person and allow enough time to talk about your concerns.
If you feel you are unable to ask the person about suicidal thoughts, find someone else who can.
Making the approach
Act quickly if you think someone is considering suicide. Even if you only have a mild suspicion that the person is having suicidal thoughts, you should still approach them.
Tell the person you are concerned about them and describe the behaviours that are worrying you. Give them time to talk about their negative feelings before asking about suicidal thoughts. Be aware that the person may not want to talk with you, or you might have difficulty connecting with them. If this happens, you should offer to help them find someone else to talk to.
If the person has issues with their visa, and is unwilling to talk with you for fear of deportation, reassure them that you care about them and not about their immigration situation.
Asking about thoughts of suicide
Anyone could have thoughts of suicide. If you think someone might be having suicidal thoughts, you should ask that person directly. Unless someone tells you, the only way to know if they are thinking about suicide is to ask. For example, you could ask:
“Are you having thoughts of suicide?” or “Are you thinking about killing yourself?”
Be mindful of how you ask someone – the words you use are very important. You should not ask about suicide in a judgmental way, for example don’t say “You’re not thinking of doing anything stupid, are you?”.
See Box 1 Dealing with communication difficulties related to culture or language
Sometimes people don’t want to ask directly about suicide because they think they will put the idea into the person’s head. This is not true. If a person is suicidal, asking them about suicidal thoughts will not increase the risk that they will do it. Instead, asking the person about suicidal thoughts will give them the chance to talk about their problems and show them that somebody cares.
Box 1
Dealing with communication difficulties related to culture or language
It is more important to genuinely want to help than to be of the same age, gender or cultural background as the person. However, if you think the person is uncomfortable interacting with you due to differences in age group, gender, religion, ethnic or cultural backgrounds, or if you are uncomfortable for similar reasons, you should ask the person if they would prefer to talk to someone more like themselves.
Consider also asking leaders from the person’s cultural, religious or spiritual group about any important aspects of gender roles and expectations that might help you to provide immediate assistance to the person. Be aware that in some cultures, males may be less likely to express their emotions, or females may be expected to protect the family’s and husband’s names. These can act as barriers to opening up and disclosing suicidal intentions. However, these barriers should not stop you from trying to talk to the person if you have concerns.
If the person is having trouble communicating in your language, you should speak slowly, use simple words, check for understanding and, if necessary, repeat what you have said. You could also ask them if there is someone who could be contacted to help with communication. If no informal or other face-to-face interpreter services are available, you should use available telephone and online interpreting services.
Even though it is common to feel panic or shock when someone says they are thinking about suicide, it is important to not react negatively, e.g. show judgment, shock, panic or anger. Do your best to appear calm, confident and empathic, as this may have a reassuring effect on the suicidal person.
**How should I talk with someone who is suicidal?**
Tell the person that you care and want to help, and that you do not want them to die. It is more important to show you really care than to say ‘all the right things’. Do not let the fear of saying the wrong words or of not saying the perfect words stop you from encouraging the person to talk.
Ask about and remember the person’s traditional, spiritual and religious beliefs when talking with them. Find out how acceptable suicide is in their culture or religion. Remember that in cultures where suicide is more acceptable, the risk of acting on suicidal thoughts may be increased. In cultures where suicide is not openly talked about, a person might find it more difficult to tell someone about their suicidal intentions.
Be supportive and understanding of the person, and listen to them with all your attention. Suicidal thoughts are often an appeal for help and a desperate attempt to escape from problems and distressing feelings. You should give the person a chance to talk about those feelings and their reasons for wanting to die.
Ask the suicidal person what they are thinking and feeling. Tell them that you want to hear whatever they have to say. Let the person know it is okay to talk about things that might be painful, even if it is hard. Recognise and be understanding and respectful of the suffering of the person. Give them a chance to express their thoughts and feelings (e.g. allow them to cry, express anger or scream), explain their reasons for wanting to die, and acknowledge these (e.g. show you are listening). A person may feel better because they have told someone what they are thinking and feeling. If the person is a teenager and they are worried they may get into trouble for sharing their thoughts or feelings, you should reassure them that this will not happen.
Remember to thank the suicidal person for sharing their feelings with you and talk about the courage it takes to do this.
See Box 2 **Listening tips** for tips on how to listen effectively and Box 3 **What not to do**
**Box 2**
**Listening tips**
- Be patient and remain calm and in control while the suicidal person is talking about their feelings.
- Encourage the person to do most of the talking.
- Listen to the suicidal person without expressing judgment. Accept what they are saying without agreeing or disagreeing with what they are doing or thinking.
- Find out more about the suicidal thoughts and feelings and the problems behind them by asking open questions that cannot be answered with a simple ‘yes’ or ‘no’.
- Keep in mind that asking too many questions can bring on anxiety (nervousness, fear) in the person. If it seems like an interrogation, the person might withdraw from the conversation.
- Show you are listening by repeating back to the person what they are saying.
- Clarify important points with the person so that you are sure you have understood them.
- Express empathy for the suicidal person.
Box 3
What not to do
Don’t
- argue with the person about their thoughts of suicide
- debate with the person whether suicide is right or wrong
- use guilt or threats to prevent suicide (e.g. do not tell the person they will go to hell or ruin other people’s lives if they die by suicide)
- dismiss the suicidal person’s problems, or compare their problems to someone else’s
- give simple reassurances such as ‘don’t worry’, ‘cheer up’, ‘you have everything going for you’ or ‘everything will be alright’
- interrupt with stories of your own
- show you are not interested or show a negative attitude through your body language
- challenge the person to carry out their threats by daring them or telling them to ‘just do it’
- try to diagnose a mental health problem
- try to take control and be directive, unless the person is at immediate risk.
Do not avoid using the word ‘suicide’. It is important to discuss the issue directly, without fear or negative judgment. Speak about suicide using appropriate language (e.g. the terms ‘suicide’ or ‘die by suicide’) and avoid terms that promote negative attitudes, such as ‘commit suicide’ (implying it is a crime or sin) or describing past suicide attempts as having ‘failed’ or been ‘unsuccessful’ (implying that death would have been a positive outcome).
How can I tell how urgent the situation is?
Take all thoughts of suicide seriously and take action. Do not dismiss the person’s thoughts as ‘attention seeking’ or a ‘cry for help’. Determine the urgency of taking action based on identifying suicide warning signs, including the number and nature of warning signs, and major risk factors and reasons for suicide (e.g. recent stressful event, mental illness, previous suicide attempt or family history of suicide).
Determine whether someone has definite intentions to take their life, or whether they have been having more unclear suicidal thoughts, like “what’s the point of going on?”.
To do this, ask the suicidal person about issues that affect their immediate safety:
- Whether they have a plan for suicide.
- Whether they have already taken steps to get what they need to end their life.
- Whether they have ever attempted or planned suicide in the past.
- Whether they think that they have received any signs or instructions to kill themselves, such as from spirits or ancestors.
- Whether there have been changes in their employment or schooling, social life or family.
- Whether there has been a change in their spiritual or religious beliefs (e.g. an increase or decrease in prayer or church attendance).
Ask the person if they have been using drugs or alcohol. Intoxication (getting drunk or high on drugs) can increase the risk of a person acting on suicidal thoughts. If the person appears intoxicated but says they have not used alcohol or other drugs, ask if they have taken any special herbs, teas or other substances in religious or traditional rituals, as some of these can have intoxicating or hallucinogenic (mind altering) effects.
If the suicidal person says they are hearing voices, ask what the voices are telling them. This is important in case the voices are relevant to their current suicidal thoughts.
Ask the person how they would like to be supported and if there is anything you can do to help, but do not try to take on their responsibilities. It is also useful to find out what has supported the person in the past and what supports are available to them:
- Have they told anyone about how they are feeling?
- Are there people they can turn to when they need help or support?
- Is there anything important in the person’s life that may reduce the immediate risk of suicide (e.g. attachments to children)?
- Have they received help for emotional or mental health problems or are they taking any medication? Keep in mind that the person may not share your understanding of what ‘mental health’ or ‘mental illness’ mean, and feel that they are very negative terms.
If the person is a teenager, they may feel more comfortable getting help if it is less likely to be reported to their family. Ask if they have a supportive friend from outside their culture or community they prefer to contact. If a woman’s husband or other male relative is making it difficult for you to provide assistance, try to get someone who the family respects and trusts to help with the situation.
Remember that those at the highest risk for acting on thoughts of suicide in the near future are those who have a specific suicide plan (i.e. the means, a place, a time and an intention to do it). However, the lack of a plan for suicide is not a guarantee of safety. Also, if the person states they are not suicidal but displays many warning signs, you should still take action to make sure they are safe.
How can I keep the person safe?
Once you have established that a suicide risk is present, you need to take action to keep the person safe. A person who is suicidal should not be left on their own. If you think there is an immediate risk of the person acting on suicidal thoughts, act quickly, even if you are unsure. Work together with the person to ensure they are safe, instead of acting alone to prevent suicide.
Suicidal people often believe they have no choice but to die by suicide. Remind them that suicidal thoughts don’t have to be acted on, and that even though these thoughts may feel like they will never go away, they are usually temporary. Encourage the person to talk about their reasons for dying and their reasons for living. Acknowledge that they are considering both options and emphasise that living is a real option for them.
Ask about the problems the person is facing and how you can help. Reassure them that there are solutions to problems or ways of coping instead of suicide. By talking about specific problems, you can help the person to feel hope that there are ways of dealing with the difficulties that seem never ending. If you are willing and able, offer to help the person with tasks to address these difficulties, but do not offer false hope or make unrealistic promises.
When talking to the suicidal person, focus on the things that will keep them safe for now, rather than the things that put them at risk. Talk about the ‘good things’ in a person’s life, their hopes for the future, and other reasons to live. Encourage the person to think about their personal strengths and qualities, and the positive things in their life. Consider and use the person’s belief systems and values, including their spiritual and religious beliefs, to encourage them to change their mind about suicide (but do not use guilt or threats). Encourage the person to take part in an activity that they have found has helped them cope in the past or that they enjoy. For instance, if they engage in religious, spiritual or traditional practices, such as reading religious texts, praying, meditating or chanting, you should encourage them to do this.
Make sure that potentially harmful items are not available to the suicidal person. Remove access to them, after you gain their trust and if it is safe to do so. Be aware that the means by which they intend to end their life may not be obvious, and methods vary, so you should ask the person how they plan to carry out the act. If the person is intoxicated (drunk or high on drugs), limit their access to alcohol or other drugs (e.g. by asking them to put the substances away or throw them away). Make sure they are not left alone until the alcohol or drugs have worn off, even if they say they are not suicidal.
Work out a plan to help keep the suicidal person safe (See Box 4 Safety plan). Involve the person as much as possible in decisions about the plan. However, do not assume that a safety plan is enough to keep the suicidal person safe. Be aware that depending on their cultural background, the person may agree to keep safe or do any other action you suggest just to be polite.
Although you can offer support, you are not responsible for the actions or behaviours of someone else, and cannot control what they might decide to do.
Box 4
Safety plan
A safety plan is an agreement between the suicidal person and the first aider that involves actions to keep the person safe. If the person agrees, you should involve someone the person trusts in developing the safety plan. This might be a friend, family member, or a religious or spiritual leader. Work with the suicidal person to create plans to ensure their safety for the next 24, 48 and 72 hours.
The safety plan should:
- Focus as far as possible on what the suicidal person should do rather than what they should not.
- Clearly outline what will be done, who will be doing it and when it will be carried out.
- Include a list of contact numbers that the person agrees to call if they are feeling suicidal (e.g. the person’s doctor or mental health care professional, a suicide helpline or 24 hour crisis line, and friends and family members who will help in an emergency).
- Make sure the person knows how to access the safety contacts provided to help them (i.e. what will happen when they call the phone number?). The contact numbers should be kept somewhere accessible to the person. They should be available in the person’s main spoken language or instructions for seeking an interpreter should be available, if this is possible.
If the person won’t make a safety plan, it is not safe to leave them alone for any period of time; make sure someone stays close to the person (in the same room) and get outside help immediately. You must find out whether the person will be alone or whether there are family members or friends who can provide support.
If the person is psychotic (for instance, seems confused and not in touch with reality), they may not be able to agree to a safety plan and you should involve mental health services urgently.
What about professional and other help?
Reassure the person by letting them know that we all go through tough times and need support and that reaching out for help is the first step to feeling better.
Assure the person that there is support available and that you will help them to access it. Ask the person if they would like you to contact someone for them such as a friend, family member or trusted religious, spiritual or community leader.
Encourage the person to get suitable professional help as soon as possible. Find out about the resources and services available to help a person who is considering suicide, including hospitals, mental health clinics, mobile outreach crisis teams, suicide prevention helplines and local emergency services.
Find out about local services for people from immigrant and refugee backgrounds, such as:
- transcultural mental health services
- services for survivors of torture and trauma
- culturally appropriate services for women, such as women’s counselling centres and refuges
- culturally appropriate services that are responsive to people from the LGBTIQ (lesbian, gay, bisexual, transgender, intersex, questioning) community.
Provide this information to the suicidal person and discuss help-seeking options with them.
Ask the person for permission to contact their regular doctor or mental health professional about your concerns. If possible, the health professional contacted should be someone the suicidal person already knows and trusts. Otherwise, call a mental health centre or crisis telephone line and ask for advice on the situation. If the person does not want to talk to someone face-to-face, encourage them to call a suicide helpline.
Be aware that some people from immigrant and refugee backgrounds fear and distrust emergency services, statutory bodies and others in positions of power. You may need to reassure the person before contacting or directing them towards these services.
Don’t assume that the person will get better without help or that they will seek help on their own. People who are feeling suicidal might not ask for help for many reasons, including stigma, shame and a belief that their situation is hopeless and that nothing can help.
What if the suicidal person is unwilling or refuses to seek help?
You should be patient and persistent in encouraging them to get help. Try to find out why they are reluctant to seek help. A person who has had previous negative experiences (including experiences in other countries) may not want to accept that type of help again. Try to offer other options.
If you are afraid that the person is going to act on their thoughts of suicide, or they refuse to hand over the things with which they intend to kill themselves, you should contact a mental health professional or doctor to explain what is happening and ask for advice or instructions. If you are talking with a person about suicide on the phone, you should contact emergency services.
Be aware that females from some cultural backgrounds may not be permitted to make decisions about their health alone and this can stop some people from accepting help.
Make sure someone who is close to the suicidal person is aware of the situation (i.e. a close friend or family member) and if the person has not done so yet, ask them to agree to contact a specific person within a specific timeframe. If you are assisting a woman who is in an arranged marriage, it may not be appropriate to involve the family and you may need to discuss alternative supports.
If the suicidal person is a teenager, it is very important to ensure that the person receives help from a health professional, support group or a relevant community organisation. If you are unable to persuade them to get help, you should get assistance from someone they trust, such as a helpline or a mental health professional.
Be prepared that the person may get angry and feel betrayed by your attempt to prevent their suicide or to help them get professional help, but try not to take personally any hurtful actions or words of the person.
What if the suicidal person has a weapon?
You will need to contact the police. Tell the police that the person is suicidal. This will help them to respond appropriately. If needed, explain that the person is from an immigrant or refugee background and may distrust police. Explain to the suicidal person that you are contacting the police because they can offer immediate help and safety (and not because you think the person is a criminal). Make sure you do not put yourself in any danger.
What should I do if the person has acted on suicidal thoughts?
If the suicidal person has already harmed themselves, give them first aid, call emergency services and ask for an ambulance. You should get a quicker response from the emergency services if you tell them that the person has attempted suicide and describe what they have done.
Remember, despite our best efforts, it is not always possible to prevent suicide.
Self-injury for reasons other than suicide
Never assume that a person who self-harms is suicidal, as some people injure themselves for reasons other than suicide. If you are unsure whether injuries are due to a suicide attempt, you should ask the person directly.
For some people, self-injury is intended to relieve unbearable anguish or pain, to stop feeling numb, or to serve other emotional needs. This can be distressing to see. There are First aid guidelines for non-suicidal self-injury (https://mhfa.com.au/sites/mhfa.com.au/files/MHFA_selfinjury_guidelinesA4%202014%20Revised_1.pdf) which, although not developed specifically for people from culturally and linguistically diverse backgrounds, can help you to understand and assist if this is occurring.
Self-inflicted injuries may also be the result of religious or traditional practices. However, you should not make any assumptions that this is the case, because these behaviours may be an important warning sign for suicide.
Take care of yourself
After helping someone who is suicidal, make sure you take appropriate self-care. Providing support and assistance to a suicidal person can be exhausting and it is therefore important to take care of yourself.
AN IMPORTANT NOTE
Purpose of these Guidelines
These guidelines are designed to help members of the public provide first aid to someone from an immigrant or refugee background who is at risk of suicide. The role of the first aider is to assist the person until appropriate professional help is received or the crisis resolves.
Development of the Guidelines
The guidelines are based on the expert opinions of a panel of individuals with lived and/or professional experience with mental health and suicide prevention from several countries about how to help someone who may be at risk of suicide. The methodology was based on Ross AM, Jorm AF, Kelly CM. (2014). Re-development of mental health first aid guidelines for suicidal ideation and behaviour: A Delphi study (https://mhfa.com.au/sites/mhfa.com.au/files/MHFA_suicide_guidelinesA4%202014%20Revised.pdf).
How to use these Guidelines
These guidelines provide general advice about how to help someone from an immigrant or refugee background who may be at risk of suicide. Each individual is unique and it is important to tailor support to what the person needs. These guidelines therefore may not be appropriate for every person who could be at risk of suicide. It is recommended that first aiders working with people from specific ethnocultural backgrounds consult with members of these communities to identify ways to apply these guidelines.
First aiders should also consider learning more about cultural responsiveness. These resources are available through MHiMA and other transcultural mental health and refugee support agencies.
More resources about how to discuss suicide are available at www.conversationsmatter.com.au. Although not developed specifically for people from immigrant and refugee backgrounds, first aiders may find them useful.
The development of these guidelines was funded by the Commonwealth Department of Health through the Mental Health in Multicultural Australia project (MHiMA) (www.mhima.org.au)
Although these guidelines are copyrighted, they can be freely reproduced, made available online or electronically, for non-profit purposes provided the source is acknowledged.
More information:
Enquiries about the development of the guidelines and related training should be sent to:
Dr Erminia Colucci,
Global and Cultural Mental Health Unit,
Centre for Mental Health,
School of Population and Global Health,
The University of Melbourne.
firstname.lastname@example.org
For general inquiries:
National Coordination Unit
First floor, 519 Kessels Road,
Macgregor QLD, 4109
P O Box 6623,
Upper Mt Gravatt, QLD, Australia 4122
Phone: 1300 136 289
Email: email@example.com
These guidelines can be downloaded from MHiMA (www.mhima.org.au) and GCMHU websites (cimh.unimelb.edu.au). All Mental Health First Aid guidelines can be downloaded from www.mhfa.com.au.
Cite these guidelines as:
Colucci E, Jorm AF, Kelly CM, Too LS, Minas H (2016). Suicide First Aid Guidelines for People from Immigrant and Refugee Backgrounds. Melbourne: Mental Health in Multicultural Australia; Global and Cultural Mental Health Unit, Centre for Mental Health, Melbourne School of Population and Global Health, The University of Melbourne; and Mental Health First Aid Australia.
Acknowledgements
MHiMA would like to acknowledge and thank the Global and Cultural Mental Health Unit, Centre for Mental Health, Melbourne School of Population and Global Health, The University of Melbourne as the Consortium member who researched, developed and produced the Suicide First Aid Guidelines, in collaboration with Mental Health First Aid Australia. MHiMA would like to thank the panel of experts with lived and/or professional experience of suicide, the MHiMA Consumer and Carer Working Groups and MHiMA State and Territory Reference Group for their guidance and contribution to the development of the Guidelines.
Thank you to Suzanne Britt for plain English editing.
Design by Kathryn Junor. |
EUROPEAN PATENT SPECIFICATION
Date of publication and mention of the grant of the patent:
01.05.2013 Bulletin 2013/18
Application number: 05790878.2
Date of filing: 14.10.2005
Int Cl.:
B64D 11/00 (2006.01)
B60N 2/48 (2006.01)
International application number:
PCT/SG2005/000355
International publication number:
WO 2006/041417 (20.04.2006 Gazette 2006/16)
AIRCRAFT PASSENGER SEAT WITH A DISPLAY MONITOR INCLUDING A READING LIGHT
FLUGGASTSITZ MIT EINEM EINE LESELEUCHTE ENTHALTENDEN ANZEIGEMONITOR
SIEGE POUR PASSAGERS D'UN AVION AVEC ECRAN COMPRENANT UN ECLAIRAGE DE LECTURE
Designated Contracting States:
DE FR GB IT
Priority: 14.10.2004 AU 2004905972
Date of publication of application:
01.08.2007 Bulletin 2007/31
Proprietor: Singapore Airlines Limited
Singapore 819829 (SG)
Inventor: LING, Sieak Chern
8-20 WhiteWater, Singapore 819829 (SG)
Representative: Rondano, Davide et al
Corso Emilia 8
10152 Torino (IT)
References cited:
DE-C1- 3 421 547 US-A- 5 507 556
US-A1- 2001 002 092 US-B2- 6 554 437
US-E1- R E33 423
Note: Within nine months of the publication of the mention of the grant of the European patent in the European Patent Bulletin, any person may give notice to the European Patent Office of opposition to that patent, in accordance with the Implementing Regulations. Notice of opposition shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).
Description
[0001] The present invention relates to personal reading lights for passengers in aircraft.
[0002] The present invention also relates to aircraft that include personal reading lights for passengers.
[0003] It is known to provide personal reading lights for passengers in aircraft.
[0004] One widely used type of passenger reading light on commercially operating aircraft is mounted on the roof of a passenger cabin of the aircraft, with a separate light for each passenger seat that can be operated by a passenger in the seat via a switch on the arm rest of the seat.
[0005] One disadvantage of roof-mounted reading lights arises from the fact that the lights are spaced well away from passengers. Consequently, the reading lights are clearly visible to other passengers and can be an inconvenience to them.
[0006] Another disadvantage of roof-mounted reading lights is that, notwithstanding attempts to confine the area that is illuminated by each light to the particular passenger seat associated with the light, invariably the lights illuminate a wider area than the particular passenger seats and can be an inconvenience to other passengers in the immediate vicinity of the light.
[0007] One other known, more personalised, type of reading light than the reading light described in the preceding paragraphs that is also used on commercially operating aircraft is in the form of a light that is mounted on the end of a flexible arm that can be bent by a passenger from an inoperative position to one side and behind the shoulder of the passenger, typically in the space between adjacent passenger seats, to an operative position in which the arm directs light downwardly from above the shoulder of the passenger in a required direction.
[0008] This type of reading light is less visible to passengers in an aircraft cabin in general and has less impact on passengers in the immediate vicinity of the light in particular.
[0009] However, this type of reading light tends to be a low intensity light only and thus relies on being very close to a passenger in order to be useful. Thus, this type of reading light is of little, if any, benefit in other situations, for example, when a passenger requires light to be projected onto a tray table.
[0010] One other known type of reading light for passengers that is disclosed in the patent literature is mounted to a back of a passenger seat and is arranged to direct light rearwardly into a space in front of a passenger who is seated in a passenger seat immediately behind the seat back.
[0011] This type of reading light is close to the passenger and is not generally visible to other passengers in an aircraft cabin and is well placed to illuminate a tray table. In addition, the light can illuminate a confined area that enables the passenger to read comfortably but without inconveniencing passengers in the immediate vicinity of the light, such as in adjacent seats.
[0012] However, the proposals for this type of reading light in the patent literature are confined to mounting the reading light directly to seat backs and the light is not adjustable, and therefore the light from the reading light is not always projected into areas that are required by different passengers. Belgian patent application 868863 in the name of Vogel I. GmbH is an example of this type of reading light. The closest prior art is shown in DE 3421547.
[0013] The present invention provides an improved personalised reading light and an aircraft that includes the reading light.
[0014] The personalised reading light of the present invention is of the above-described type that is mounted to the back of a passenger seat, which can be described as a "front seat", and is arranged to direct light rearwardly into a space in front of a passenger who is seated in a passenger seat immediately behind the front seat, which can be described as a "rear seat". In particular, the reading light of the present invention forms part of a personalised visual display monitor assembly that is mounted to the back of the front seat.
[0015] In general terms, the present invention provides an aircraft that includes an aircraft passenger cabin, a plurality of passenger seats arranged in rows in the cabin, with at least one passenger seat including a personalized visual display monitor assembly mounted to a back of the seat for use by a passenger when the passenger is seated in a seat immediately behind the passenger seat, and with the visual display monitor assembly including a personalised reading light for the passenger facing the seat back.
[0016] In particular, in accordance with the present invention, the reading light is adjustable so that the direction of light from the reading light can be adjusted as required depending on circumstances, such as the required viewing area for the passenger facing the seat back and the angle of inclination of the front seat.
[0017] Preferably the reading light is adapted to direct light rearwardly from the seat into a space immediately in front of the passenger facing the seat back.
[0018] Preferably the reading light is mounted so that the direction of light from the reading light can be adjusted upwardly and downwardly from the visual display monitor assembly about a horizontal axis within a range of angle.
[0019] Preferably the reading light is adjustable with respect to other components of the visual display monitor assembly, thereby to facilitate adjustment of the direction of light from the reading light.
[0020] Preferably the visual display monitor assembly is mounted to the back of the seat so that the assembly is adjustable with respect to the seat, thereby to facilitate adjustment of the direction of light from the reading light. The adjustable mounting of the assembly to the seat means that the direction of light from the reading light can be adjusted over a wider range of angles than in a situation in which the assembly is fixed with respect to the seat.
Preferably the visual display monitor assembly is mounted for pivotal movement to the seat and the position of the assembly with respect to the seat can be pivoted between a first retracted position in which the assembly is generally parallel to the back of the seat and a second pivoted position in which the assembly is at an angle with respect to the back of the seat.
The reading light may be of any suitable type.
Preferably the reading light includes a plurality of light emitting diodes.
More preferably the reading light includes a plurality of light emitting diodes arranged in a line.
Preferably the reading light includes a tubular housing for the line of light emitting diodes and the housing is mounted to the back of the seat for rotation about the lengthwise extending axis of the tube, thereby to facilitate adjustment of the direction of light from the reading light.
Preferably the reading light includes a mask that is adapted to confine the light projected by the light to a well-defined beam. In effect, the mask blocks stray light from being projected beyond defined boundaries.
Preferably the reading light includes an image directing film to shift the light focus in any desired direction. An example of an image directing film is a 3M product called Vikuity (Trade Mark) image directing film. It is a 0.15 mm thick film that redirects any image or light source by 20°.
Preferably the reading light includes a lens that collects wasted light that is projected off the sides and redirects the light onto a desired surface. This feature solves two problems, namely reducing stray light and increasing the brightness on a desired surface.
Preferably the visual display monitor assembly includes a liquid crystal display monitor assembly.
Preferably the visual display monitor assembly and the reading light are powered by a common electrical circuit.
In addition, in general terms, the present invention provides a passenger seat for an aircraft that includes a personalized visual display monitor assembly mounted to a back of the seat for use by a passenger when the passenger is seated in a seat immediately behind the passenger seat, the visual display monitor assembly including a personalised reading light for the passenger facing the seat back.
In addition, in particular, in accordance with the present invention, the reading light is adjustable so that the direction of light from the reading light can be adjusted as required depending on circumstances, such as the required viewing area for the passenger facing the seat back and the angle of inclination of the seat.
Preferably the reading light is adapted to direct light rearwardly from the passenger seat into a space immediately in front of the passenger facing the seat back.
Preferably the reading light is mounted so that the direction of light from the reading light can be adjusted upwardly and downwardly about a horizontal axis within a range of angles.
Preferably the reading light is adjustable with respect to other components of the visual display monitor assembly, thereby to facilitate adjustment of the direction of light from the reading light.
Preferably the visual display monitor assembly is mounted to the back of the seat so that the assembly is adjustable with respect to the seat, thereby to facilitate adjustment of the direction of light from the reading light. The adjustable mounting of the assembly to the seat means that the direction of light from the reading light can be adjusted over a wider range of angles than in a situation in which the assembly is fixed with respect to the seat.
Preferably the visual display monitor assembly is mounted for pivotal movement to the seat and the position of the assembly with respect to the seat can be pivoted between a first retracted position in which the assembly is generally parallel to the back of the seat and a second pivoted position in which the assembly is at an angle with respect to the back of the seat.
The reading light may be of any suitable type.
Preferably the reading light includes a plurality of light emitting diodes.
More preferably the reading light includes a plurality of light emitting diodes arranged in a line.
Preferably the reading light includes a tubular housing for the line of light emitting diodes and the housing is mounted to the back of the seat for rotation about the lengthwise extending axis of the tube, thereby to facilitate adjustment of the direction of light from the reading light.
Preferably the reading light includes a mask that is adapted to confine the light projected by the light to a well-defined beam.
Preferably the reading light includes an image directing film to shift the light focus in any desired direction.
Preferably the reading light includes a lens that collects wasted light that is projected off the sides and redirects the light onto a desired surface.
Preferably the visual display monitor assembly includes a liquid crystal display monitor assembly.
Preferably the visual display monitor assembly and the reading light are powered by a common electrical circuit.
The present invention is described further with reference to the accompanying drawings, of which:
Figure 1 is a perspective view of the back of a row of three aircraft seats in accordance with one embodiment of the present invention;
Figure 2 is a side elevation of an upper part of the middle seat shown in Figure 1 that illustrates in more detail the visual display monitor that forms part of the aircraft seats shown in Figure 1, with the monitor in an extended position;
Figure 3 is a front view of the visual display monitor shown in Figures 1 and 2;
Figure 4 is an underside view of the visual display monitor shown in Figures 1 to 3, with the monitor in a retracted position; and
Figure 5 is a side view of the visual display monitor shown in Figure 4 in the retracted position.
Figure 1 shows a row of three aircraft passenger seats 3. The seats 3 form part of a cabin of an aircraft. The aircraft and the aircraft cabin can be of any configuration. The seats 3 are standard aircraft seats in the sense that the backs 7 of the seats 3 are adjustable between an upright position (the two outer seats) and a fully reclined position (the central seat).
[0048] One feature of the seats 3, which is a relatively recent innovation for aircraft passenger seats but is being used increasingly on aircraft, is that visual display monitors in the form of liquid crystal display monitor assemblies, generally identified by the numeral 5, are mounted to the backs 7 of the seats 3.
[0049] Each liquid crystal display monitor assembly 5 is coupled to the in-flight entertainment system of the aircraft and includes a monitor 9 for viewing movies and playing video games, etc. by selection of the passenger.
[0050] An upper end of each assembly 5 is pivotally mounted to the seat 3 so that the assembly 5 can be swung between a retracted position in which the assembly is flush with the seat back 7, as illustrated by the two outer seats 3 in Figure 1, and an extended position in which the assembly is at an angle with respect to the seat back 7, as illustrated by the middle seat 3 in Figure 1 and in Figure 2.
[0051] The pivot mechanism between each assembly 5 and the seat 3 may be of any suitable type.
[0052] In the arrangement shown in the Figures, the seat back 7 includes a pair of support plates 21 mounted to the seat back 7 on opposite sides of the assembly 5, and each plate 21 includes a curved opening 19. In addition, the assembly 5 includes a U-shaped support bracket 13 that has sides 15, and each side 15 has an outwardly extending pin 17. The support plates 21 and the sides 15 of the bracket 13 are positioned so that the pins 17 extend into the curved openings 19. The openings 19 define guide channels for the pins 17. It can readily be appreciated that this arrangement supports the assembly 5 for pivoting movement, with the extent of the movement being limited by the arc of the curved openings 19.
[0053] In the Figures the curved openings 19 permit each assembly 5 to pivot 30° with respect to the seat back 7. This pivot range is shown by the arc “α” in Figure 1.
[0054] In accordance with the embodiment of the present invention shown in the Figures, each assembly 5 further includes a reading light, generally identified by the numeral 11, mounted to a lower end of the monitor 9.
[0055] Specifically, each reading light 11 includes a line of light emitting diodes 31 positioned in a transparent tube 33.
[0056] The light emitting diodes 31 are powered by the same electrical circuit that powers the monitor 9.
[0057] The tube 33 houses and supports the diodes 31. In particular, the tube 33 is mounted to the monitor 9 to rotate about the axis of the tube 33 and thereby selectively direct the light with respect to the monitor 9. The pivot range is shown by the arc “β” in Figure 1.
[0058] In addition, each reading light 11 includes a row of push buttons, generally identified by the numeral 37 in Figure 3, that include on/off buttons and a brightness controller button.
[0059] Each reading light 11 also includes an image directing film (not shown) to shift the light focus in any desired direction. An example of an image directing film is a 3M product called Vikuity (Trade Mark) image directing film. It is a 0.15 mm thick film that redirects any image or light source by 20°.
[0060] With the above-described arrangement, adjustment of the direction of light from the reading light 11 is possible via adjustment of the light 11 per se with respect to the monitor 9 of the assembly 5 and via adjustment of the assembly 5 with respect to the back 7 of the seat 3. Consequently, it is possible for a passenger to accurately direct a confined light beam onto a required viewing area, such as an extended tray table 35, as illustrated by the shaded region extending from the reading light 11 at the back of the middle seat shown in Figure 1.
[0061] Many modifications may be made to the preferred embodiment of the present invention described above without departing from the spirit and scope of the present invention.
Claims
1. A passenger seat (3) for an aircraft that includes a personalized visual display monitor assembly (5) mounted to a back (7) of the passenger seat (3) for use by a passenger when seated in a passenger seat (3) immediately behind the passenger seat (3), the visual display monitor assembly (5) including a personalised reading light (11) to direct light rearwardly from the passenger seat (3) into a space immediately in front of the passenger facing the back (7) of the passenger seat (3),
wherein the visual display monitor assembly (5) is mounted to the back (7) of the passenger seat (3) so that the visual display monitor assembly (5) is adjustable with respect to the passenger seat (3) to facilitate adjustment of the direction of light from the reading light (11), and wherein the visual display monitor assembly (5) and the reading light (11) are powered by a common electrical circuit, characterized in that the direction of light from the reading light (11) is adjustable by the passenger independently of a position of the visual display monitor assembly (5) and the personalised light directs light downwardly from the visual display monitor assembly (5).
2. The passenger seat defined in claim 1, wherein the reading light (11) is mounted so that the direction of light from the reading light (11) can be adjusted upwardly and downwardly about a horizontal axis within a range of angles.
3. The passenger seat defined in claim 1 or claim 2, wherein the reading light (11) is adjustable with respect to other components (9) of the visual display monitor assembly (5), thereby to facilitate adjustment of the direction of light from the reading light (11).
4. The passenger seat defined in any one of claims 1 to 3, wherein the visual display monitor assembly (5) is mounted for pivotal movement to the passenger seat (3) and the position of the visual display monitor assembly (5) with respect to the passenger seat (3) can be varied between a first retracted position in which the visual display monitor assembly (5) is generally parallel to the back (7) of the passenger seat (3) and a second pivoted position in which the visual display monitor assembly (5) is at an angle with respect to the back (7) of the passenger seat (3).
5. The passenger seat defined in any one of claims 1 to 3, wherein the reading light (11) includes a plurality of light emitting diodes (31).
6. The passenger seat defined in claim 5, wherein the reading light (11) includes a plurality of light emitting diodes (31) arranged in a line.
7. The passenger seat defined in claim 6, wherein the reading light (11) includes a tubular housing (33) for the line of light emitting diodes (31) and the housing (33) is mounted to the back (7) of the passenger seat (3) for rotation about the lengthwise extending axis of the tube, thereby to facilitate adjustment of the direction of light from the reading light (11).
8. The passenger seat defined in any one of claims 1 to 3, wherein the visual display monitor assembly (5) includes a liquid crystal display monitor assembly.
9. An aircraft that includes an aircraft passenger cabin, a plurality of passenger seats arranged in rows in the cabin, with at least one passenger seat according to any of the preceding claims.
Patentansprüche
1. Ein Passagiersitz (3) für ein Flugzeug beinhaltend eine personalisierte Monitoranordnung (5) zur visuellen Anzeige montiert an einer Rückseite (7) von dem Passagiersitz (3) zur Benutzung durch einen Passagier, wenn dieser in einem Passagiersitz (3) direkt hinter dem Passagiersitz (3) platziert ist, wobei die Monitoranordnung (5) zur visuellen Anzeige ein personalisiertes Leselicht (11) beinhaltet zum Lenken von Licht nach hinten von dem Passagiersitz (3) in einen Raum direkt vor dem Passagier, der der Rückseite (7) von dem Passagiersitz (3) zugewandt ist, wobei die Monitoranordnung (5) zur visuellen Anzeige an die Rückseite (7) von dem Passagiersitz (3) so montiert ist, dass die Monitoranordnung (5) zur visuellen Anzeige bezüglich des Passagiersitzes (3) einstellbar ist, um eine Einstellung der Richtung des Lichts von dem Leselicht (11) zu erleichtern, und wobei die Monitoranordnung (5) zur visuellen Anzeige und das Leselicht (11) durch einen gemeinsamen elektrischen Stromkreis angetrieben werden, gekennzeichnet dadurch, dass die Richtung des Lichts vom Leselicht (11) durch den Passagier einstellbar ist, unabhängig von einer Position von der Monitoranordnung (5) zur visuellen Anzeige, und das personalisierte Licht nach unten von der Monitoranordnung (5) zur visuellen Anzeige lenkt.
2. Der Passagiersitz gemäß Anspruch 1, wobei das Leselicht (11) so montiert ist, dass die Richtung des Lichts von dem Leselicht (11) nach oben eingestellt werden kann und nach unten um eine horizontale Achse innerhalb eines Bereichs von Winkeln.
3. Der Passagiersitz gemäß Anspruch 1 oder Anspruch 2, wobei das Leselicht (11) bezüglich anderer Komponenten (9) von der Monitoranordnung (5) zur visuellen Anzeige einstellbar ist, um dadurch die Einstellung der Richtung des Lichts von dem Leselicht (11) zu erleichtern.
4. Der Passagiersitz gemäß irgendeinem der Ansprüche 1 bis 3, wobei die Monitoranordnung (5) zur visuellen Anzeige für eine Drehbewegung an den Passagiersitz (3) montiert ist, und die Position von der Monitoranordnung (5) zur visuellen Anzeige bezüglich des Passagiersitzes (3) zwischen einer ersten eingezogenen Position, in welcher die Monitoranordnung (5) zur visuellen Anzeige im Wesentlichen parallel mit der Rückseite (7) von dem Passagiersitz (3) ist, und einer zweiten gedrehten Position, in welcher die Monitoranordnung (5) zur visuellen Anzeige bei einem Winkel bezüglich der Rückseite (7) von dem Passagiersitz (3) ist, variiert werden kann.
5. Der Passagiersitz gemäß irgendeinem der Ansprüche 1 bis 3, wobei das Leselicht (11) eine Mehrzahl von lichtemittierenden Dioden (31) beinhaltet.
6. Der Passagiersitz gemäß Anspruch 5, wobei das Leselicht (11) eine Mehrzahl von lichtemittierenden Dioden (31) beinhaltet, die in einer Reihe angeordnet sind.
7. Der Passagiersitz gemäß Anspruch 6, wobei das Leselicht (11) ein röhrenförmiges Gehäuse (33) für die Reihe von lichtemittierenden Dioden (31) beinhaltet und wobei das Gehäuse (33) an der Rückseite (7) vom Passagiersitz (3) montiert ist zur Rotation um die sich längs erstreckende Achse von der Röhre, um dabei das Einstellen von der Richtung des Lichts von dem Leselicht (11) zu erleichtern.
8. Der Passagiersitz gemäß irgendeinem der Ansprüche 1 bis 3, wobei die Monitoranordnung (5) zur visuellen Anzeige eine Flüssigkristallbildschirmanordnung beinhaltet.
9. Ein Flugzeug, das eine Flugzeugpassagierkabine beinhaltet, eine Mehrzahl von Passagiersitzen angeordnet in Reihen in der Kabine, mit zumindest einem Passagiersitz gemäß irgendeinem der vorhergehenden Ansprüche.
Revendications
1. Siège pour passager (3) pour un aéronef qui inclut un assemblage d'écran d'affichage visuel personnalisé (5) monté au dos (7) du siège pour passager (3) pour utilisation par un passager lorsqu'il est assis dans un siège pour passager (3) immédiatement derrière le siège pour passager (3), l'assemblage d'écran d'affichage visuel (5) incluant une lampe de lecture personnalisée (11) pour diriger de la lumière vers l'arrière du siège pour passager (3) dans un espace immédiatement devant le passager faisant face au dos (7) du siège pour passager (3),
dans lequel l'assemblage d'écran d'affichage visuel (5) est monté au dos (7) du siège pour passager (3) de façon que l'assemblage d'écran d'affichage visuel (5) soit réglable par rapport au siège pour passager (3) pour faciliter le réglage de la direction de la lumière issue de la lampe de lecture (11),
dans lequel l'assemblage d'écran d'affichage visuel (5) et la lampe de lecture (11) sont alimentés par un circuit électrique commun,
caractérisé en ce que la direction de la lumière issue de la lampe de lecture (11) est réglable par le passager indépendamment de la position de l'assemblage d'écran d'affichage visuel (5) et en ce que la lampe personnalisée dirige de la lumière vers l'arrière de l'assemblage d'écran d'affichage visuel (5).
2. Siège pour passager selon la revendication 1, dans lequel la lampe de lecture (11) est montée de façon que la direction de la lumière issue de la lampe de lecture (11) puisse être réglée vers le haut et vers le bas à l'intérieur d'une certaine plage d'angles par rapport à un axe horizontal.
3. Siège pour passager selon la revendication 1 ou la revendication 2, dans lequel la lampe de lecture (11) est réglable par rapport à d'autres composants (9) de l'assemblage d'écran d'affichage visuel (5), pour faciliter ainsi le réglage de la direction de la lumière issue de la lampe de lecture (11).
4. Siège pour passager selon l'une quelconque des revendications 1 à 3, dans lequel l'assemblage d'écran d'affichage visuel (5) est monté pour un mouvement pivotant par rapport au siège pour passager (3) et dans lequel la position de l'assemblage d'écran d'affichage visuel (5) par rapport au siège pour passager (3) peut être modifiée entre une première position rétractée dans laquelle l'assemblage d'écran d'affichage visuel (5) est globalement parallèle au dos (7) du siège pour passager (3) et une seconde position pivotée dans laquelle l'assemblage d'écran d'affichage visuel (5) est à un certain angle par rapport au dos (7) du siège pour passager (3).
5. Siège pour passager selon l'une quelconque des revendications 1 à 3, dans lequel la lampe de lecture (11) inclut une pluralité de diodes électroluminescentes (31).
6. Siège pour passager selon la revendication 5, dans lequel la lampe de lecture (11) inclut une pluralité de diodes électroluminescentes (31) agencées en une ligne.
7. Siège pour passager selon la revendication 6, dans lequel la lampe de lecture (11) inclut un boîtier tubulaire (33) pour la ligne de diodes électroluminescentes (31) et dans lequel le boîtier (33) est monté au dos (7) du siège pour passager (3) pour rotation autour de l'axe s'étendant suivant la longueur du tube, pour faciliter ainsi le réglage de la direction de la lumière issue de la lampe de lecture (11).
8. Siège pour passager selon l'une quelconque des revendications 1 à 3, dans lequel l'assemblage d'écran d'affichage visuel (5) inclut un assemblage d'écran d'affichage à cristaux liquides.
9. Aéronef qui inclut une cabine pour passagers d'aéronef, une pluralité de sièges pour passager agencés en rangées dans la cabine, avec au moins un siège pour passager selon l'une quelconque des revendications précédentes.
Fig. 1
Fig. 2
Fig. 3
Fig. 4
Fig. 5
REFERENCES CITED IN THE DESCRIPTION
This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.
Patent documents cited in the description
- BE 868863 [0012]
- DE 4321547 [0012]
EUROPEAN PATENT SPECIFICATION
(45) Date of publication and mention of the grant of the patent:
08.02.2006 Bulletin 2006/06
(21) Application number: 02721432.9
(22) Date of filing: 14.03.2002
(51) Int Cl.:
G02C 7/02 (2006.01)
G02B 3/10 (2006.01)
(86) International application number:
PCT/US2002/007943
(87) International publication number:
WO 2002/084382 (24.10.2002 Gazette 2002/43)
(54) PROGRESSIVE ADDITION LENSES
PROGRESSIVE ADDITIONSLINSEN
VERRES / LENTILLES A FOYER PROGRESSIF
(84) Designated Contracting States:
AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR
(30) Priority: 10.04.2001 US 832236
(43) Date of publication of application:
14.01.2004 Bulletin 2004/03
(72) Inventor: MENEZES, Edgar, V.
Roanoke, VA 24018 (US)
(74) Representative: Perin, Georges
Cabinet Plasseraud,
65/67, rue de la Victoire
75440 Paris Cedex 9 (FR)
(73) Proprietor: ESSILOR INTERNATIONAL
(COMPAGNIE GENERALE D'OPTIQUE)
94220 Charenton le Pont (FR)
(56) References cited:
EP-A- 1 026 533 EP-A- 1 063 556
WO-A-00/72051 US-A- 5 631 798
US-A- 5 726 734 US-A- 5 886 766
US-A- 6 074 062 US-B1- 6 199 984
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).
Description
Field of the Invention
[0001] The present invention relates to multifocal ophthalmic lenses. In particular, the invention provides progressive addition lens designs and lenses in which unwanted lens astigmatism is reduced as compared to conventional progressive addition lenses.
Background of the Invention
[0002] The use of ophthalmic lenses for the correction of ametropia is well known. For example, multifocal lenses, such as progressive addition lenses ("PAL's"), are used for the treatment of presbyopia. The progressive surface of a PAL provides far, intermediate, and near vision in a gradual, continuous progression of vertically increasing dioptric power from far to near focus, or top to bottom of the lens.
[0003] PAL's are appealing to the wearer because PAL's are free of the visible ledges between the zones of differing dioptric power that are found in other multifocal lenses, such as bifocals and trifocals. However, an inherent disadvantage in PAL's is unwanted astigmatism, or astigmatism introduced or caused by one or more of the lens' surfaces. In hard design PAL's, the unwanted astigmatism borders the lens channel and near vision zone. In soft design PAL's, the unwanted astigmatism extends into the distance vision zone. Generally, in both designs the unwanted lens astigmatism at or near its approximate center reaches a maximum that corresponds approximately to the near vision dioptric add power of the lens.
[0004] Many PAL designs are known that attempt to reduce unwanted astigmatism with varying success. One such design is disclosed in United States Patent No. 5,726,734 and uses a composite design that is computed by combining the sag values of a hard and a soft PAL design. The design disclosed in this patent is such that the maximum, localized unwanted astigmatism for the composite design is the sum of the contributions of the hard and soft designs' areas of maximum, localized unwanted astigmatism. Due to this, the reduction in the maximum, localized unwanted astigmatism that may be realized by this design is limited.
[0005] European patent application EP 1026533 A2 discloses lenses which comprise one or more progressive addition surfaces and one or more regressive surfaces. The distance vision zones, near vision zones and channels of the progressive and regressive surfaces may be aligned.
[0006] Therefore, a need exists for a design that permits even greater reductions of maximum, localized unwanted astigmatism than in prior art designs.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007]
FIG. 1 is an illustration of the distortion area of a progressive lens.
FIG. 2a is a cylinder contour of the progressive surface used in the lens of Example 1.
FIG. 2b is a power contour of the progressive surface used in the lens of Example 1.
FIG. 3a is a cylinder map of the regressive surface used in the lens of Example 1.
FIG. 3b is a power map of the regressive surface used in the lens of Example 1.
FIG. 4a is a cylinder contour of the composite surface of Example 1.
FIG. 4b is the power contour of the composite surface of Example 1.
FIG. 5 is the cylinder contour of the concave progressive surface of Example 2.
FIG. 6a is the cylinder contour of the lens of Example 2.
FIG. 6b is the power contour of the lens of Example 2.
FIG. 7a is the cylinder contour of a conventional lens.
FIG. 7b is the power contour of a conventional lens.
FIG. 8 is the cylinder contour of the concave progressive addition surface of the lens of Example 3.
FIG. 9a is the cylinder contour of the lens of Example 3.
FIG. 9b is the power contour of the lens of Example 3.
DESCRIPTION OF THE INVENTION AND ITS PREFERRED EMBODIMENTS
[0008] The present invention provides an ophthalmic lens according to claim 1.
[0009] The present invention also provides a method for designing an ophthalmic lens according to claim 2.
[0010] By "lens" or "lenses" is meant any ophthalmic lens including, without limitation, spectacle lenses, contact lenses,
intraocular lenses and the like. Preferably, the lens of the invention is a spectacle lens.
[0011] By "progressive addition surface" is meant a continuous, aspheric surface having distance and near viewing or vision zones, and a zone of increasing dioptric power connecting the distance and near zones. One ordinarily skilled in the art will recognize that, if the progressive surface is the convex surface of the lens, the distance vision zone curvature will be less than that of the near zone curvature and if the progressive surface is the lens' concave surface, the distance curvature will be greater than that of the near zone.
[0012] By "area of unwanted astigmatism" is meant an area on the lens surface having about 0.25 diopters or more of unwanted astigmatism.
[0013] By "regressive surface" is meant a continuous, aspheric surface having zones for distance and near viewing or vision, and a zone of decreasing dioptric power connecting the distance and near zones. If the regressive surface is the convex surface of the lens, the distance vision zone curvature will be greater than that of the near zone and if the regressive surface is the lens' concave surface, the distance curvature will be less than that of the near zone.
[0014] By "aligned" in relation to the areas of unwanted astigmatism is meant that the areas of unwanted astigmatism are disposed so that there is partial or substantially total superposition or coincidence when the surfaces are combined to form the composite surface.
[0015] A number of optical parameters conventionally are used to define and optimize a progressive design. These parameters include areas of unwanted astigmatism, areas of maximum, localized unwanted astigmatism, channel length and width, distance and reading zone widths, reading power width, and normalized lens distortion. Normalized lens distortion is the integrated, unwanted astigmatism of the lens below the optical center, or primary reference point, divided by the dioptric add power of the lens. Referring to Fig. 1, for progressive addition lenses, the normalized lens distortion, $D_L$, can be calculated by the equation:
$$D_L = M_A/(3A_p) \left\{ A_L/2 - A_I - \pi N_W^2/4 \right\} \quad (I)$$
wherein: $A_L$ is the lens area; $N_W$ is the near width; $M_A$ is the maximum, localized, unwanted astigmatism (the highest, measurable level of astigmatism in an area of unwanted astigmatism on a lens surface); and $A_p$ is the dioptric power of the lens at $y = -20$ mm below the primary reference point. $A_I$ is the area of the intermediate zone where the unwanted astigmatism is less than 0.5 diopters and is calculated by the equation:
$$A_I = I_L/2 \left[ I_W + D_W \right] + (C_L - I_L)/2 \left[ I_W + N_W \right] \quad (II)$$
where: $I_W$ is the width of the intermediate zone where the unwanted astigmatism is less than 0.5 diopters; $D_W$ and $N_W$ are the widths of the distance (at $y = 0$) and near (at $y = -20$ mm) viewing zones, respectively, where the unwanted astigmatism is less than about 0.5 diopters; $C_L$ is the channel length; and $I_L$ is the length along the center of the channel between the prism reference point and the narrowest width in the intermediate zone.
[0016] For purposes of Equation II, the near and intermediate widths are not synonymous with reading and channel width. Rather, whereas reading and channel width are defined based on clinically relevant thresholds for good vision, the near and intermediate widths of Equation II are based on a 0.5 diopter astigmatic threshold.
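For illustration only, Equations (I) and (II) can be evaluated numerically. The sketch below is not part of the specification; every input value (zone widths, channel length, add power) is hypothetical, chosen simply to show the arithmetic for a 60 mm round lens blank.

```python
import math

def intermediate_area(i_l, c_l, i_w, d_w, n_w):
    """Equation (II): area (mm^2) of the intermediate zone where
    unwanted astigmatism stays below 0.5 diopters."""
    return i_l / 2 * (i_w + d_w) + (c_l - i_l) / 2 * (i_w + n_w)

def normalized_distortion(m_a, a_p, a_l, a_i, n_w):
    """Equation (I): normalized lens distortion D_L."""
    return m_a / (3 * a_p) * (a_l / 2 - a_i - math.pi * n_w ** 2 / 4)

# Hypothetical inputs: 60 mm blank, 2.00 D add, widths/lengths in mm.
a_l = math.pi * 30 ** 2                                  # lens area
a_i = intermediate_area(i_l=8, c_l=16, i_w=6, d_w=40, n_w=15)
d_l = normalized_distortion(m_a=2.0, a_p=2.0, a_l=a_l, a_i=a_i, n_w=15)
print(round(a_i, 1), round(d_l, 1))
```

With these invented inputs the result can be compared against the "less than about 300" figure for normalized distortion given in paragraph [0017].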
[0017] In the lenses of the invention, the normalized lens distortion is significantly reduced compared to conventional progressive addition lenses. Thus, in a preferred embodiment, the invention provides progressive addition lenses comprising, consisting essentially of, and consisting of at least one progressive addition surface having a normalized lens distortion of less than about 300.
[0018] In the lenses of the invention, the dioptric add power, or the amount of dioptric power difference between the distance and near vision zones, of the progressive surface design is a positive value and that of the regressive surface design, a negative value. Thus, because the add power of the composite surface is the sum of the progressive and regressive surface designs' dioptric add powers, the regressive surface design acts to subtract dioptric add power from the progressive surface design.
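The add-power bookkeeping above can be sketched in a few lines. The function and the numeric values are hypothetical illustrations, not taken from the specification: if the finished lens must supply a given near add, the progressive surface design must carry enough positive add to offset the negative add of the regressive design.

```python
def required_progressive_add(target_add, regressive_add):
    """Composite add = progressive add + regressive add, so the
    progressive surface supplies the difference (diopters)."""
    return target_add - regressive_add

# Hypothetical: a +2.00 D wearer add paired with a -0.75 D regressive surface
print(required_progressive_add(2.00, -0.75))  # 2.75
```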
[0019] It is known that a progressive addition surface produces unwanted astigmatism at certain areas on the surface. The unwanted astigmatism of an area may be considered a vector quantity with a magnitude and axis of orientation that depends, in part, on the location of the astigmatism on the surface. A regressive surface also has areas of unwanted astigmatism; the magnitude and axis of the regressive surface astigmatism are determined by the same factors that are determinative for the progressive surface astigmatism. However, the axis of the regressive surface astigmatism typically is orthogonal to that of the progressive surface astigmatism. Alternately, the magnitude of the regressive surface astigmatism may be considered to be opposite in sign to that of the progressive surface astigmatism at the same axis.
[0020] Thus, combining a progressive surface design having an area of unwanted astigmatism with a regressive surface design having a comparably located area of unwanted astigmatism reduces the total unwanted astigmatism for that area when the two designs are combined to form a composite surface of a lens. The reason for this is that the unwanted astigmatism of the lens at a given location will be the vector sum of the unwanted astigmatisms of the progressive and regressive surface designs. Because the magnitudes of the progressive addition and regressive surface designs' astigmatisms have opposite signs, a reduction in the total unwanted astigmatism of the composite surface is achieved. Although the axis of orientation of the unwanted astigmatism of the regressive surface design need not be the same as that at a comparable location on the progressive surface design, preferably the axes are substantially the same so as to maximize the reduction of unwanted astigmatism.
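The vector addition described above can be sketched with the standard double-angle ("power vector") representation of cylinder, in which a cylinder $C$ at axis $\theta$ maps to the vector $(C\cos 2\theta,\; C\sin 2\theta)$; orthogonal axes then map to opposite vector directions and partially cancel. This sketch and its sample magnitudes are illustrative only and are not taken from the specification.

```python
import math

def to_vector(cyl, axis_deg):
    """Map cylinder magnitude and axis to its double-angle vector."""
    t = math.radians(2 * axis_deg)
    return (cyl * math.cos(t), cyl * math.sin(t))

def combine(cyl1, axis1, cyl2, axis2):
    """Vector-sum two crossed cylinders; return resultant (cyl, axis)."""
    x = to_vector(cyl1, axis1)[0] + to_vector(cyl2, axis2)[0]
    y = to_vector(cyl1, axis1)[1] + to_vector(cyl2, axis2)[1]
    mag = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return mag, axis

# 1.5 D of unwanted astigmatism at axis 90 (progressive design) combined
# with 1.0 D at the orthogonal axis 180 (regressive design):
mag, axis = combine(1.5, 90, 1.0, 180)
print(round(mag, 2), round(axis, 1))  # residual 0.5 D at axis 90
```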
[0021] At least one area of astigmatism of the progressive surface design must be aligned with one area of astigmatism of the regressive surface design to achieve a reduction of unwanted astigmatism in the composite surface. Preferably, the areas of maximum, localized unwanted astigmatism, or the areas of highest, measurable unwanted astigmatism, of each of the surface designs are aligned. More preferably, all areas of unwanted astigmatism of one surface design are aligned with those of the other.
[0022] In another embodiment, the surfaces' distance and near zones, as well as the channels, are aligned. By aligning the surfaces in such a manner, one or more areas of unwanted astigmatism of the progressive surface design will overlap with one or more such areas on the regressive surface design. In another embodiment, the invention provides a surface of a lens comprising, consisting essentially of, and consisting of one or more progressive addition surface designs and one or more regressive surface designs, wherein the distance vision zones, near vision zones and channels of the progressive and regressive surface designs are substantially aligned.
[0023] In the lenses of the invention, the composite surface may be on the convex, concave, or both surfaces of the lens or in layers between these surfaces. In a preferred embodiment, the composite surface forms the convex lens surface. One or more progressive addition and regressive surface designs may be used in the composite surface, but preferably only one of each surface is used. In embodiments in which a composite surface is the interface layer between the concave and convex surfaces, preferably the materials used for the composite surface are of refractive indices that differ by at least about 0.01, preferably at least about 0.05, more preferably at least about 0.1.
[0024] One ordinarily skilled in the art will recognize that the progressive addition and regressive surface designs useful in the invention may be either of a hard or soft design type. By hard design is meant a surface design in which the unwanted astigmatism is concentrated below the surface's optical center and in the zones bordering the channel. A soft design is a surface design in which the unwanted astigmatism is extended into the lateral portions of the distance vision zone. One ordinarily skilled in the art will recognize that, for a given dioptric add power, the magnitude of the unwanted astigmatism of a hard design will be greater than that of a soft design because the unwanted astigmatism of the soft design is distributed over a wider area of the lens.
[0025] In the lens of the invention, preferably, the progressive addition surface designs are of a soft design and the regressive surface designs are of a hard design. Thus, in yet another embodiment, the invention provides a lens surface comprising, consisting essentially of, and consisting of one or more progressive addition surface designs and one or more regressive surface designs, wherein the one or more progressive addition surface designs are soft designs and the one or more regressive surface designs are hard designs. More preferably, the progressive addition surface design has a maximum unwanted astigmatism that is less in absolute magnitude than the surface's dioptric add power and, for the regressive surface design, is greater in absolute magnitude.
[0026] The composite progressive surface of the invention is provided by first designing a progressive addition and a regressive surface. Each of the surfaces is designed so that, when combined with the design of the other surface or surfaces to form the composite progressive surface, substantially all of the areas of maximum, localized unwanted astigmatism are aligned. Preferably, each surface is designed so that the maxima of the unwanted astigmatism areas are aligned and, when the surfaces' designs are combined to obtain the composite surface design, the composite surface exhibits maximum, localized unwanted astigmatism that is lower than the sum of the absolute values of the maxima of the combined surfaces by at least about 0.125 diopters, preferably by at least about 0.25 diopters.
[0027] More preferably, each of the progressive and regressive surfaces is designed so that, when combined to form the composite surface, the composite surface has more than one area of maximum, localized unwanted astigmatism on each side of the composite surface's channel. This use of multiple maxima further decreases the magnitude of the areas of unwanted astigmatism on the composite surface. In a more preferred embodiment, the areas of maximum, localized unwanted astigmatism of the composite surface form plateaus. In a most preferred embodiment, the composite surface has more than one area of maximum, localized unwanted astigmatism in the form of plateaus on each side of the composite surface's channel.
[0028] Designing of the progressive and regressive surfaces used to form the composite surface design is within the skill of one of ordinary skill in the art using any number of known design methods and weighting functions. Preferably, however, the surfaces are designed using a design method that divides the surface into a number of sections and provides a curved-surface equation for each section as, for example, is disclosed in United States Patent No. 5,886,766.
[0029] The surface designs useful in the lenses of the invention may be provided by using any known method for designing progressive and regressive surfaces. For example, commercially available ray tracing software may be used to design the surfaces. Additionally, optimization of the surfaces may be carried out by any known method.
[0030] In optimizing the designs of the individual surfaces or the composite surface, any optical property may be used to drive the optimization. In a preferred method, the near vision zone width, defined by the constancy of the spherical or equivalent spherocylindrical power in the near vision zone, may be used. In another preferred method, the magnitude and location of the peaks or plateaus of the maximum, localized unwanted astigmatism may be used. Preferably, for purposes of this method, the location of the peaks and plateaus is set outside of a circle having an origin at \( x = 0, y = 0 \), or the fitting point, as its center and a radius of 15 mm. More preferably, the \( x \) coordinate of the peak is such that \( |x| > 12 \) mm and \( y < -12 \) mm.
[0031] Optimization may be carried out by any convenient method known in the art. Additional properties of a specific lens wearer may be introduced into the design optimization process, including, without limitation, variations in pupil diameter of about 1.5 to about 5 mm, image convergence at a point about 25 to about 28 mm behind the front vertex of the surface, pantoscopic tilt of about 7 to about 20 degrees, and the like, and combinations thereof.
[0032] The progressive and regressive surface designs used to form the composite progressive surface may be expressed in any of a variety of manners, including, and preferably, as sag departures from a base curvature, which may be either a concave or convex curvature. Preferably, the surfaces are combined on a one-to-one basis, meaning that the sag value \( Z_1 \) at point \((x, y)\) of a first surface is added to the sag value \( Z_2 \) at the same point \((x, y)\) on a second surface. By "sag" is meant the absolute magnitude of the \( z \) axis distance between a point on a progressive surface located at coordinates \((x, y)\) and a point located at the same coordinates on a reference, spherical surface of the same distance power.
[0033] More specifically in this embodiment, following designing and optimizing of each surface, the sag values of the surfaces are summed to obtain the composite surface design, the summation performed according to the following equation:

$$Z(x, y) = \sum_i a_i Z_i(x, y) \quad (III)$$
wherein \( Z \) is the composite surface sag value departure from a base curvature at point \((x, y)\), \( Z_i \) is the sag departure for the \( i \)th surface to be combined at point \((x, y)\) and \( a_i \) are coefficients used to multiply each sag table. Each of the coefficients may be of a value between about -10 and about +10, preferably between about -5 to about +5, more preferably between about -2 and about +2. The coefficients may be chosen so as to convert the coefficient of highest value to about + or -1, the other coefficients being scaled appropriately to be less than that value.
[0034] It is critical to perform the sag value summation using the same coordinates for each surface so that the distance and near powers desired for the composite surface are obtained. Additionally, the summation must be performed so that no unprescribed prism is induced into the composite surface. Thus, the sag values must be added from the coordinates of each surface using the appropriate coordinate systems and origins. Preferably, the origin from which the coordinate system is based will be the prism reference point of the surface, or the point of least prism. It is preferable to displace the sag values of one surface relative to the other along a set of meridians by a constant or a variable magnitude before performing the summation operation. The displacement may be along the \( x-y \) plane, along a spherical or aspherical base curve, or along any line on the \( x-y \) plane. Alternatively, the displacement may be a combination of angular and linear displacements to introduce prism into the lens.
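The one-to-one sag combination of Equation (III) amounts to a pointwise weighted sum over a shared \((x, y)\) grid referenced to the prism reference point. The sketch below is an illustrative reading of that step; the sag tables, grid points, and coefficients are hypothetical, not values from the specification.

```python
def combine_sag_tables(tables, coeffs):
    """Equation (III): Z(x, y) = sum_i a_i * Z_i(x, y), with every
    sag table indexed by the same (x, y) coordinates so no
    unprescribed prism is introduced."""
    points = tables[0].keys()
    assert all(t.keys() == points for t in tables), "grids must match"
    return {p: sum(a * t[p] for a, t in zip(coeffs, tables)) for p in points}

# Hypothetical 2x2 sag tables (mm) for a progressive and a regressive design:
z_prog = {(0, 0): 0.00, (0, -20): 0.80, (10, 0): 0.05, (10, -20): 0.90}
z_regr = {(0, 0): 0.00, (0, -20): -0.30, (10, 0): 0.02, (10, -20): -0.35}
z_comp = combine_sag_tables([z_prog, z_regr], coeffs=[1.0, 1.0])
print(z_comp[(0, -20)])  # 0.80 + (-0.30)
```

Scaling the largest coefficient to about ±1, as the text suggests, only rescales the composite sag values; the pointwise structure of the sum is unchanged.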
[0035] The distance and near vision powers for the progressive and regressive surface designs are selected so that, when the designs are combined to form the composite surface, the powers of the lens are those needed to correct the wearer's visual acuity. The dioptric add power for the progressive addition surface designs used in the invention each independently may be about +0.01 to about +6.00 diopters, preferably about +1.00 diopters to about +5.00 diopters, and more preferably about +2.00 diopters to about +4.00 diopters. The dioptric add powers of the regressive surface designs each independently may be about -0.01 to about -6.00 diopters, preferably about -0.25 to about -3.00 diopters, and more preferably about -0.50 to about -2.00 diopters.
In the case in which more than one composite progressive surface is used to form the lens, or the composite surface is used in combination with one or more progressive surfaces, the dioptric add power of each of the surfaces is selected so that the combination of their dioptric add powers results in a value substantially equal to the value needed to correct the lens wearer's near vision acuity. The dioptric add power of each of the surfaces may be from about +0.01 diopters to about +3.00 diopters, preferably from about +0.50 diopters to about +5.00 diopters, more preferably about +1.00 to about +4.00 diopters. Similarly, the distance and near dioptric powers for each surface are selected so that the sum of the powers is the value needed to correct the wearer's distance and near vision. Generally, the distance curvature for each surface will be within the range of about 0.25 diopters to about 8.50 diopters. Preferably, the curvature of the distance zone of a concave surface may be about 2.00 to about 5.50 diopters and, for a convex surface, about 0.5 to about 8.00 diopters. The near vision curvature for each of the surfaces will be about 1.00 diopters to about 12.00 diopters.
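Since the surface designs above are expressed as curvatures in diopters and as sag departures in mm, it may help to sketch how the two are related. This is a generic optics relation, not taken from the specification: it assumes the standard surface-power formula D = 1000(n - 1)/R with R in mm, and the function names are illustrative.

```python
import math

def radius_mm(diopters, n=1.56):
    """Radius of curvature (mm) of a surface of the given power,
    assuming D = 1000 * (n - 1) / R for refractive index n."""
    return 1000.0 * (n - 1.0) / diopters

def spherical_sag_mm(diopters, r_mm, n=1.56):
    """Sag of that sphere at radial distance r_mm from the vertex."""
    R = radius_mm(diopters, n)
    return R - math.sqrt(R * R - r_mm * r_mm)

# e.g. a 5.23 D base curve at refractive index 1.56 (as in Example 1),
# evaluated 20 mm from the vertex:
R = radius_mm(5.23)               # roughly 107 mm
z = spherical_sag_mm(5.23, 20.0)  # sag of roughly 1.9 mm
```

A sag table such as $Z_1$ in Example 1 then records, at each \((x, y)\), the departure of the designed surface from a base sphere of this kind.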
[0037] Other surfaces, such as spherical, toric, aspheric and atoric surfaces, designed to adapt the lens to the ophthalmic prescription of the lens' wearer may be used in combination with, or in addition to, the composite progressive addition surface. Additionally, the individual surfaces each may have a spherical or aspherical distance vision zone. The channel, or corridor of vision free of unwanted astigmatism of about 0.75 or greater when the eye is scanning from the distance to the near zone and back, may be short or long. The maximum, localized unwanted astigmatism may be closer to the distance or near viewing zone. Further, combinations of any of the above variations may be used.
[0038] In a preferred embodiment, the lens of the invention has a convex composite surface and a concave progressive addition surface. The convex composite surface may be a symmetric or asymmetric soft design with an aspherical distance viewing zone and a channel length of about 10 to about 20 mm. The maximum, localized unwanted astigmatism is located closer to the distance than the near viewing zone and preferably is on either side of the channel. More preferably, the maximum, localized unwanted astigmatism is superior to the point on the surface at which the dioptric add power of the surface's channel reaches about 50 percent of the surface's dioptric add power. The distance viewing zone is aspherized to provide additional plus power to the surface of up to about 2.00 diopters, preferably up to about 1.00 diopters, more preferably up to about 0.50 diopters. Aspherization may be outside of a circle centered at the fitting point and having a radius of about 10 mm, preferably about 15 mm, more preferably about 20 mm.
[0039] The concave progressive surface of this embodiment is a symmetrical or asymmetrical, and preferably an asymmetrical, hard design, with a spherical distance viewing zone and a channel length of about 12 to about 22 mm. The distance viewing zone is designed to provide additional plus power of less than about 0.50 diopters, preferably less than about 0.25 diopters. The maximum, localized unwanted astigmatism is located closer to the near viewing zone, preferably on either side of the lower two-thirds of the channel.
[0040] In yet another embodiment, the lens of the invention has a convex composite surface and concave regressive surface. In still another embodiment, the lens has a convex composite surface, a regressive surface as an intermediate layer, and a spherocylindrical concave surface. In yet another embodiment, the convex surface is the composite surface, a regressive surface is an intermediate layer and the concave surface is a conventional progressive addition surface. In all embodiments it is critical that the distance, intermediate and near viewing areas of all surfaces align so as to be free of unwanted astigmatism.
[0041] The lenses of the invention may be constructed of any known material suitable for production of ophthalmic lenses. Such materials are either commercially available or methods for their production are known. Further, the lenses may be produced by any conventional lens fabrication technique including, without limitation, grinding, whole lens casting, molding, thermoforming, laminating, surface casting, or combinations thereof. Preferably, the lens is fabricated by first producing an optical preform, or lens with a regressive surface. The preform may be produced by any convenient means including, without limitation, injection or injection-compression molding, thermoforming, or casting. Subsequently, at least one progressive surface is cast onto the preform. Casting may be carried out by any means but preferably is performed by surface casting including, without limitation, as disclosed in United States Patent Nos. 5,147,585, 5,178,800, 5,219,497, 5,316,702, 5,358,672, 5,480,600, 5,512,371, 5,531,940, 5,702,819, and 5,793,465.
[0042] The invention will be clarified further by a consideration of the following, non-limiting examples.
Examples
Example 1
[0043] A soft design, convex progressive addition surface was produced as a sag table wherein $Z_1$ denoted the sag value departure from a base curvature of 5.23 diopters for the distance zone. In Figs. 2a and 2b are depicted the cylinder and power contours for this surface. The add power was 1.79 diopters with a channel length of 13.3 mm and maximum, localized, unwanted astigmatism of 1.45 diopters at $x = -8$ mm and $y = -8$ mm. The prism reference point used was $x = 0$ and $y = 0$ and the refractive index ("RI") was 1.56.
[0044] A hard design regressive surface design was produced for a convex surface as a sag table wherein $Z_2$ denoted the sag value departure from a base curvature of 5.22 diopters for the distance zone. In Figs. 3a and 3b are depicted the cylinder and power contours for this surface. The add power was -0.53 diopter, the channel length was 10.2 mm and the maximum, localized unwanted astigmatism was 0.71 diopters at $x = -10$ mm and $y = -10$ mm. The prism reference point used was $x = 0$ and $y = 0$ and the RI was 1.56.
[0045] A convex composite surface design was produced using Equation III wherein $a_1 = a_2 = 1$ to generate the sag value departures. In FIGS. 4a and 4b are depicted the cylinder and power contours for the composite surface, which surface has a base curvature of 5.23 diopters and an add power of 1.28 diopters. The composite surface contains a single maximum, localized unwanted astigmatism area located on either side of the channel. The magnitude of this astigmatism maximum was 0.87 diopters and the channel length was 13.0 mm. The composite surface's area of astigmatism
was located at $x = -10$ mm and $y = -18$ mm. The maximum astigmatism and normalized distortion of the composite surface was significantly lower, without compromise of the other optical parameters, than that of comparable dioptric add power prior art lenses. For example, a Varilux COMFORT® lens has a maximum astigmatism value and normalized distortion of 1.41 diopters and 361, respectively for a 1.25 diopter add power as shown in Table 2. For a composite surface lens the maximum astigmatism is 0.87 diopters and the normalized lens distortion of the lens is calculated to be 265.
Example 2
[0046] A concave progressive addition surface was designed using a material refractive index of 1.573, a base curvature of 5.36 diopters and an add power of 0.75 diopters. FIG. 5 depicts the cylinder contours of this surface. The maximum, localized astigmatism was 0.66 diopters at $x = -16$ mm and $y = -9$ mm. The prism reference point used was at $x = 0$ and $y = 0$.
[0047] This concave surface was combined with the convex composite surface from Example 1 to form a lens with a distance power of 0.08 diopters and an add power of 2.00 diopters. In Table 1 are listed the key optical parameters of this lens (Example 2), and in FIGS. 6a and 6b are depicted the cylinder and power contours. The maximum astigmatism is 1.36 diopters, significantly lower than that of the prior art lens shown in Table 1 as Prior Art Lens 1 (Varilux COMFORT®; FIGS. 7a and 7b). The normalized lens distortion of the lens is calculated to be 287, significantly less than that of the prior art lenses of Table 3. Additionally, none of the other optical parameters are compromised.
Example 3
[0048] In order to demonstrate the capability of the design approach of the invention to optimize specific optical parameters, specifically the reading power width, a concave progressive addition surface was designed using a material RI of 1.573, a base curvature of 5.4 diopters and an add power of 0.75 diopters. In FIG. 8 is depicted the cylinder contour of this surface. The maximum, localized astigmatism was 0.51 diopters at $x = -15$ mm and $y = -9$ mm. The prism reference point used was at $x = 0$ and $y = 0$.
[0049] This concave surface was combined with the convex composite surface from Example 1 to form a lens with a distance power of 0.05 diopters and an add power of 2.00 diopters. In Table 1 are listed the key optical parameters of this lens (Example 3), and in FIGS. 9a and 9b are shown the cylinder and power contours. The maximum astigmatism is 1.37 diopters, significantly lower than that of the prior art lens shown in Table 1 as Prior Art Lens 1 (Varilux COMFORT®; FIGS. 7a and 7b). The normalized lens distortion of the lens is calculated to be 289, which is significantly less than that of the prior art lenses of Table 3. The lower astigmatism of the concave surface smooths out the astigmatic contours and increases the reading power width from 7.4 mm to 8.6 mm. None of the other optical parameters are compromised.
Table 1
| Optical Parameter | Prior Art Lens 1 | Example 2 | Example 3 |
|----------------------------|------------------|-----------|-----------|
| Distance Power (D) | 0.00 | 0.00 | 0.00 |
| Add Power (D) | 1.99 | 2.01 | 2.01 |
| Distance Width (mm) | 13.5 | 12.6 | 12.6 |
| Reading Width (mm) | 17.6 | 14.6 | 15.2 |
| Reading Power Width (mm) | 13.9 | 7.4 | 8.6 |
| Channel Length (mm) | 12.2 | 12.4 | 12.2 |
| Channel Width (mm) | 6.3 | 8.9 | 8.8 |
| Max. Astig. Location (x, y in deg.) | 16.8-12.1 | 12.5-14.9 | 11.3-11.1 |
| Max. Astigmatism (D) | 2.46 | 1.36 | 1.37 |
Table 2
| | Varilux COMFORT® | Example 1 |
|--------------------------|------------------|-----------|
| Label Add Power (D) | 1.25 | 1.25 |
| $A_P$ (D) | 1.40 | 1.28 |
| $D_W$ (mm) | 45.65 | 30.00 |
| $I_W$ (mm) | 5.00 | 5.32 |
| $N_W$ (mm) | 7.50 | 9.27 |
| $I_L$ (mm) | 11.25 | 8.00 |
| Channel Length (mm) | 12.85 | 13.00 |
| $M_A$ (D) | 1.41 | 0.87 |
| Distortion Area (mm²) | 1075 | 1168 |
| $D_L$ | 361 | 265 |
Table 3
| | Varilux COMFORT® | Rodenstock MULTIGRESSIV® | Zeiss GRADAL® | Hoya EX® | Varilux PANAMIC® | Sola PERCEPTA® | Example 2 | Example 3 |
|----------------------|------------------|--------------------------|---------------|----------|-----------------|----------------|-----------|-----------|
| Label Add Power (D) | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 | 2.00 |
| $A_P$ (D) | 1.99 | 2.11 | 2.21 | 2.28 | 2.19 | 2.12 | 2.01 | 2.01 |
| $D_W$ (mm) | 13.50 | 10.20 | 14.45 | 13.05 | 10.25 | 14.20 | 12.60 | 12.60 |
| $I_W$ (mm) | 3.00 | 4.00 | 3.75 | 4.00 | 6.50 | 2.75 | 3.50 | 4.00 |
| $N_W$ (mm) | 10.00 | 10.00 | 5.50 | 6.00 | 14.90 | 11.50 | 8.00 | 8.00 |
| $I_L$ (mm) | 8.75 | 8.75 | 10.00 | 12.50 | 8.75 | 8.75 | 8.75 | 8.75 |
| Channel Length (mm) | 12.20 | 12.45 | 12.90 | 13.05 | 12.20 | 12.50 | 12.40 | 12.20 |
| $M_A$ (D) | 2.46 | 2.56 | 2.20 | 2.45 | 2.25 | 2.53 | 1.36 | 1.37 |
| Distortion Area (mm²)| 1241 | 1246 | 1286 | 1276 | 1129 | 1209 | 1272 | 1270 |
| $D_L$ | 511 | 504 | 427 | 457 | 387 | 481 | 287 | 289 |
Claims
1. An ophthalmic lens with a progressive addition surface, comprising a composite surface of a progressive surface and a regressive surface, wherein the composite surface exhibits a maximum, localized unwanted astigmatism that is at least about 0.125 diopters less than the sum of the absolute values of the maximum, localized astigmatism of each of the progressive and regressive surfaces.
2. A method for designing an ophthalmic lens with a progressive addition surface comprising the steps of: a.) designing a progressive surface comprising at least one first area of unwanted astigmatism; b.) designing a regressive surface comprising at least one second area of unwanted astigmatism; and c.) combining the progressive and regressive surface designs to form a composite progressive surface design, wherein the at least one first and second areas of unwanted astigmatism are substantially aligned.
3. The method of claim 2, wherein each of the progressive and regressive surface designs is one of a hard design, a soft design, or a combination thereof.
4. The method of claim 2, wherein each of the progressive and regressive surface designs are hard designs.
5. The method of claim 2, wherein each of the progressive and regressive surface designs are soft designs.
6. The method of claim 2, wherein a surface formed from the composite surface design exhibits a maximum, localized unwanted astigmatism that is about 0.125 diopters less than the sum of the absolute values of the maximum, localized unwanted astigmatism of each of the progressive and regressive surfaces.
7. The method of claim 2, wherein the composite surface design comprises more than one area of maximum, localized unwanted astigmatism on each side of the composite surface’s channel.
8. The method of claim 2, wherein the progressive and regressive surface designs are expressed as sag departures from a base curvature.
9. The method of claim 8, wherein the base curvature is a concave curvature or a convex curvature.
10. The method of claim 2 wherein step c.) is carried out by summing the progressive surface and regressive surface design sag values according to the following equation:
\[ Z(x, y) = \sum_i a_i Z_i(x, y) \] (I)
wherein \( Z \) is the composite surface sag value departure from a base curvature at point \((x, y)\), \( Z_i \) is the sag departure for the \( i \)th surface to be combined at point \((x, y)\) and \( a_i \) are coefficients.
Revendications
1. Lentille ophtalmique avec une surface de correction progressive, comprenant une surface composite d'une surface progressive et d'une surface régressive, dans laquelle la surface composite montre un astigmatisme indésirable localisé maximum qui est au moins inférieur d'environ 0,125 dioptrie à la somme d'une valeur absolue de l'astigmatisme localisé maximum de chacune des surfaces progressive et régressive.
2. Procédé pour concevoir une lentille ophtalmique avec une surface de correction progressive comprenant les étapes de : a) conception d'une surface progressive comprenant au moins une première aire d'astigmatisme indésirable ; b) conception d'une surface régressive comprenant au moins une deuxième aire d'astigmatisme indésirable ; et c) combinaison des surfaces progressive et régressive conçues pour former une surface progressive composite, dans laquelle les au moins une première et une deuxième aires d'astigmatisme indésirable sont substantiellement alignées.
3. Procédé selon la revendication 2, dans lequel chacune des surfaces progressive et régressive conçues est d'une
conception dure, d'une conception souple, ou d'une combinaison de celles-ci.
4. Procédé selon la revendication 2, dans lequel chacune des surfaces progressive et régressive conçues est de conception dure.
5. Procédé selon la revendication 2, dans lequel chacune des conceptions de surfaces progressive et régressive conçues est de conception souple.
6. Procédé selon la revendication 2, dans lequel une surface formée à partir de la surface composite conçue montre un astigmatisme indésirable localisé maximum qui est inférieur d'environ 0,125 dioptrie à la somme de la valeur absolue de l'astigmatisme localisé maximum de chacune des surfaces progressive et régressive.
7. Procédé selon la revendication 2, dans lequel la conception de surface composite comprend plus d'une aire d'astigmatisme indésirable localisé maximum de chaque côté du canal de la surface composite.
8. Procédé selon la revendication 2, dans lequel les surfaces progressive et régressive conçues sont exprimées en tant que départs d'écart à partir d'une courbure de base.
9. Procédé selon la revendication 8, dans lequel la courbure de base est une courbure concave ou une courbure convexe.
10. Procédé selon la revendication 2, dans lequel l'étape c) est mise en oeuvre en additionnant les valeurs d'écart des surface progressive et de surface régressive selon l'équation suivante :
\[ Z(x, y) = \sum_i a_i Z_i(x, y) \] (I)
où \( Z \) est le départ de valeur d'écart de surface composite à partir d'une courbure de base au point \((x,y)\), \( Z_i \) est le départ d'écart pour la \( i \)ème surface à combiner au point \((x,y)\), et \( a_i \) sont des coefficients.
Patentansprüche
1. Ophthalmische Linse mit einer progressiven Additionsfläche, welche eine aus einer Progressionsfläche und einer Regressionsfläche zusammengesetzte Fläche aufweist, wobei die zusammengesetzte Fläche eine maximale lokal begrenzte unerwünschte astigmatische Wirkung aufweist, welche mindestens um etwa 0,125 Dioptrien kleiner ist als die Summe der absoluten Beträge der maximalen lokal begrenzten astigmatischen Wirkung der Progressions- und der Regressionsfläche.
2. Verfahren zum Konstruieren einer ophthalmischen Linse mit einer progressiven Additionsfläche, welches die folgenden Schritte umfasst: a.) Konstruieren einer Progressionsfläche, welche mindestens einen ersten Bereich unerwünschter astigmatischer Wirkung aufweist; b.) Konstruieren einer Regressionsfläche, welche mindestens einen zweiten Bereich unerwünschter astigmatischer Wirkung aufweist; und c.) Kombinieren der Konstruktionen der Progressions- und der Regressionsfläche, so dass eine zusammengesetzte Progressionsflächenkonstruktion gebildet wird, wobei der mindestens eine erste und der mindestens eine zweite Bereich unerwünschter astigmatischer Wirkung im Wesentlichen gleich ausgerichtet sind.
3. Verfahren nach Anspruch 2, wobei die Konstruktionen der Progressions- und der Regressionsfläche jeweils vom Typ "Hard Design", vom Typ "Soft Design" oder eine Kombination davon sind.
4. Verfahren nach Anspruch 2, wobei die Konstruktionen der Progressions- und der Regressionsfläche jeweils vom Typ "Hard Design" sind.
5. Verfahren nach Anspruch 2, wobei die Konstruktionen der Progressions- und der Regressionsfläche jeweils vom Typ "Soft Design" sind.
6. Verfahren nach Anspruch 2, wobei eine Fläche, die von der zusammengesetzten Flächenkonstruktion gebildet wird,
eine maximale lokal begrenzte unerwünschte astigmatische Wirkung aufweist, welche um etwa 0,125 Dioptrien kleiner ist als die Summe der absoluten Beträge der maximalen lokal begrenzten astigmatischen Wirkung der Progressions- und der Regressionsfläche.
7. Verfahren nach Anspruch 2, wobei die zusammengesetzte Flächenkonstruktion mehr als einen Bereich maximaler lokal begrenzter unerwünschter astigmatischer Wirkung auf jeder Seite des Kanals der zusammengesetzten Fläche aufweist.
8. Verfahren nach Anspruch 2, wobei die Konstruktionen der Progressions- und der Regressionsfläche als Durchbiegungsabweichungen von einer Basiskrümmung ausgedrückt werden.
9. Verfahren nach Anspruch 8, wobei die Basiskrümmung eine konkave Krümmung oder eine konvexe Krümmung ist.
10. Verfahren nach Anspruch 2, wobei Schritt c.) ausgeführt wird, indem die Durchbiegungswerte der Konstruktion der Progressionsfläche und Regressionsfläche entsprechend der folgenden Gleichung addiert werden:
\[ Z(x, y) = \sum a_i Z_i(x, y), \quad (I) \]
wobei \( Z \) die Durchbiegungswertabweichung der zusammengesetzten Fläche von einer Basiskrümmung im Punkt \((x, y)\) ist, \( Z_i \) die Durchbiegungsabweichung für die i-te zu kombinierende Fläche im Punkt \((x, y)\) ist und \( a_i \) Koeffizienten sind.
FIG. 1 (schematic defining the zone dimensions \( D_W \), \( I_W \), \( N_W \), \( I_L \), and \( C_L \))
FIGS. 2b, 3b, 4b, 5, 6b, 7b, 8, and 9b (cylinder and power contour plots; contour labels in diopters)
Are You Walking the Walk? Or Just Talking the Talk?
So many times we attend LEAD sessions and walk away saying, “I knew that. What a waste of time,” or, “I’ve heard that before so why am I hearing it again?” …
(Read on … Page 2)
Why the Military Produces Great Leaders
(Read on … Page 4)
Your beliefs don’t make you a better person, your behavior does. Author Unknown
So many times we attend LEAD sessions and walk away saying “I knew that. What a waste of time,” or “I’ve heard that before so why am I hearing it again?” We certainly believed what we heard. And we may have already known what we heard -- but do our beliefs transfer into changed behaviors?
Changing behavior is difficult. In fact, it is more difficult than we sometimes acknowledge. People attempt to create change using only their minds. How many times have you made a New Year's resolution only to find, two months later, that merely thinking it into fulfillment has not worked? You believe the resolutions will make you better -- better health, more energy, more money -- but it just doesn't happen. Then you feel guilty because you did not follow through with what you believed you needed or wanted to accomplish.
One reason change may not occur is that we do not always understand the foundation upon which our behaviors are built. Our behavior develops from many influences in our life such as the role models we have had, e.g. parents, family members, coaches, teachers, and others we have observed. If we have positive role models, we have positive behaviors. If those role models are negative, our behaviors may skew toward being negative.
(continued on page 3)
**LEAD GOALS**
1. **Create a Culture of Shared Learning**
a. Using each other as resources; contributing to and tapping into the communal wisdom of our group.
b. Create an appreciation for what we can teach to, and learn from, each other.
c. Explore leadership tools through shared experiences.
2. **Understand the Intrinsic Link Between Self Development and Community Success**
a. Starting with self-development, we become better leaders. Better leaders help build organizational strength and effectiveness, thereby providing better service to the community which, in turn, makes the community stronger.
3. **Explore and Enhance Effective Leadership Qualities**
a. Foster adaptability, flexibility, and resiliency
b. Foster being participatory, inclusive, and self-aware
c. Emotional Intelligence
d. Further and develop core leadership competencies
**Navigation Tip**
In order to aid in navigating throughout the PDE please note that you can return to the front page by clicking on the back arrows after each article, or at the bottom of each page.
Another influence is the “role” we held in our family as we were growing up. We may have been the “parent” to our sibling or even to our parents. We may have been expected to be a good athlete or an excellent student. The roles we learned as a child profoundly and subconsciously influence our present behavior.
An additional influence is the set of beliefs that are attached to our behaviors. We may have been given anything we asked for. We may have had to work for everything we got. We may have learned there is not enough to go around. Our beliefs affect our behavior. Another influence is how we coped with our life experiences during our childhood. Often we bring the ways of coping that we developed in childhood or during our teenage years into our adult life. Sometimes these coping mechanisms are useful and sometimes they are no longer appropriate as an adult.
As a result of these influences, we develop behavior patterns and they become ingrained in us. So how do we change our behavior patterns - especially ones that may unconsciously be steering our current behaviors or resistance to change?
First, we list the behavior we want to change. Second, we list the beliefs about that behavior. Third, we list the feelings connected to the behavior. Fourth, we list the benefits – the gains – of giving it up or of replacing it with a new behavior. By listing the behavior(s), the beliefs about it, the emotions attached to it, and the benefits of changing it, we can create the desired change.
Recognizing that it's actually our behaviors (not our intentions) that make us a better leader, father, mother, brother, sister, or person is the first step. So when you go to your next LEAD session, consider walking away with the question, “How can I change my behavior, based upon what I learned (or was reminded of), so that I can be a better leader?”
Your thoughts and comments are always welcome.
For more information about LEAD, please contact Jason Bajor (BAT) at 630/454-2075 or firstname.lastname@example.org; Gail Cohen (ELG) at 847/931-5607 or email@example.com; Kathy Livernois (STC) at 630/377-4470 or firstname.lastname@example.org, or Jen Morrison, LEAD Coordinator, at 630/762-7090 or email@example.com
CURRENT LEAD COMMITTEE MEMBERS
LEAD: Jen Morrison, LEAD Coordinator, 630/762-7090, firstname.lastname@example.org
Batavia: Jason Bajor, Assistant City Administrator, 630/454-2075, email@example.com
Randy Deicke, Fire Chief, 630/454-2111, firstname.lastname@example.org
Elgin: Gail Cohen, Human Resources Director, 847/931-5607, email@example.com
Kyla Jacobsen, Water Director, 847/931-6160, firstname.lastname@example.org
Tom Migatz, Parks Maintenance Supervisor, 847/931-6136, email@example.com
Russ Matson, Public Safety Supervisor, 847/289-2576, firstname.lastname@example.org
St. Charles: Denice Brogan, Human Resources Generalist, 630/377-4415, email@example.com
Guy Hoffrage, Training Coordinator, 630/762-6946, firstname.lastname@example.org
Kathy Livernois, Human Resources Director, 630/377-4470, email@example.com
One assumption at the core of this blog is that military service—particularly service in the crucible of combat—is exceptionally effective at developing leaders. Why? It’s nurture, not nature.
First, in all services, military leadership qualities are formed in a progressive and sequential series of carefully planned training, educational, and experiential events—far more time-consuming and expensive than similar training in industry or government. Secondly, military leaders tend to hold high levels of responsibility and authority at low levels of our organizations. Finally, and perhaps most importantly, military leadership is based on a concept of duty, service, and self-sacrifice; we take an oath to that effect.
We view our obligations to followers as a moral responsibility, defining leadership as placing follower needs before those of the leader, and we teach this value priority to junior leaders. Our leadership extends to caring for the families of our soldiers, sailors, airmen, or marines, especially when service members are deployed. When serving in crisis conditions where leadership influences the physical well-being or survival of both the leader and the led—in extremis contexts—transactional sources of motivation (e.g. pay, rewards, or threat of punishment) become insufficient.
Why should a person be motivated by rewards when he might not live to enjoy them? Why would a person fear administrative punishment when compliance might lead to injury or death? Soldiers in such circumstances must be led in ways that inspire, rather than require, trust and confidence. When followers have trust and confidence in a charismatic leader, they are transformed into willing, rather than merely compliant, agents. In the lingo of leadership theorists, such influence is termed transformational leadership, and it is the dominant style of military leaders.
Contrast the military leader value set reflecting service to the one that currently exists in some US businesses. Are we likely to see business leaders placing the well-being of their shareholders and employees above their own? [On] February 4, 2009, in a swift response to public outrage, the Obama administration imposed a cap of $500,000 in pay for top executives at companies that receive large amounts of bailout money from the U.S. Government. From a military perspective, a half million dollars is a generous sum, more than double the compensation of a four star leader in charge of a theater
(continued on page 5)
of war. But the quantity of compensation isn’t as relevant as the message to followers that, when times were tough, the leader put his or her personal well-being ahead of theirs.
Such perceptions of a military leader in combat would render that leader mistrusted and ineffective in the eyes of soldiers forever. Why should business leaders expect anything else on the part of people desperate about the loss of their equity or employment or lifestyles?
The current economic environment, partly caused by a crisis of self-service leadership, has created belt-tightening reminiscent of a world war, with budgets slashed, travel funding restricted, training programs cut, personnel layoffs, and other draconian, cash-saving measures in place. CEOs [and other leaders] have to start leading like generals—even if that means living a lifestyle in common with their troops.
The best leadership — whether in peacetime or war — is borne as a conscientious obligation to serve. In many business environs it is difficult to inculcate a value set that makes leaders servants to their followers. In contrast, leaders who have operated in the crucibles common to military and other dangerous public service occupations tend to hold such values. Tie selflessness with the adaptive capacity, innovation, and flexibility demanded by dangerous contexts, and one can see the value of military leadership as a model for leaders in the private [and public] sector.
© Copyright 2009 by Harvard Business Review. Colonel Tom Kolditz is Professor and Head of the Department of Behavioral Sciences and Leadership at the US Military Academy at West Point, New York. Kolditz has served in an array of military tactical command and technical staff assignments worldwide, commanding through battalion level, and as a leadership and human resources policy analyst in the Pentagon. His most recent book is *In Extremis Leadership: Leading as if Your Life Depended on It*. For more of Col. Kolditz’ blog posts at Harvard Business Review, please click here: http://hbr.org/search/Colonel%20Tom%20Kolditz
The soldier’s heart, the soldier’s spirit, the soldier’s soul, are everything. Unless the soldier’s soul sustains him, he cannot be relied on and will fail himself and his country in the end. National strength lies only in the hearts and spirits of men. -- GENERAL GEORGE MARSHALL
Do you ever find yourself complaining about some aspect of your life -- family, work, health, etc. -- some aspect that you claim to be important to you? If so, are you spending more time complaining about that area of life than you are actually doing something about it? My bet is that blame and complain win the day more often than not.
Several years ago, John, someone whom I admire greatly, asked me to help him examine his life to assess how things could be going even better. Right there, John stood out amongst many. While he was intent on improving the quality of his life, he did not start with what was wrong. Instead, we took inventory of what was working well and then moved on to what could be working even better. Our coaching work then began to focus on steps he could take to build on what he already had.
**You Don’t Have to Be Sick to Get Better**
That may be a subtle distinction lost on many. John was fundamentally demonstrating a truth that often hides in plain sight -- you don’t have to be sick to get better. Too many of us in this country grew up with a deficit model of thinking, most likely from our schooling that has leaked into how we view life in general.
Deficit thinking often shows up in performance reviews at work -- after a few obligatory comments about things you did well, the review then turns its deficit-oriented head toward pointing out all the things that are somewhere between wrong and “not up to standard.”
You know the drill: you took that 50 word spelling test, got 44 right, and the test came back with a score on top in red ink -- MINUS 6. Is minus six the truth? Sure is, but how do you build on minus six? Could you have built on PLUS 44 instead? Same data point, yet one suggests being in the hole, and the other talks about how to improve on what’s already working.
Now don’t get me confused with one of those folks who think everyone should get a trophy just for showing up. We absolutely need to know both sides of the equation -- what’s working and what needs to be improved. However, too many of us have been drilled in the deficit approach, which leads to an approach to life that translates as “good enough never is.”
You can find deficit thinking in just about any aspect of life: what’s wrong with your company, your boss, society, politics, your spouse, your kids, or just about anything else that seems to be important. For this example, I’m going to focus on health, but you can easily substitute any of these other areas of life.
Is Your Health Important to You? Really?
Years ago, I was working with a nationally prominent physician who had a deep understanding of nutrition and the role that proper food choices play in building a solid platform to support real health and well-being. In our lengthy intake process, he pointed out another remarkable truth hiding in plain sight: “Many people come to me when something is wrong. They typically start by saying everything was just fine and then all of a sudden things went south.”
He told me that the “all of a sudden” theory was pure nonsense. Sure, the symptom may have shown up all of a sudden, but something began moving south a long time before the person finally noticed. Years of poor dietary choices, lack of exercise, and improper nutrition often lead to the day the symptom finally appears. He then told me we needed to get a better picture of where my body was relative to a number of nutritional markers. After a series of tests (blood, hair, and other stuff), he was able to show me graphically where my body was doing okay nutritionally, and where it was a bit short in nutritional supply.
This may sound an awful lot like deficit thinking, but instead he was showing me the PLUS 44 areas and where we need to build for further improvement. He was able to design a nutrition-based health strengthening program for me, using what I already did well, and adding strength to areas that could be improved. Rather than trying to fix a symptom, he worked with me to build a complete system of health, focusing on a balanced core platform designed to build and maintain health across a broad spectrum.
What I learned in the process was astounding -- for example, how an imbalance in one area of my health and nutrition led me to make other poor choices, which further undermined my health. Although I had always thought that my health was a priority, reality showed me otherwise.
How to Assess Your Real Priorities
This is where what I learned from John in that coaching session comes into play. If you have been following along, looking at areas of your life where you may be currently stuck in blame and complain, be it relationships, health, money, career, etc., then you can apply this profoundly simple assessment tool I learned from John to discover where your priorities really lie.
John’s advice? If you want to know the truth about your priorities, just take a look at your checkbook (or credit card bills) and your calendar. Where are you spending your money? Where are you spending your time? No matter how much your mind may want to argue with this simple assessment technique, sooner or later you’re going to have to cop to the truth of that old adage and “put your money (or time) where your mouth is.”
If you don’t have a calendar or a job that requires a calendar, start one! Nothing complicated -- any simple calendar will do, preferably something you can carry with you, from a paper-based pocket calendar to your iPhone or PDA. As you go through your day, all of your day -- not just time spent at work -- simply jot down where you are spending time. Be sure to include your weekends.
This will give you an independent view of what you consider to be important -- after all, it’s your time and you’re the one choosing to spend it that way. Yeah, I know, there’s all that stuff about your boss, your job, your kids and all the rest. However,
just take your health as an example: What choices are you making about exercise, diet and nutrition? If you’re not spending much time making conscious choices, then you may well wind up telling your physician one day that “everything was just fine and then suddenly…”
The same applies to the quality of your relationships with your spouse, friends and kids. Where are you spending your time? Let’s get real here: Sitting in front of the TV with your kids or partner isn’t the same as spending time with them deepening your communication and relationship, now is it?
What aspects of your life would you like to improve? What choices can you make starting today to elevate these areas to real priorities that you can measure with your calendar or your checkbook?
© 2010 Russell Bishop is an educational psychologist, author, executive coach, management consultant, and Editor-at-Large for Huffington Post. You can learn more about his work by visiting his website at www.RussellBishop.com
The LEADER From the Committee
Can’t Find the Time? Don’t Kid Yourself.
The first quarter of this year brought another enlightening learning event to LEAD in the form of time management guru Dr. Donald E. Wetmore, author of several books -- Beat the Clock, Organizing Your Life, and The Productivity Handbook -- and founder of the Productivity Institute.
This second installment of speaker events for LEAD focused on how to get four more hours out of our day in a presentation entitled, The 28-Hour Day. Attendees were provided with a broader philosophy regarding the concepts of time management. For example, some of what we learned was that:
- **Time management is all about choices**, and we have a personal responsibility for those choices. Therefore, saying you “didn’t have time” is a lie – you actually chose to spend your time elsewhere. *Life is a series of choices.*
- **Time management is really about taking action now** to make sure we have enough time down the line to live life to its fullest until the very end. We can take time for health and fitness now, or we can deal with it later in life.
- **Either you will be in control of your time, or everyone else will.** Wouldn’t you rather write your own life story than have everyone else write it for you?
- **There is a finite number of hours in the day/week/month/year.** How are you going to spend that time? Budget that time accordingly -- daily planning is an investment in time, not an expense.
- **If something is “off” in your life, chances are you have been neglecting one or more of the 7 Vital Areas of Life:** Health, Family, Financial, Intellectual, Social, Professional, and Spiritual. Take a look at your time management budget, and realign it with your goals. Goals without action steps are just wishes.
- **No matter your position or role, you really DO have the ability to control your time.** Even if your priorities are typically dictated by other people or events, or are constantly changing, you still have a voice in how you prioritize your time. Not sure how? Keep an interruptions log and a crisis management log, and then speak up. Explain your time constraints, request direction on priorities, and/or make suggestions for improvement. You are most likely the best resource on how to maximize your efficiency for the group. Don’t keep it to yourself.
Besides the new perspectives on time management, Dr. Wetmore also provided attendees with a plethora of tips and tools they could use to help create more hours in their days. Some of them included:
- **Create a Crisis Management Log** to determine which scenarios could have been avoided with better planning or systems in place. Life is not a sprint – it’s a marathon.
- **Try using an Interruptions Log** for a few weeks to see who is interrupting you, when it happens, and whether it was worth your time to be interrupted. You can recapture an hour a day just by managing interruptions.
- **Plan your day and work your plan.** Take time at the end of every day (home and work) to plan what you want to accomplish the next day. Knowing exactly where you’re headed saves time the next day. Successful people are on their chosen path.
- **Over-plan your day to create a healthy level of pressure** – but remember – your productivity is based on what you get done, not on what you don’t get done. Instead of “I want to get it all done,” think: “I want to get the most important things done.” Celebrate your achievements and stop beating yourself up for not getting it all done.
- **Clean your work area!** A messy work area can result in 1.5 hours a day of distraction.
This was an information-packed session, attended by 162 employees from the three cities, which represented a 24% increase in attendance from our first LEAD speaker event held in the last quarter of 2011. Survey responses were insightful and mixed. Employees looking for a specific tool-based presentation appeared to be somewhat disappointed, while those who were interested in the overall concept of time management were enlightened. Dr. Wetmore concentrated his verbal presentation on the theoretical and philosophical aspects of time management and work/life balance, while he provided details on specific tools in the workbook so staff could read up on them at a later time.
If you would like an additional workbook, please contact Jen Morrison, LEAD Coordinator, at firstname.lastname@example.org or 630/762-7090. Or, for more information about Time Management, please visit Dr. Wetmore’s website at http://www.balancetime.com for a wealth of free articles about how to gain better control of your time.
We are pleased to welcome new staff and acknowledge achievements and milestones of existing staff since our last issue of *The LEADER* …
**Congratulations to:**
**Tom Bluett**
(St. Charles Public Works, Electric Division) on completing his International Brotherhood of Electrical Workers (IBEW) apprentice program and being promoted to Lineman.
**Brooks Boyce**
(St. Charles Police Department) on completing his Master’s Degree in Criminal Justice Administration from Columbia College.
**Mike Grandt, Don Henry, and Michael Mertes**
(St. Charles Public Works and Economic Development Departments) on placing 9th overall in the 13th Annual Trivia Bee sponsored by Literacy Volunteers Fox Valley.
**Steve Huffman**
(St. Charles Police Department) on completing his Bachelor of Science in Criminal Justice Administration from Columbia Southern University.
**The City of St. Charles Community Development Department**
… on receiving the Chaddick Institute Municipal Development Award for the First Street redevelopment project.
**The City of St. Charles Public Works Department**
… on receiving:
- The International Concrete Repair Institute Award of Excellence for the repair of historic structures;
- FEMA class rank improvement from Class 8 to Class 5;
- American Public Works Association (APWA) Award of Appreciation from the Education Committee for use of the Public Works Facility;
- APWA Project of the Year Award – Historic Restoration/Preservation for Projects less than $5 million for the St. Charles Municipal Center River Wall and Plaza;
- APWA Project of the Year Award – Environmental Projects $5 million to $25 million for the radium removal project at Wells No. 3 & 4.
**The City of St. Charles Community Development Department**
… on being awarded the George Burke Safety Award from the Illinois Water Environment Association (IWEA) for 740 days without an accident.
From the American Heritage Dictionary of Idioms:
**Food for thought.** An idea or issue to ponder, as in ‘That interesting suggestion of yours has given us food for thought.’ This metaphoric phrase, transferring the idea of digestion from the stomach to mulling something over in the mind, dates from the late 1800s, although the idea was also expressed somewhat differently at least three centuries earlier.
We all know that honest self-awareness provides an opportunity to round out our technical skill sets by learning how to deal with others through building stronger relationships, resolving conflict, encouraging creativity, and developing environments of trust. Yet we don’t always take a breather to think about some of the challenges we face on a day-to-day basis – in the workplace, at home, or in our community.
This issue’s question: **When was the last time you did something for the first time?**
The simplicity of this question is profound in the focus it requires to answer. In every aspect of life, we can use that question to trigger thinking. It helps to evaluate stagnation; it helps to trigger creativity; and it enlightens areas in need of change.
Here are some suggestions to get you and/or your group thinking:
1. Identify one area where change would re-energize you and lead to better results.
2. What habits, behaviors, beliefs or activities is it time to discard or modify?
3. What do you need to DO today that would mix things up a bit?
4. Who can you enlist to encourage you and hold you accountable?
Material courtesy of Staver Connect at https://thestavergroup.com
Please consider taking a few minutes to think about how you might answer that question. We also encourage you to consider bringing up the question at an upcoming staff, family, or club meeting to see how others would answer the question. The intended result of doing so would be to provide a forum to begin (or further) open and honest dialogue in your group. Try it, and let us know what happens. Contact Jen Morrison with your feedback, and/or send in some suggestions for future discussion- or thought-provoking questions.
**The LEADER Committee News**
**Save the Dates**
» **The Case for Servant Leadership**
Leadership Book Club Breakfast
Dates -- May 4, 17, and 22, 2012
*There’s still time to read this very short book!*
Click here for more information!
» **What Do They See When They See You Coming**
Speaker Event -- featuring Stephen M. Gower
Dates -- September 27 and October 16, 2012
For more information about the upcoming schedule, contact Jen Morrison, LEAD Coordinator, at 630/762-7090 or email@example.com.
**New Committee Members**
The LEAD Committee is pleased to welcome two new Curriculum Planning Team members for 2012.
**Guy Hoffrage**, Police Department Training Coordinator, St. Charles, will be bringing an invaluable perspective from the Police Department from an officer’s point of view.
**Russ Matson**, Public Safety Supervisor, Elgin, brings an enthusiastic thirst for knowledge and leadership development, while also providing a civilian perspective from the Police Department.
Feel free to contact Guy, Russ, or any other of our Committee members with any questions, comments, speaker recommendations, or book club suggestions!
Trivia Contest Goes Old Country Italian
Congratulations to Mike DeBrocke from Elgin. He won the last issue’s contest and will be enjoying a $25 discount at The Office, one of downtown St. Charles’ trendy restaurants!
For your chance to win this issue’s prize, a $25 gift certificate to Aliano’s Restaurant, one of downtown Batavia’s newer restaurant destinations featuring old world Italian cuisine, you’ll have to answer the following question from this issue of *The Leader*:
**Name the seven vital areas of life as defined by Dr. Don Wetmore.**
Names of all who submit the correct answer will be placed in something vessel-like and one name will be drawn as the winner. So, get your correct answers to Jen Morrison, LEAD Coordinator, via telephone/voice mail 630/762-7090 or e-mail at firstname.lastname@example.org no later than June 1, 2012. The drawing will be held on June 4, 2012. Don’t forget, you must play to win!
---
*It is not good enough for things to be planned - they still have to be done; for the intention to become a reality, energy has to be launched into operation.*
-- WALT KELLY
---
*The Leader* is a publication of LEAD, a collaborative learning initiative developed by the cities of Batavia, Elgin and St. Charles, Illinois.
www.strongercommunity.net
contact:
Jen Morrison 630/762-7090
© 2012 by the cities of Batavia, Elgin and St. Charles, Illinois
Interested in contributing an article to *The Leader*? Have you been inspired by a particular leadership book you’d like to share? Do you have any article or feature suggestions for *The Leader*? Contact Jen Morrison, LEAD Coordinator with questions, suggestions, or comments. |
More than 1,811 Daily Memoranda were issued from 2006 to the end of 2016, with 21,732 pages of Business Clips covering all African countries, the European Institutions and the African Union, as well as the Bretton Woods Institutions. The subscription is free of charge and is sponsored by various development organisations and corporations.
The Memorandum is issued daily, with the sole purpose of providing updated basic business and economic information on Africa to more than 4,000 European companies, as well as their business partners in Africa.
Should a reader require a copy of the Memoranda, please address the request to fernando.matos.rosa@sapo or email@example.com.
11 YEARS OF UNINTERRUPTED PUBLICATION
ACP GROUP AIMS FOR IMPACT AT UN CLIMATE CHANGE CONFERENCE COP23, see page 18
SUMMARY
Ministerial Conference on Mediterranean Fisheries in Malta ................................................................. Page 2
World Bank announces support for Mozambique’s state budget will resume ........................................ Page 2
European Commission secures 10-year pledge to save Mediterranean fish stocks ............................... Page 3
China and São Tomé and Principe join hands for a brighter future ..................................................... Page 4
Nigeria is set to overtake South Africa as biggest African market by 2022 ........................................... Page 7
EIB signs Kshs 10.45 billion support for East African entrepreneurs .................................................. Page 8
Angolan entrepreneurs create consultants association ........................................................................ Page 9
EIB commitment in support of renewable energy: EUR 115 million financing for a windfarm in Egypt .... Page 9
Vale concludes sale of assets in Mozambique to Japan’s Mitsui ......................................................... Page 10
EIB signs extensive support for Kenyan energy and transport ............................................................ Page 10
Sudan introduces new regulations for gold market ............................................................................. Page 11
Mastercard to Support Development of Ghana's Digital Payment Ecosystem ...................................... Page 12
West 'gagging' S/Africa poultry industry ............................................................................................. Page 13
Algeria set to tender contracts to build 4GW of solar farms ................................................................. Page 13
Result of the audit of Mozambique’s debts should be released on 28 April ....................................... Page 14
Nigeria to begin work on 38km Lagos bridge this year ....................................................................... Page 14
Germany supports Mozambique with donation of 157.5 million euros ............................................. Page 15
Angola Has Yet To Develop Tourism Sector Attractive To International Visitors ............................. Page 16
MINISTERIAL CONFERENCE ON MEDITERRANEAN FISHERIES IN MALTA
The European Commission is organising a Ministerial Conference on Mediterranean fisheries in Malta on 29-30 March 2017. This event is a significant political push to address the alarming state of stocks and its impact on the industry and coastal communities.
Thanks to the preparatory work of Commissioner Vella, the European Commission has secured high level representation from 22 of the 23 countries that share a Mediterranean shoreline. With the political voices from the northern and southern coast present, there is a real window of opportunity.
The Conference will culminate with the signature of the Malta MedFish4Ever Declaration; a political declaration to be signed by all Mediterranean Ministers, thus providing political approval for this unprecedented and important process.
The background:
After the political initiative launched by the EU in Catania (February 2016) and in Brussels (April 2016), the EU is now taking a step further by proposing a ministerial declaration of all riparian countries on sustainable fisheries in the Mediterranean, following up on the Declaration adopted in Venice in 2003. The declaration will set out the work in the area for the next 10 years on critical issues such as small-scale fisheries, the fight against Illegal, Unreported and Unregulated (IUU) fishing, data collection, scientific advice and reliable conservation measures.
The event:
Wednesday 29 March:
10:00 - 13:00 Round table event - Meeting between Commissioner Vella and stakeholders on small scale fisheries (Xara Lodge, Sqaq Tac Cawla, Triq il-Belt, L/O Rabat - RBT 5320)
19:00 – 22:00 Launch event of the conference and welcome reception for delegations and stakeholders (Verdala Palace, Buskett Gardens, in Siggiewi)
Thursday 30 March: Ministerial Conference (Grand Master’s Palace, Palace Square, Valletta)
10:00 – 13:00 Plenary conference
13:00 - 14:00 Signing ceremony of the Ministerial Conference Malta MedFish4Ever
Mediterranean fish stocks: the General Fisheries Commission for the Mediterranean details its strategy for the next 4 years (27/09/2016)
Catania: Putting an end to overfishing in the Mediterranean (09/02/2016)
WORLD BANK ANNOUNCES SUPPORT FOR MOZAMBIQUE’S STATE BUDGET WILL RESUME
The World Bank will resume support to Mozambique’s state budget this year, and expects to provide US$2 billion over the next five years, the institution’s representative announced, as quoted by Mozambican daily newspaper Notícias.
Mark Lundell also told the newspaper the World Bank’s policy favors supporting the state budget, and the institution has a portfolio of 25 projects in 17 strategic areas, 11 of which related to development priorities, to carry out over five years.
Lundell said the World Bank’s assistance to Mozambique contributed to robust economic growth, though he admitted that it did not have the expected impact on improving the living conditions of the Mozambican population, so the next aid programme will focus on areas with an impact on poverty reduction.
The World Bank suspended financial cooperation with Mozambique after the discovery in April 2016 of undisclosed loans of over US$1 billion contracted by the former Mozambican government between 2013 and 2014, without the knowledge of the country’s parliament and international donors.
The International Monetary Fund (IMF) and the major donors to the Mozambican state budget also froze their aid to the country, and asked for an international audit of the public debt before resuming support. The results of the audit are due to be delivered at the end of April. (30-03-2017)
EUROPEAN COMMISSION SECURES 10-YEAR PLEDGE TO SAVE MEDITERRANEAN FISH STOCKS
Following months of negotiations, the European Commission has secured today a 10-year pledge to save the Mediterranean fish stocks and protect the region’s ecological and economic wealth.
The Malta MedFish4Ever Declaration, a practical example of EU’s successful neighbourhood policy, sets out a detailed work programme for the next 10 years, based on ambitious but realistic targets. Over 300 000 persons are directly employed on fishing vessels in the Mediterranean, whilst many more indirect jobs depend on the sector. The Declaration was signed by Mediterranean ministerial representatives from both northern and southern coastlines, a signature that gives political ownership to an issue that was up to now managed at technical level. It is the result of a European Commission-led process that started in Catania, Sicily in February 2016.
Commissioner Karmenu Vella, responsible for the Environment, Fisheries and Maritime Affairs, said: “Today we are making history. In signing the Malta MedFish4Ever Declaration, we are affirming our political will to deliver tangible action: on fisheries and other activities that have an impact on fisheries resources, on the blue economy, on social inclusion, and on solidarity between the northern and southern shores of the Mediterranean. I hope that this declaration will come to be seen as a turning point – for a bright future for fishermen, coastal communities and fishing resources alike.”
Commitments made by the signatories include:
- By 2020, ensure that all key Mediterranean stocks are subject to adequate data collection and scientifically assessed on a regular basis. In particular small-scale fishermen are to acquire an increased role in collecting the necessary data to reinforce scientific knowledge;
- Establish multi-annual management plans for all key fisheries. On its part, the Commission has already initiated this process with its proposal for a multi-annual fisheries plan for small pelagic stocks in the Adriatic;
- Eliminate illegal fishing by 2020 by ensuring that all States have the legal framework and the necessary human and technical capabilities to meet their control and inspection responsibilities. The General Fisheries Commission for the Mediterranean (GFCM) will lead the development of national control and sanctioning systems;
- Support sustainable small-scale fisheries and aquaculture by streamlining funding schemes for local projects such as fleet upgrade with low-impact techniques and fishing gear, social inclusion and the contribution of fishermen to environmental protection.
The effective implementation of the declaration will be made possible by involving in the process fishers – men and women –, coastal communities, civil society, industrial, small-scale, artisanal and recreational fisheries, as well as the UN Food and Agriculture Organisation and GFCM. Today’s declaration is another contribution to the EU’s international commitments under the Sustainable Development Goals (Goal 14: ‘Conserve and sustainably use the ocean, seas and marine resources for sustainable development’).
Background
The Mediterranean Sea is a unique sea basin, characterised by its long coastline and a fishing sector providing jobs for over 300 000 people. 80% of its fleet belongs to small-scale fishermen (with vessels under 10m long), who fish a quarter of the total catches. These jobs are at risk as fish stocks in the Mediterranean are shrinking: about 90% of assessed stocks are over-exploited. Food security, livelihoods, and regional stability and security are all under threat.
Today’s declaration is the outcome of the so-called Catania process, launched by Commissioner Vella in February last year and entailing fruitful cooperation with stakeholders, the GFCM Secretariat, EU Member States and third countries. Important milestones include a first ministerial conference of Mediterranean fisheries ministers in April 2016, the GFCM annual session in June 2016, and the GFCM inter-sessional meeting in September 2016.
The following parties were represented at the Malta MedFish4Ever Ministerial Conference: European Commission, 8 Member States (Spain, France, Italy, Malta, Slovenia, Croatia, Greece, Cyprus), 7 third countries (Morocco, Algeria, Tunisia, Egypt, Turkey, Albania, Montenegro), FAO, the General Fisheries Commission for the Mediterranean, the European Parliament, the EU Mediterranean Advisory Council. (EC 30-03-2017)
Malta MedFish4Ever Declaration
#MedFish4Ever campaign
CHINA AND SÃO TOMÉ AND PRÍNCIPE JOIN HANDS FOR A BRIGHTER FUTURE
Patrice Emery Trovoada is already well-known beyond the borders of São Tomé and Príncipe, well into the region of Central Africa. In just a few weeks, his renown will increase significantly the world over as he arrives in Beijing to sign a historic co-operation agreement between the Democratic Republic of São Tomé and Príncipe and the People’s Republic of China.
The agreement will mark the return of São Tomé and Príncipe to the fold of the Sino-Lusophone family, as well as the nation’s formal entry into Forum Macau.
**Prime Minister, what are the main unexplored potentials of São Tomé and Príncipe today?**
**Patrice Trovoada (PT):** I think among the most unexplored is the fishing sector, which until today, even at a national level, has been limited to small-scale fishing, even though we control an immense maritime territory bigger than that of many countries, including Cameroon.
There is the possibility of advancing our industrial fishing, especially by establishing a fish processing industry. This is something that has not been explored much but could be a great boon to our domestic economy that would not require much initial investment. We already own the maritime territory and resources; what we need is a port infrastructure capable of receiving and servicing modern fishing boats as well as a land infrastructure capable of handling product exports.
Then there are other sectors with great potential that are already in place but require some support in terms of infrastructure in order to expand. For example, tourism: an airport that could service direct flights from Asia, the Americas and even Northern Europe – markets with tourists looking to take vacations abroad – would really help the tourism sector grow exponentially. Aerial connectivity is fundamental; it could catapult regional weekend tourism growth into the double digits. There’s Accra in Ghana, Lagos in Nigeria – a city with 20 million inhabitants, one of only five cities in all of Africa to have over a million residents – Luanda in Angola, Cairo in Egypt and Cape Town and Johannesburg in South Africa.
These are just a few of the nearby major urban hubs: two of them, Luanda and Lagos, are less than two hours away by flight. At the sub-regional level, the major concern is air traffic safety, and when it comes to intercontinental tourism, the major obstacle is having a modern and capable airport. We could potentially have very strong growth in the tourism sector, which would substantially improve the sustainability of São Tomé’s economy.
Our geographical position is very conducive to supporting both airport and port logistics and creating moderate-sized infrastructures for service and transport. What we have to be careful of, with industrial growth, is maintaining the nature of the islands and our beautiful beaches. We also need to ensure safe transport of people and goods.
There may also be a wealth of untapped potential, although to date this has been mainly speculation, in natural oil and gas reserves. The African continent is rich in oil and gas – from Senegal to Angola, it seems there is no spot without one or the other – so it would seem anomalous if there was not a bit here! (laughs) It is still speculative, but I believe that we could be producing oil within the next five or six years.
**And what are some of the difficulties and challenges that must be overcome?**
PT: There are several types of difficulties. Lack of infrastructure is first and foremost, but I am convinced that our advocacy has helped our partners realise that financing infrastructure is a priority. It is simply a question of the appropriate business model: the interest rate, payback capability, etc., but we are on the right track.
Another difficulty is available human resources. We are an extremely young country. Our education system is good overall, but it needs to be polished. We need specialised labour; more specifically, we need to adapt training programmes to the labour market as well as to regional economic reality. Currently, our jurists all speak Portuguese; none speak English or French. We own maritime territory, but nobody specialises in maritime economics or maritime law. We are an island, yet we have few sailors, sea captains, naval repair engineers, etc. So it is necessary to consider the market when guiding human resources training.
This is a major challenge that could be compensated with foreign labour, but that would have to be accompanied by policies preserving the interests of our nationals and maintaining the identity of São Tomé.
These are not insurmountable challenges. The technological challenge is easier to solve; the issue of human resources is more complicated. It requires a mobilisation of our citizens to define a collective vision for our development. Vision cannot be just dreaming; it must include a component of realism and a reasonable timeframe. If we only dream, nothing will happen.
**How does the decision to resume the relationship with the People’s Republic of China factor into these challenges?**
PT: One cannot play a role in the provision of services and logistics or participate in the global economy and international trade while excluding the largest bilateral partner on the African continent. Thus, the development vision of São Tomé and Príncipe and the well-being of our people necessitate resuming economic, political, cultural and diplomatic relations with China.
Another important point is that long-term policies and political and societal stability must have a legal basis. After 20 years, we recognise that, in terms of international law and among the international community, there is a growing sense that there is only one China, which is represented by the government of the People’s Republic of China.
Our ambition for our country and for our people is exercised alongside the humility and self-awareness of our size and our potential, and above all, we must correctly align with what is commonly recognised as international law. We accept, in general, universally accepted values regarding environmental preservation and fundamental rights and freedoms and adopt a policy of non-intervention when it comes to the internal affairs of other countries.
We recognise that the Taiwan issue is an internal matter of the People’s Republic of China and support solutions of harmony rather than encourage situations of friction and rupture. We continue our friendship with the people of Taiwan without question. We also understand that under a one-China policy, the values of all native residents of China are preserved.
Our shift in allegiance belongs to the political domain: twenty years ago, the reality was very different. Today, the world is different and the options are different. Taiwan has failed to gain official recognition in the eyes of international law, while China has made great progress in all areas – its economy, its understanding of human rights – including ratification of the Paris Treaty. All this has factored into our decision. We are pro-globalisation because a country such as ours relies on trade to flourish. China will help open us up to many opportunities.
**What can you tell us about the content of the agreement to be signed between the two countries?**
PT: Obviously we want this to be a relationship of mutual advantage, with both governments cooperating for respective economic growth. China does not throw money out the window: it invests in its interests, and there definitely are many projects of mutual interest. We need time to assess our capacity to take on debt as well as our capacity to supply and support these projects – the materials, workers, infrastructure and equipment required. There is also detailed technical-financial cross-analysis that needs to be done.
We have nearly completed this step, after which we will be able to detail our infrastructural collaborations.
Given China’s access to funding, it will be Chinese companies carrying out the work, but we also have to repay China, so we need to discuss how to do that while still optimising our domestic economy to benefit our people. But I want to emphasise that neither government wants to gift white elephants. These investments will generate revenue, which will in turn repay them. They are well-studied, well-financed and well-prepared. Their internal rate of return is on par with that used by private investors, ensuring that these investments are repaid as the economy simultaneously grows.
This partnership will provide political, diplomatic and geo-strategic advantages for both parties, but I think most importantly, there is great potential for mutual profit. That is the only way to ensure that the flow of investment and credit continues. We have a very clear idea of what funding is needed over the next 30 years, but for that to happen, there needs to be mutual confidence that the projects are sound and that they have financial backing. Hopefully they will inspire our other partners to take the plunge (laughs) and invest more in this country. We are banking on simultaneously diversifying both our economy as well as our foreign policy.
**What kind of partnership do you expect with China?**
PT: I am convinced that this new partnership will bring many positive outcomes; however, our co-operation may not be the “classic” model that China has with many other African nations.
Today, there are roads, hospitals, schools, public buildings, water and energy infrastructure all over the African continent built by Chinese companies. But predominantly, this has been in countries with vast oil, gas and mineral reserves, which is not the case with São Tomé.
We are a small but well-situated nation with many appealing factors in our favour: we are streamlining our visa application process for visitors and lowering taxes, and we are pro-reform, pro-business and pro-trade, to name just a few. We are not interested in what has become the standard model for Sino-African co-operation. We envision a truly long-term intellectual partnership upon which to build a platform of understanding. I am convinced that there is much to do together.
As I see it, there are two African continents: the first is dominated by capitalistic international mega-companies like Total S.A., Shell Oil Company, Sinopec Limited and other Chinese companies. But there is also a second Africa, one which in 30 years will have 2 billion eager consumers. Hopefully, by then, the standard of living will have increased, poverty will have decreased, and a growing labour force will entice China to relocate some of its major industries, as in the case of Ethiopia, which has been transformed into a hub for the production of footwear. It is with this perspective that we seek this long-term partnership.
I am quite confident that this co-operation will transform São Tomé and Príncipe. But it’s a matter of expectations. There are people who may be waiting for castles, but we will not build castles or palaces (laughs), nor will we offer cars to every government official. We will, with China’s help, build up our infrastructure, thus creating employment and a good business climate, which will attract more companies here, which will again create more jobs. As household income rises, people will be able to send their children to school. It will be a process but will provide a basis for a more independent, sustainable, tranquil and optimistic future.
One issue we are currently analysing is inflation. When an economy “overheats” from an influx of investments, how will we control inflation? Over the next two years, we have to control inflation to ensure that our people do not lose purchasing power and that wages remain competitive.
The government’s plan to maintain political and social stability is to call on the population to remain calm and confident. Opposition parties have been stirring the pot and trying to complicate the situation. They are playing their part in politics, and their time will come, but for now, we have to put aside our differences and partake in a climate of responsibility, because at the end of the day, we all share a common goal and want the best for our country. I often say that my political base consists of the poor. Sixty-five per cent of this country’s citizens live in poverty. We have to make the fight against poverty a priority. Economic development and private capital inflow will only happen if the State provides basic infrastructure.
**Are you going to China in April to sign the agreement?**
PT: We have established a timeframe to finalise the agreement before the end of April, and probably at that time I will visit China with great pleasure, but we will see.
**When you talk about the port and the airport, you underline the issue of profitability. Does that mean Chinese groups could use São Tomé as a platform for distribution of goods and services to the region?**
PT: Yes. The port and the airport are six kilometres apart. We want to connect them via a highway and build warehouses and offices, etc., along this corridor. It would be the ideal place for Chinese companies to facilitate re-export activities, as long as some value is added locally.
This would be set up in phases. In the first phase, we need to see how things evolve at the port and analyse business growth. Given São Tomé and Príncipe’s geographic situation, the port ought to be highly competitive with others on the continent. The port can accommodate vessels with drafts of up to 14.5 metres, a depth unusual among African ports. So this first phase, in which risk is controlled, allows for the establishment of an industrial fishing operation as well as a transhipment operation, where arriving goods are re-exported regionally. Additionally, should the country enter oil-related industries, it is ideal to have the airport and port in close proximity. Having the two infrastructures side by side would facilitate efficiency in exporting relevant goods by sea, by cargo plane, by speed boat to the oil rigs, etc.
**These kinds of agreements with China tend to include a training component, given the capacity of Chinese universities. Has a training or educational exchange been established?**
PT: This year, we have already sent about 90 university students to China. We also want to promote short-term but in-depth training in various fields: media, public and private management in different sectors, building and factory maintenance, defense, security, traffic policing, non-profit agencies – these are areas in which we particularly need qualified workers.
Collaborating with a small country like ours, the opportunity costs to China are minimal. When, for example, China invests US$200 million in a massive road project in the middle of the forest, no one sees that road. But invest US$200 million in São Tomé and Príncipe, and you could see the changes it brings even if you were on the moon!
We must prove that this partnership can indeed be a success story with regards to transparency and maximum impact for the people of our country and the businesses of theirs. I am convinced we’re going to make it happen.
**Entrance to the Forum Macau will open the door to China’s investment funds, right?**
PT: Exactly. We have expressed to the Chinese government that we wish to utilise all existing mechanisms available for economic and human development. So it is true that we see Forum Macau as a major bonus and means by which we may mobilise financial investment to aid in our country’s growth. Forum Macau is unique in that it allows us to interact with others in the Portuguese-speaking world, to share knowledge and learn from each other’s experiences.
I do not believe that failures perpetuate themselves. It’s the success stories that repeat themselves. Business ventures can only create potential opportunity; how they turn out – good or bad – depends entirely on the effort and dedication put forth. In São Tomé and Príncipe, a million-dollar business or investment could be a great thing.
It is not enough to dream big, nor is it enough just to have an abundance of resources. Sometimes it is better to leave something well enough alone than take a chance and ruin it. Success takes partnerships and collaboration, working together with people who have the know-how and skills. This is our attitude, how we want to approach our future: partnerships, partnerships, partnerships, openness, openness, openness, so that we can build a sustainable economy. Ultimately, the end goal is not growth but happy citizens and higher standards of living for all. (Macao Magazine 30-03-2017)
NIGERIA IS SET TO OVERTAKE SOUTH AFRICA AS BIGGEST AFRICAN MARKET BY 2022
With ongoing political and policy uncertainty in South Africa, Nigeria is set to overtake SA as the biggest market in Africa by 2022. According to the 2017 African Business Outlook Survey, the top three markets were the sub-Saharan countries with the three largest economies – SA, Nigeria and Kenya – with 91% of respondents having business based there. While these markets were expected to remain the top three over the next six years, respondents expected Nigeria to emerge as their single most important market by 2022.
The survey compiled by The Economist Corporate Network (ECN) is based on responses from 150 CEOs who have Africa-based commercial operations. The report looks at current sentiment in the region and future prospects.
ECN Africa director Herman Warren, who authored the report, said: "Nigeria led in 2015 but the recession in 2016 knocked it. Nigeria is expected to make a big come-back and grow faster than SA."
The survey shows African governments are trying to attract more investment and become less reliant on low value-added commodity exports by diversifying their economies. The fall in commodity prices affected the region’s rate of economic growth: 2016 was the first year in more than a decade that Africa’s economic growth (1.4%) did not exceed the rate of global GDP growth (2.2%).
In spite of slower economic growth, compared with the rest of the world, 63% of respondents indicated that their firms achieved similar or higher margins from Africa-based operations in 2016, with East Africa-based operations singled out as the most profitable.
Things are looking up in 2017 with respondents expecting their firms’ profit margins to improve across the board. Warren said some of the biggest trends to emerge were challenges businesses faced. Despite the prospects for growth, operating in the region still poses major obstacles. Respondents cited regulatory issues, macro-economics, currencies and a skills deficit as the biggest concerns.
“Talent featured as a major challenge. The feeling is that we need to create a talent pool by developing curriculums to create a skill set,” the report notes.
The report also outlines specific challenges for business in SA. While SA is expected to grow slightly faster in 2017, Warren says the overall pace of expansion remains below potential. “Concerns around SA were raised. Political uncertainty, labour rigidity and ambiguous policy played a huge factor and have contributed to the dampened economic outlook.”
In the report, Warren says: “SA is likely to remain a key market for their firms for at least the next five years. This may reflect SA’s importance for many companies as a springboard into the rest of the region.” (BD 30-03-2017)
EIB SIGNS KSHS 10.45 BILLION SUPPORT FOR EAST AFRICAN ENTREPRENEURS
The European Investment Bank this morning signed two new credit lines for East Africa for a total of EUR 95 million (Kshs 10.45 billion), to be made available through Equity Bank and HFC Limited to support smaller local projects in Kenya, Tanzania, DRC and Uganda.
EIB Vice President Pim van Ballekom, responsible for operations in East Africa, commented: “The credit lines signed today will not only benefit people in Kenya, but are meant for people in neighbouring countries as well. The EIB is committed to supporting Kenyan Banks in providing credit to the young and growing population in the region. Kenya is increasingly becoming a hub for the region on many levels and we as a Bank must look at this from a very basic point of view: there is a young and growing population with enormous potential, you need credit to support that momentum.”
The EIB signed a EUR 75 million (Kshs 8.25 billion) credit line with Equity Bank, under which funds are earmarked for three subsidiaries: EUR 36 million (Kshs 3.96 billion) for Equity Tanzania, EUR 20 million (Kshs 2.2 billion) for Procredit DRC and EUR 19 million (Kshs 2.09 billion) available through Equity Uganda. The on-lending will be available in USD or local currencies with the objective of contributing to job creation and poverty reduction. In addition, Equity Group will benefit from a EUR 2m (Kshs 220 million) technical assistance program funded by the EIB to support its strategy of transforming branches into SME business centres.
Equity Group Managing Director & CEO Dr James Mwangi said: “With this facility of Kshs 8.25 billion (EUR 75 million) we will be in a position to support up to 1,000 regional companies with an average loan of nearly Kshs 10 million each, thus helping local entrepreneurs develop and compete at the regional level, furthering integration and cross-border trade.”
Next to this, a EUR 20 million (Kshs 2.2 billion) credit line under the EIB’s East and Central Africa Private Enterprise Finance facility was signed with HFC Limited. This credit line will support HFC in providing much-needed longer-term financing to private enterprises and commercially operated public sector entities in productive sectors in Kenya, in line with EU and national development priorities. In addition, HFC will benefit from an EIB-funded technical assistance programme aimed at strengthening capacity in line with its strategy.
“I am proud to note that the success of the initial funding by EIB has now brought more opportunities, and we are happy to be recipients of another 20 million euros, which is undoubtedly an endorsement of the impact HFC is having on the SME sector. This new funding will be channelled towards financing the working capital and expansion of our growing SME customer base,” said Sam Waweru, HFC Managing Director.
Since September 2014, the credit lines in the region are supported by a EUR 5 million (Kshs 550 million) technical assistance (TA) programme to support financial intermediaries and SMEs over a 3 year period. The programme will be extended for a further 3 years from April 2017 for an additional EUR 4.7 million (Kshs 517 million) and is coordinated out of Nairobi, with a permanent presence of consultants in Kenya, Tanzania, Uganda and Rwanda.
In the last seven years, the EIB has provided EUR 321 million (Kshs 35 billion) in credit lines for Kenyan businesses, which have benefitted nearly 800 Kenyan companies, creating over 9,000 new jobs in agriculture, education, transport, tourism, trade and other sectors. (EIB 29-03-2017)
ANGOLAN ENTREPRENEURS CREATE CONSULTANTS ASSOCIATION
The Business Confederation of Angola (CFA) plans soon to create an association of consultants to support the activity of entrepreneurs, the organisation's president, Francisco Viana announced in the city of Huambo.
The president of the CFA said during a meeting with businessmen in the city that entrepreneurs must plan and supervise their projects, which is the reason for the creation of the national association of business consultants.
This association will give the country's entrepreneurs access to greater business management skills, avoiding bankruptcies, he said cited by Angolan news agency Angop.
Francisco Viana said that economic and social development depends mainly on small and medium-sized enterprises, because they are the ones who drive the economic sector, as is the case in any country.
The Business Confederation of Angola, incorporated on 27 January, 2017, aims to be a platform for dialogue between the government and the various business associations and cooperatives from the country's 18 provinces. (30-03-2017)
STRONG EIB COMMITMENT IN SUPPORT OF RENEWABLE ENERGY: NEW EUR 115 MILLION FINANCING FOR A WINDFARM IN EGYPT
The EIB and the Arab Republic of Egypt signed today a loan agreement of EUR 115 million for financing a windfarm in the Gulf of Suez to further expand energy generation from renewable resources. The windfarm will contribute to meeting growing electricity demand using sustainable wind energy.
In the presence of Dr. Sahar Nasr, Minister of Investment and International Cooperation, and the European Union Delegation Chargé d’Affaires, Reinhold Brender, the financing agreements were signed in Cairo today by Mr. Heinz Olbers, EIB Director of Operations in the Neighbourhood Countries, and Dr. Eng. Mohamed Mousa Omran, Executive Chairman of the New and Renewable Energy Authority.
"The EIB is proud to finance the Gulf of Suez windfarm which contributes to environmental sustainability and climate change mitigation. The project is in line with the Bank’s objective to provide more finance to renewable energy projects. The European Investment Bank is the world’s largest financier of climate action; last year we provided EUR 20.7 billion for climate related investment across Europe and around the world." said Heinz Olbers, European Investment Bank’s Director of operations in the Neighbourhood Countries.
The Gulf of Suez windfarm project involves the design, construction and commissioning of a large onshore wind farm of about 200MW, located on the west bank of the Gulf of Suez some 400 km southeast of Cairo, where up to 100 turbines will be installed. The site, covering around 57 km², is characterised by arid desert conditions and has very favourable wind resources.
The project is financed by European finance institutions: the European Investment Bank (EUR 115 million), KfW (EUR 72 million, including a EUR 10.5 million grant), Agence Française de Développement (EUR 50 million) and the New and Renewable Energy Authority. The European Commission provides a grant of EUR 30 million to the project.
EIB’s lending activities in the Mediterranean region in general and Egypt in particular are based on a Mandate from the European Union – the External Lending Mandate (ELM) currently covering the period 2014/2020 – through which the Bank works together with the EU and the government of Egypt to support socio-economic development in the country. (EIB 28-03-2017)
**VALE CONCLUDES SALE OF ASSETS IN MOZAMBIQUE TO JAPAN’S MITSUI**
Brazilian group Vale completed the sale of its stakes in assets in Mozambique to Japanese group Mitsui & Co, receiving an initial payment of US$733 million, the mining group said in a statement on Monday. The statement added that the Vale group will receive an additional US$37 million when the financing for the coal project at Moatize, in Tete province, is concluded, with the Japanese group having the option of returning the stake if that does not happen by next December.
After about three years of negotiations, the Mitsui group agreed to buy 15% of the 95% stake owned by the Brazilian group in the Moatize coal mine (the remaining 5% is owned by the Mozambican state) and half of the 50% the Vale group owns in the Nacala Logistics Corridor, which comprises a railroad between Moatize and Nacala and port facilities.
In a statement issued in September 2016, the Vale group had announced it expected to receive US$768 million from the sale of its stake in the Moatize coal mine and the Nacala Corridor to Mitsui & Co, under the new terms of an agreement originally signed in 2014.
Meanwhile, the Vale group appointed a new chief executive, Fabio Schwartsman, to replace Murilo Ferreira. (29-03-2017)
**EIB SIGNS EXTENSIVE SUPPORT FOR KENYAN ENERGY AND TRANSPORT**
During an official visit to Kenya, the European Investment Bank (EIB) has pledged new support for projects in the power and transport sectors. Also, at a press conference in Nairobi with Cabinet Secretary for the Treasury Henry K. Rotich, the signature of a connectivity project was announced. The EIB’s three-day programme will include a site visit to the Lake Turkana Wind Park, the largest windfarm in sub-Saharan Africa developed by the private sector, which the EIB helped finance in 2014.
At the Treasury the EIB signed the “Last Mile Connectivity” project, which will connect nearly 300,000 Kenyan households (equalling up to 1.5 million Kenyans) to the national electricity grid. The EUR 60 million (Kshs 6.7 billion) EIB loan concerns a multiple scheme electrification project, targeting universal access to electricity for the Kenyan population by 2020. It is part of a European “blended” financing package comprising a EUR 90 million (Kshs 10 billion) loan from the Agence Française de Développement and a EUR 30 million (Kshs 3.3 billion) grant from the European Union.
EIB Vice President Pim van Ballekom, responsible for operations in East Africa, commented: "Kenya is increasingly becoming a hub for the region on many levels. We as a bank must look at this from a very basic point of view, namely that there is a young and growing population with enormous potential, and that you need investments to support that momentum, both directly and indirectly. Thanks to today's signature over 300,000 Kenyan households – up to 1.5 million people - will soon be connected to the electricity grid, a basic condition for further economic growth. Two further projects that we have committed to will improve access to Mombasa harbour and support geothermal energy at Olkaria. Contributing to key infrastructure is one of the ways in which the EIB supports basic services, entrepreneurship and competitiveness in Kenya and we are happy to be able to partner up with local and European partners to achieve this."
Letters of intent were signed for two further very advanced projects, one being an extension of the existing Olkaria I geothermal plant. Here, the financing - for a total amount of EUR 113 million - will support the addition of a 70MWe turbine, as well as the construction of the necessary wells, steam gathering system and interconnection facilities. Next to that, the EIB pledged to finance an upgrade and widening of the Port of Mombasa access road, regarding the section of 42kms between Mombasa and Mariakani. The project aims to improve the safety situation on the road as well as alleviate congestion which causes delays for goods travelling through Mombasa. The project is co-financed by a concessional loan of EUR 50 million approved by the German Government and to be provided by KfW Development Bank, as well as a EU grant contribution and a loan from African Development Bank.
Just last week, the EIB signed a USD 17.5 million commitment into Catalyst Fund II, a Nairobi based growth equity fund supporting SMEs and Mid-Caps in East Africa. Priority target countries for this fund include Kenya, Tanzania, Ethiopia and Uganda, with several others also under consideration. The fund has a target size of USD 175 million with which Catalyst intends to invest in up to 12 companies, with the goal of generating social and developmental impact benefits.
The European Investment Bank has supported transformational investment across Africa for more than 50 years and has operated in Kenya since 1977. Over the last decade the EIB has provided more than EUR 22 billion for long-term investment across Africa. (EIB 30-03-2017)
SUDAN INTRODUCES NEW REGULATIONS FOR GOLD MARKET
The Sudanese Minister of Minerals, Mohamed Al-Sadiq Al-Karouri on Saturday announced new regulations for the buying and export of gold.
He said the new regulations allow the private sector to export 50 percent of the gold it buys, with the freedom to dispose of its revenues, and to sell the remaining 50 percent to the Sudanese Central Bank. Before the new regulations, concession companies were allowed to export 70 percent of their gold output and sell the remaining 30 percent to the country’s Central Bank.
The minister said in a statement after a joint meeting with Central Bank Governor, Hazem Abdelkader, that talks were held with the bank so that the old ratios are modified to protect the gold industry in Sudan.
The country produces about 70 tonnes of gold each year, with the traditional sector accounting for 80 percent of production and the private sector the remaining 20 percent. Over one million people work in the country’s gold sector, including at the Hassai Gold Mine, 50 km northeast of the capital Khartoum. (APA 25-03-2017)
**MASTERCARD TO SUPPORT DEVELOPMENT OF GHANA’S DIGITAL PAYMENT ECOSYSTEM**
From left to right: Kadijah Amoah (Head Investments VP’s office), Sola Okeowo (Mastercard), Omokehinde Adebanjo (Area Business Head West Africa, Mastercard), Vice President of Ghana Mr. Mahamudu Bawumia, Paul Tswana (VP Government Services, Mastercard) and Obi Okwuegbunam (Country Manager for Ghana, Mastercard)
Mastercard, in support of Ghana’s Vision 2020 goals, has committed its support to helping the country to develop a cashless economy, in furtherance of its push to be an economic powerhouse in Africa. This was highlighted during a recent discussion between Mastercard and Ghana’s new Vice President, Alhaji Dr Mahamudu Bawumia.
With aspirations of becoming an African economic giant, Ghana has long recognised the importance of integrating science and technology into all aspects of the economy. Technology innovation will ensure Ghanaians are financially included by giving them access to smart, secure and accessible financial solutions.
The commitment comes at a time when Mastercard is able to diversify its suite of digital payment solutions available by introducing Masterpass QR. Harnessing the power of mobile technology, the solution enables consumers to pay for goods and services directly from their smart or feature phones. Mobile penetration in Ghana is estimated to be over 128 percent, allowing accessibility to millions of citizens. The true impact will however be made by the inclusion and empowerment of the country’s Micro, Small and Medium Enterprises (MSMEs), which contribute significantly to job creation and to GDP. According to Ghana’s Registrar General’s department, approximately 92 percent of companies registered in Ghana are MSMEs, contributing about 70 percent of Ghana’s GDP.
Mastercard has pledged to financially include 40 million micro and small enterprises globally by connecting them to digital payment solutions. This can only be achieved through collaboration between the public and private sectors, as well as through partnerships within the private sector.
Omokehinde Adebanjo, Vice President and Area Business Head for West Africa at Mastercard met with the Vice President to introduce the company’s vision of a ‘world beyond cash’. The cost of cash has a tremendous impact on local economic growth and allows for the shadow economy to exist.
“Digital payment solutions, whether a debit or prepaid card or the Masterpass QR mobile solution, ensures that transparency and efficiency is introduced into the economy, and this will mean that Ghana can grow and flourish, reaching its full potential,” Omokehinde Adebanjo explained. (ITNA 22-03-2017)
WEST ‘GAGGING’ S/AFRICA POULTRY INDUSTRY
South Africa’s poultry industry has shrunk by 7 percent and continues to shrink, placing the country’s main source of protein at risk as Western nations dump cheap poultry products into the country, a public hearing on the matter revealed.
The complaint was made during hearings into the state of the country’s poultry industry held by Parliament’s Portfolio Committee on Trade and Industry, on its second day in Cape Town on Friday.
The local poultry industry has been beset by a number of challenges in recent years due to chicken parts dumping by the United States and European countries into South Africa.
As a result, the industry has lost 6,000 jobs over the past 12 months after Washington threatened Pretoria with tit-for-tat sanctions if the latter failed to allow the dumping on its markets.
South Africa, which exports luxury motor vehicles to the US, had no choice but to welcome the dumping for fear of losing its duty-free market access to the United States under the African Growth and Opportunities Act (AGOA).
The South African Poultry Association (SAPA)’s Chief Executive Officer Kevin Lovell reiterated his group’s concerns for dumping in the industry during the hearings.
“We need action against dumping for us to survive. Everything exported to us is surplus to local requirements in the exporting country,” Lovell observed. (APA 25-03-2017)
ALGERIA SET TO TENDER CONTRACTS TO BUILD 4GW OF SOLAR FARMS
Algeria is planning to build 13.5GW of solar over the next 13 years
Algeria's Ministry of Energy has announced the imminent launch of a tender for the installation of more than 4GW of solar farms in the next two weeks, according to a notice on a government-owned website. The website said the tender would be issued in three 1,350MW phases as part of the country's renewable energy policy, which aims to meet 27% of the country's electricity demand from renewable sources by 2030. To this end, Algeria is looking to install 13.5GW of photovoltaics and 8.5GW of wind power in the next 13 years.
The upcoming round of tenders will enable the construction of several large-scale PV plants in the region of Hautes Plaines in the north of the country.
Among the foreign companies reported to be in the running for Algeria's solar contracts are Carlo Gavazzi of Italy, Belectric of Germany, Cobra Instalaciones y Servicios of Spain, Engie Fabricom of Belgium, and JGC Corporation of Japan.
There are also joint bids by Athens-based Consolidated Contractors Company with KPV Solar of Germany, and from Yingli Energy with China National Technical Import and Export Corporation.
The projects will be owned, developed and operated by special purpose companies that will be 49% owned by an international partner. Another 40% will be owned by Sonatrach, Algeria's state-owned oil company, and 11% by other public or private Algerian concerns, including state gas utility Sonelgaz and electronic components producer Entreprise Nationale des Industries Electroniques.
Algeria is planning to build several manufacturing plants to produce components for the farms.
The aim of the solar farms is to divert Algeria's gas resources away from domestic power production towards export. (GCR 21-03-2017)
RESULT OF THE AUDIT OF MOZAMBIQUE'S DEBTS SHOULD BE RELEASED ON 28 APRIL
Kroll Associates UK was granted a second four-week extension to complete the audit of the loans by three public companies in Mozambique with a State guarantee, said the Attorney General's Office (PGR) on Friday in a statement issued in Maputo.
The company was hired by the PGR to audit the loans taken on by tuna company Ematum, Proindicus and Mozambique Asset Management (MAM), with a combined value of more than US$2 billion, and initially had a deadline of 90 days, which expired at the end of February.
The statement said that a few days ago Kroll presented a report describing the progress made and the prospects for completing the review of the information collected and the final report, and to this end requested, once again, a four-week extension of the deadline.
The PGR said in the statement that after reviewing the reasons given in collaboration with the entity paying for the audit – Sweden – and the International Monetary Fund, it consented to the request and set 28 April 2017 as the new date for submission of the final report.
The audit is to verify the existence of criminal, or other, offences, in the process of establishment, financing and operation of these companies. (27-03-2017)
NIGERIA TO BEGIN WORK ON 38KM LAGOS BRIDGE THIS YEAR
The government of Lagos State has announced that a 38km, four-lane road bridge is to be built across the Lagos lagoon to ease congestion on crossings between Lagos Island and the rest of the city. Work on the $2.7bn Fourth Mainland Bridge is expected to begin this year, once agreement has been reached with those Lagos residents whose homes will have to be demolished to make room for the scheme. It is estimated that between 800 and 3,000 dwellings, most of them in informal settlements, will be affected.
The project is to be procured as a build, operate and transfer model, although the length of the concession has not been decided.
Lagos Island lies to the west of the main city, and the three existing crossings provide a direct east–west link to it. The fourth bridge will be situated to the east of the island and will provide a north–south link between Baiyeku and Langbasa over the Lagos lagoon.
The bridge would be the longest in Africa
The bridge has been under discussion since 2003. A concept design for the project was produced by Nigerian architect and urban planner NLÉ: a two-level bridge that would form part of a ring road around the city.
NLÉ comments on its website: “The 2 level bridge will not only function as a means for vehicular traffic on its upper level, it will stimulate and accommodate pedestrian, social, commercial and cultural interactions on its lower level – ‘Lagos Life’ – with its tropical environment and intimate street level exchanges.
“The Fourth Mainland Bridge in conjunction with existing road networks would establish a primary ring road around Lagos. This ring road will provide alternative traffic routes from Lekki to Ikorodu, Ikeja to Ajah, relieving the Third Mainland bridge of its overstretched capacity.”
NLÉ’s 2008 plan would relieve Lagos’ legendary congestion with a grand bypass
If the bridge is 38km long, it will be the longest in Africa. The present record holder, the 6 October Bridge over the Nile in Cairo, is 20.5km long, and Lagos’ Third Mainland Bridge is 10.5km long. (GCR 20-03-2017)
GERMANY SUPPORTS MOZAMBIQUE WITH DONATION OF 157.5 MILLION EUROS
Germany decided to grant Mozambique a donation of 157.5 million euros to finance various development projects, under two technical and financial cooperation agreements signed on Friday in Maputo.
Education, sustainable economic development, and decentralisation and public finances will receive 118.5 million euros: 35.5 million euros will be invested in training, 23.5 million euros in decentralisation and public finances and 20.5 million euros in support for small businesses.
The remaining 39 million euros will be invested in the energy sector, funding a regional power transmission line between Mozambique and Malawi and the short-term investment plan of state electricity company EdM, according to Mozambican news agency AIM.
Mozambique and Germany established diplomatic relations in 1976, and since then the European country has supported Mozambique’s development by financing a variety of programmes in priority areas, having already disbursed more than 1.2 billion euros. (27-03-2017)
ANGOLA HAS YET TO DEVELOP TOURISM SECTOR ATTRACTIVE TO INTERNATIONAL VISITORS
Cuando Cubango province, Angola, is the focus of a government plan to develop sustainable tourism in the Okavango Delta.
Angola pushed Egypt out of second place in 2016 among the top 10 African countries for number of hotel rooms under construction.
While Angola has yet to develop a tourism sector that attracts many international tourists, the country’s internal tourism sector is thriving. Angola is already equipped with what it takes – stunning beaches, wildlife safaris, the Trans-Kalahari Reserve (the world’s largest game reserve), vast natural rain forests and an extraordinary breadth of wildlife.
With 40 percent of its 25 million people of working age, Angola has a huge workforce. Recognizing this, the country’s sovereign wealth fund, FSDEA, has invested in the College of Hospitality Management in Benguela. Its mission is to deliver quality hospitality education to young aspiring Angolans.
From Ventures Africa. Story by Adrian Leuenberger, group head asset management at Quantum Global, an Africa-focused private equity firm.
National infrastructure is a central ingredient in the pursuit of long-term sustainable economic growth. Global business hubs such as Dubai and Singapore placed infrastructure at the heart of their approach to nation building, turning them into world-class cities and a haven for investors.
Whilst roads and logistics are crucial, high-quality accommodation and hospitality facilities are also critical. Investors, policy makers and business travelers will always need somewhere to sleep – so hospitality is as much a practical necessity as it is a fast-growing industry sector and investment opportunity.
The scale of growth in Africa’s hospitality market is most noticeable in sub-Saharan Africa. In April 2016, figures from W. Hospitality Group’s Hotel Chain Development Pipeline Survey showed that 365 new hotels were under construction – adding 64,000 new rooms. In the same month, Traveller 24 magazine stated that the hospitality sector grew by 42.1 percent in Sub-Saharan Africa in 2015 – compared to just 7.5 percent in North Africa.
This growth represents rich pickings for global hotel chains. Ibis Styles has 28 planned hotels, Radisson Blu plans 25, Mercure 24 and Hilton 16 — major opportunities for companies in the supply chain. With this growth comes the double-edged sword of human capital development: thousands of new jobs coupled with an under-skilled workforce.
The College of Hospitality Management in Benguela has a mission to deliver quality hospitality education to Angolans. Since opening in 2016, 37 Angolan students with varying degrees of experience have enrolled, each selected by the College’s global hospitality education partner, Ecole Hoteliere de Lausanne in Switzerland.
Developing a skilled local workforce means that employee earnings remain in the country and contribute to multiplier industries and GDP growth. It also means that employees can build better futures for themselves and their children and spend money on non-oil goods and services. In the current climate, this goal is front and center.
There is an intriguing authenticity for foreigners when they interact with local workers. Across most of the Gulf (with the exception of Bahrain), visiting executives almost always interface with expat workers in the hospitality sector. It gets the job done but can create a distance between locals and foreign businessmen. In Bahrain, taxi drivers and hospitality employees are almost always nationals – a source of pride for the nation and infinitely more interesting for visitors.
In Africa, countries that use their own workforce and showcase their cultural uniqueness (as opposed to imposing a bland and standardized “global” hospitality experience), stand to gain a competitive advantage. Almost every aspect of the hotel experience is a potential opportunity – food and beverages that reflect local tastes and traditions, art and music in communal areas, local entertainment and cultural excursions turn a business trip into something memorable.
As growth continues, we are seeing more opportunities emerge for private equity and institutional investors. At the Africa Hotel Investment Forum in June 2016, there was consensus that optimism in hospitality across sub-Saharan Africa remains high. In March 2016, the Mauritius-based private equity fund, QG Africa Hotel LP (managed by QG Investments Africa), acquired a 100-percent interest in the InterContinental Hotel in Lusaka from Kingdom Hotel Investments. The 244-room landmark hotel is in a prime location in Zambia’s capital. The acquisition is a prime example of how private equity investors can take advantage of Africa’s burgeoning hospitality sector.
Angola has a national master plan for its tourism industry, which includes the greenfield development of a 250-room luxury five-star hotel in the city center of Luanda. A three-star hotel is also being built next to the future Port of Caio in the province of Cabinda, which is an important strategic location for what is set to become one of the biggest ports on the continent.
The hospitality sector and its role in the value chain is not merely a necessary accompaniment to economic growth or an afterthought to infrastructure development. It is absolutely critical to both – and a category that offers potentially life-changing opportunities for thousands of Africans. Without education, however, none of that can be realized. Graduates of the College of Hospitality Management will receive an internationally recognized qualification with European accreditation. The quality of education means that over the medium term, graduates will lead and develop teams of professionals who are all trained to global standards, significantly building capacity in the national hospitality workforce.
These future leaders now also have an opportunity at the College of Hospitality Management to build upon their core skills in fields such as game reserve or safari lodge management. This is how African professionals can add value to the hospitality and tourism experience and carve out a niche that reflects national character, tradition, and culture. This is how Africa’s hospitality sector should mature and where it has the capacity to add great value to wider socioeconomic growth — especially for private equity investors taking advantage of this exciting and important industry sector. (AFKI 21-03-2017)
-----------------------------------------------------------------------------------------------------------------
The Memorandum is supported by the ACP-African, Caribbean, Pacific Secretariat, Chamber of Commerce Tenerife, AHEAD-GLOBAL, Business Council for Africa, Corporate Council on Africa, ELO - Portuguese Association for Economic Development and Cooperation, Hellenic-African Chamber of Commerce and Development, HTTC - Hungarian Trade & Cultural Centre, NABA - Norwegian-African Business Association, NABC- Netherlands Africa Business Council, SwissCham-Africa and other organisations.
The Memorandum is also made available by AHEAD-GLOBAL, BCA, Chamber of Tenerife (by posting it at the Africa Info Market), CCA - Canadian Council on Africa, CCA - Corporate Council on Africa (USA), ELO, HTTC, NABA, NABC (by posting selected news) and SwissCham-Africa to their Members.
ACP Group aims for impact at UN Climate Change Conference COP23
Ambassador of Kenya to the EU, H.E. Mr. Johnson Weru
Climate change remains a major collective concern for African, Caribbean and Pacific countries, which are particularly vulnerable to its negative effects. A special meeting organised by the ACP Sub-Committee for Sustainable Development laid the groundwork for an enhanced ACP role at the COP23 global climate talks in Bonn, Germany this November.
The Special Meeting on the UN Climate Change Conference COP23 held at the ACP House on 14-15 March sought to assess the outcomes of the previous climate conference (COP 22), and identify follow-up actions, with an emphasis on key issues for the ACP Group including mitigation, adaptation, finance, technology development and transfer, capacity building and REDD+.
“The adverse impacts of climate change remains the single greatest challenge to the sustainable livelihood, security and well-being of our people, posing immediate and long term risks to sustainable development efforts,” said ACP Secretary General H.E. Dr. Patrick I. Gomes.
Representatives from the African, Caribbean and Pacific regions, as well as negotiating groups including the Alliance of Small Island States (AOSIS), the Least Developed Countries (LDCs), G77 + China, and the African Group of Negotiators (AGN) made presentations on actions to implement the Paris Agreement, along with members of key institutions such as the European Commission, UNDP, UNEP and UNFCCC.
The meeting also explored different ways and means to enhance support to ACP Member States’ implementation of the Paris Agreement, taking into account priorities in the ACP Action Plan on Climate Change 2016-2020.
An ACP Roadmap to COP23 is being finalised to support member states in this regard.
“As one of the largest groupings of developing countries, the ACP Group has a unique role to play in ensuring that issues affecting countries most vulnerable to climate change will be given adequate focus leading up to and at COP23, in order to contribute to reducing vulnerability and strengthening resilience and adaptive capacities in these countries,” stated the Ambassador of Kenya to the EU, H.E. Mr. Johnson Weru, who chaired the meeting.
The ACP Group counts among its membership 37 Small Island Developing States, 39 Least Developed Countries and 15 Land-locked Developing Countries. Of the more than 140 countries that have ratified the Paris Agreement to date, 69 are ACP member states.
While Amb. Weru welcomed the historic entry into force of the ambitious, legally binding and universal Paris Agreement in November 2016, he stressed the need to forge ahead with the work programme under the Agreement and the development of the rule book to ensure implementation.
He also urged active participation of the ACP Group in the Facilitative Dialogue, which will assess the progress towards reaching the long-term temperature goal, and inform countries’ preparations of Nationally Determined Contributions (NDCs).
The ACP Group welcomes Fiji’s presidency of the upcoming UN Climate Change Conference to take place 6-17 November in Bonn, Germany and sees it as a special opportunity for the ACP Group to work closely with the COP23 Presidency to demonstrate political engagement and leadership in advancing the Paris Agreement.
ACP Action Plan on Climate Change 2016-2020
Assessment of COP22 outcomes
Fernando Matos Rosa
firstname.lastname@example.org
email@example.com
Community causes share £4,141
THREE charities in Linton each received a funding boost from the Co-op in April as a result of the Co-op’s membership scheme which was launched last September.
Linton Community Action Fund received £1,464, St Mary’s church received £1,355 and Chestnut playgroup received £1,329.
When a Co-op member buys own-brand products from food stores, or a funeral plan or a funeral from Funeralcare, they earn a five per cent reward for themselves, with a further one per cent going to local good causes. All those one per cent rewards, plus the proceeds of the carrier bag charge in England, added up to the £4,141 payout in Linton. Nationally, 4,000 good causes are sharing a total pot of £9 million.
Chief Membership Officer at the Co-op, Ruth Collins, said: “The Co-op has always been community focussed.”
Don’t miss Linton Clubs Introduction day
JUST a reminder that the Clubs Introduction day is being held in Linton village hall from 10am until 2pm on Saturday 10th June. There has been an excellent response, with around 30 groups, clubs and societies attending, all eager to tell you about their activities. The theme of the day is ‘something for everyone’, so tell your friends and see you there on Saturday 10th June.
Why not meet up with a friend and discuss your choices over a cup of tea or coffee and a biscuit?
ACES, Linton, Belle Plate Group, Army cadets, Linton Children’s Book Festival, ATC, Linton News, Badminton, Linton Parish Council, Beacon trust, Men’s keep fit, Bell ringers, Mother’s Union, Camera Club, Music Society, Choir, Scrabble, Friends of St Mary’s, Sustainable Linton, Garden club, Village Hall trustees, Grants, Linton Women’s Institute, Guides, Brownies & Rainbows, WEA, Helping Hands, Whist drive, Historical Society, WI and the IT Club.
Jim Forrest, firstname.lastname@example.org
Change ahead for Michael Beaumont Linton
THE team at Michael Beaumont Butchers (Linton) Ltd will be closing the Linton store on Sunday 2nd July. Luckily this will not be the end of Michael Beaumont Butchers (Linton) Ltd, as the team will be launching an online home delivery service to keep us stocked with those amazing free range meats, ready-to-cook meal ideas, own-made pies, artisan cheeses, sausage rolls and the deli counter.
They have had to make this very difficult decision after Michael Beaumont, James’ father, retired from his Fulbourn Shop due to ill health. Over the last 50 years Michael has built up the Fulbourn Butcher’s shop to be one of the most successful Cambridgeshire butchers today.
James and the team at Linton feel strongly that his father’s legacy should live on in Fulbourn, but they plan to remain a strong part of the Linton community and surrounding villages through their delivery service.
“We have enjoyed every moment working in Linton and all the local villages – you have looked after us well over the last three years. This is why we decided to launch our home delivery service, so that we could continue to serve you all without the hassle of the well-known parking problem. Of course, you are all very welcome in the Fulbourn shop and we very much hope we will see some of you there. We would like to have a party in July for you all – we hope to see some familiar faces.
“We are finalising details of the home delivery service at the moment and will be announcing them in the shop in Linton and via Facebook @MichaelBeaumontButchersLinton.
“We hope to continue to serve you all over the coming years and thank you for your custom. Our thanks also to the Ashford family for agreeing to let us rent their shop.”
James Beaumont, Owner
Michael Beaumont Butchers (Linton) Ltd
Another successful Bartlow walk
A REALLY big thank you to all those walkers who participated in the Bartlow 2017 walk.
We were so lucky with the weather, a beautiful day but not too hot for dogs and walkers alike.
We had a record number registering this year, many of whom booked online before the event to take advantage of the cheaper prices. All in all we had a great turnout of 70 walkers.
Great fun was had walking the route, a new one this year, through the three counties’ beautiful countryside. The 10-mile trail was very popular too. The children gathered wonderful bags of goodies and brought them back to show the staff on duty.
Linton Jazz Band was brilliant as always and entertained us all back in the farmyard.
The money raised is still being counted, but will go to the three charities chosen for this year.
Happy in retirement
Photo supplied by Liz
CamSight quiz gives something back
CAMSIGHT, our local charity for the blind (see www.camsight.org.uk), will be holding a fundraising quiz and raffle starting at 4.35pm on Saturday 1st July at the Frank Lee Leisure Centre on the Addenbrookes site (entry opens 3pm).
The cost is £5 per person, in teams of six. Under 16s are allowed. Parking is free at Long Road (next to the Ford Garage, two minutes’ walk away).
For tickets contact email@example.com
I joined the committee of Cam Sight because I lost a considerable amount of my sight during the summer of 2009, at the age of 19, when I was doing A-Level exams.
CamSight have helped me so much over the years; they taught me to touch type and use screen reading software, helped with the forms I needed to get into university and, since graduating, have given me a job at CamSight which allowed me to buy my own house.
I feel the support of CamSight has played a significant role in helping me maintain the confidence and independence necessary to live a fulfilling life.
The charity supports visually impaired people of any age in Cambridgeshire in whatever capacity necessary, whether it’s emotional support, mobility training, providing assistance or organising social events for visually impaired people, as well as running a rural support group and a telephone group. I thought it would be nice to give something back to them, and every year since then I’ve organised a fundraising quiz and raffle.
Warren Wilson
Bus times changes
THE County Council advised us that as from 21st May the number 19 bus which runs between Burrough Green, Linton and Haverhill will be run by the Big Green Bus Company.
The following changes to the service have been made:
- The 7.15am journey between Balsham and Haverhill no longer operates.
- Journeys at 10.52am and 12.52pm between Burrough Green and Haverhill now operate 30 minutes later.
- Journeys at 9.56 and 11.56am between Haverhill and Burrough Green now operate 30 minutes later.
- A journey at 4.50pm between Haverhill and Burrough Green has replaced former journeys at 2.52pm and 4.52pm.
- A journey at 2.50pm between Burrough Green and Haverhill has replaced former journeys at 1.56pm and 3.56pm.
Passenger Transport, Cambridge County Council, Passenger firstname.lastname@example.org
Too good for jumble...
ON offer this month is a Sony 46in flat screen TV and a heavy duty, quality polythene patio set cover, 90cm high x 150cm diameter (opened but unused). The wood framed cork notice board (60cm square) is still available.
The Yamaha electronic keyboard raised £20 – £10 for each Linton Brownie unit.
To buy any of the above, or if you have an item to sell to profit a charity, please contact Kate France on 891602 or email email@example.com.
NB: The donor chooses the charity to receive the money.
Summer Fun and Fundraising
THE lovely weather has enabled us to get out in our garden this term. We are starting to think about our garden redesign and would welcome any input from local people with an interest in garden design or contacts in the industry. Please contact our Administrator.
Our first jumble sale on 22nd April was a great success and we thank everyone who helped, donated or attended the event. We raised just under £300, which will go towards our new garden. The intrepid committee ran in the Sawston Fun Run on 14th May, and we appreciate the Fun Run committee selecting us as one of their beneficiaries. We would also like to thank Linton Parish Council for their very generous grant to fund real world scenes to enhance our play area.
Our next open day is scheduled for Friday 30th June; contact Michelle (07817 069696) to arrange an appointment between 10am and 12noon. Plans are not yet finalised, but we hope to be ready for summer 2017. We take children from two years old and offer a range of morning, lunch and afternoon sessions five days a week.
If the open day date is not convenient, please contact our Administrator on 07342 900120 to arrange a visit at an alternative time or for further details see firstname.lastname@example.org
Wendy Hine
on behalf of Chestnut Playgroup
What the Dickens?
MIKE PETTY, noted local historian and regular columnist in the Cambridge News, joined the ACEs on 13th April for our Ploughman’s Lunch. His talk on Dickens’ Cambridgeshire, ‘Scrooge’, was a compilation of many local tales, collected by the Pickwick Club, i.e. local folk in Dickens’ time.
We met Harry Tuppence Pamment, a clever fellow making the best of his talents but beset with adversity and ill fortune. He had a wooden leg, and his family lost their home when their father died and moved into Linton. Life was hard, so he sought to improve himself in London, setting up in business with mixed fortunes.
Homesick for Linton, he returned to his family. We heard how his fortunes rose and fell through some 15 houses and properties taken and lost, all set against a background of descriptions of life in the village.
Local events were recalled - the Linton riots, poaching in the woods, life in the workhouse, rural poverty and poor housing conditions.
Our historic buildings were often no better than slums but now, thanks to thoughtful owners, they are preserved in the Special Conservation Area and a joy to us all.
By the time you read this we will have had our trip to Sherborne. Our annual Garden Party is on 13th July.
We advertise events in Linton News and on the notice board. Can you suggest more/better ways to keep you updated with our events? What other social events would you like? All older people are welcome.
End Bald 891069
Outdoor Control Fencing and Garden Services
Grass cutting, hedge cutting, fencing
Tel: 07701056142
Jamie Curtis from Cooke Curtis with the headmaster and boys
Picture by Natalie Morris
New sports kit makes students look the part
LINTON Heights Junior School has been busy collecting trophies for its outstanding sports performance across Cambridgeshire over the last 12 months and, thanks to the kind gift from Cambridge based estate agents Cooke Curtis and Co, they are looking the part too!
In times of government cutbacks, schools such as Linton Heights depend on the kindness of community-minded companies. With this gift we have purchased some stylish sports kit, which now allows our pupils to look the part! It gave our runners a strong sense of pride and added to the winning edge in April’s county cross country event and all other events.
We were delighted that our boys came in sixth out of 133 teams and girls 13th. Over the last year Cooke Curtis have sponsored banners and programmes for the annual summer fair as well as the sports kit.
Linton Heights is currently looking for sponsors for replacement interactive white boards. Please contact the school on 892210 if you or your business would like to help.
Cooke Curtis & Co for Linton Heights School
The Infants Ofsted and …..
WE recently had a visit from Ofsted. As is usual Ofsted protocol, we received a phone call at 12noon the day before to notify us of the inspection. Under the new Ofsted framework, the scale and detail with which they scrutinise is huge, and in such a short space of time it is an intense process. However, despite this, the staff pulled together, worked incredibly hard and really did show off all the good things that we have at Linton Infants.
I must congratulate all of the staff team and all of the children, who were brilliant throughout the two days. The feedback we had was that the children are a real credit to the school. We are not able to discuss the final outcome until we have received the official report, which can take up to a month to be published.
On 16th May we held our Open Evening for the parents of children due to join us in September. It was nice to meet parents who are new to our school and we look forward to working with them.
Sainsburys have now stopped issuing Active Kids vouchers as the scheme is being phased out. Please check your bags and down the side of the sofa, and drop any remaining vouchers into the school.
Kelly Harries
Head Teacher
…… Admissions disappointments
WE are very aware of some distress among members of our community, following the Admissions for children due to start with us in September 2017.
Cambridgeshire County Council co-ordinates the admissions to all maintained schools and aims to make the process fair and transparent for all.
When a school receives more applications than it has places available, it is referred to as being oversubscribed, as is the case with Linton CE Infant School this year: more than 60 children applied to join our school.
As a small village school, the staff and governors are very sympathetic to the upset this has caused in the community. We do not work in isolation, and as such we embrace the opportunities that the whole village network gives to the school, be that assistance with raffle prizes, help setting up for the annual fireworks, or coming along to our Summer Fair, to name but a few.
The situation this year is very unfortunate, and we continue to work closely with the Local Authority to achieve the best outcome for all children.
Kelly Harries, Headteacher and Chris Komodromos, Chair of Governors.
Race night including snails
ON Saturday 24th June, Linton News readers are invited to attend a Race Night in Linton Village College hall. It is being organised by Linton Village Cricket Club to help raise money to support local children’s cricket and the Cystic Fibrosis Trust.
There will be Tote betting on eight races, with a live snail race and there will also be a raffle with amazing prizes drawn on the night.
Tickets are £10 each and this includes entry and a full fish and chip supper. Doors open 7.30pm with the first race starting at 8pm. All are welcome.
Tickets available by contacting 07752 275642. Darren Leech
Barn Owl Accommodation
Three private self-contained annexes all in Linton
www.barnowlaccommodation.co.uk
Call Michelle on 07584 430051
PSH Electrical Services
Rewires, New builds, Extra lights, Extra sockets, Repair works, Garden lighting, Showers Registered NIC EIC domestic installer email email@example.com telephone 07867980738
Steve Webb Painter & Decorator
Now upcycled furniture painted to your requirements Excellent present ideas Call for viewing details Tel: 01223 893864
NRS Carpets
HOME SELECTION FREE MEASURING & ESTIMATING All types of flooring available Tel: 01223 893634 Mobile: 07885 173113
Watercolour Painting
Explore the magical world of watercolour painting with an experienced artist in the comfort of your own home. For details contact: Susan Mackenzie (01223) 891521 or: firstname.lastname@example.org
A Touch of Nails
Gelish Manicures & Pedicures Performed Like a Gel Applies Like a Polish Call Michelle on 07866 017 801 email@example.com HORSEHEATH
Capri Blinds
ALL TYPES OF BLINDS SUPPLIED AND FITTED FREE QUOTATIONS FREE HOME VISIT 01223-894020 or email firstname.lastname@example.org ALL BLINDS ARE MADE IN ENGLAND - FAST LOCAL SERVICE Visit capriblinds.co.uk
Neil Claxton
Painting & Decorating Interior/Exterior Rooms Emulsioned from £120 FREE ESTIMATES Tel: 01223 893487 Mobile: 07724073045 E-mail: email@example.com
Le grand départ: Linton dad bikes London-Paris for cancer charity
ON 25th May Richard Wadrup, of Joiners Road, will have set off on an epic cycle ride from London to Paris in aid of Myeloma UK. Approximately 25 riders will take on the four-day, 500km ride, beginning at Greenwich Park and ending with a police escort onto the Champs-Elysées.
Richard has completed over 2,250 training miles in and around Linton and has raised very nearly £3,000 for Myeloma UK, which is the only organisation in the UK focused exclusively on myeloma, a treatable bone marrow cancer. To learn more about myeloma, please visit www.myeloma.org.uk
Richard hopes to enjoy a celebratory beer in the shadows of the Eiffel tower after which he looks forward to sharing his experiences with the pupils at Linton Infant School during their Healthy Living week after half-term.
To sponsor Richard and help Myeloma UK up to the end of July, please visit www.justgiving.com/RichardWadrupParis.
Homestart concert at Jesus College
A CONCERT in aid of Homestart will be held at 7pm on Thursday 6th July in Jesus College Chapel, Cambridge, with Alexander Burt on cello and Nigel Yandell on piano.
Pieces include Sonata for Gamba and Keyboard No 2 in D by JS Bach, Sonata for cello and piano No 5 in D by Beethoven; Sonata for cello and piano (1915) by Claude Debussy; Elegy for cello and piano by Kenneth Leighton and Canciones populares Españolas by Manuel de Falla.
Tickets cost £20 with gift aid, no gift aid £16, student £10 (to include an autograph). Tickets and payment details from HomeStart Cambridgeshire by Wednesday 28th June. Email firstname.lastname@example.org.
Tickets may also be available on the door. Tel 210202.
1000 visits by Safe and Well campaign
CAMBRIDGESHIRE Fire and Rescue Service is proud to announce a new campaign for our Safe and Well visits, which launched last summer to support the safety and wellbeing of the most vulnerable residents in the community.
Working with key partners including Cambridgeshire County Council, Cambridgeshire Constabulary, Age UK, Mind, MindWell, Cambridgeshire and Peterborough NHS Foundation Trust, we developed the Safe and Well model to offer a holistic approach to supporting people in their own homes.
Building on the previous work of Home Fire Safety Visits, the Safe and Well visit encompasses other aspects including preventing falls in the home, monitoring alcohol consumption, staying well, warm and nourished at home and crime reduction through fraud and scams. The Safe and Well visit now provides the service with sufficient information to support individuals and, with consent, to refer individuals to selected partner agencies, who will be able to support them further.
Our Safe and Well visit incorporates a comprehensive list of checks, including smoke alarms. For more information please visit our homepage at www.safeandwell.org.uk and click on Safe and Well.
You can also find us on Twitter, Facebook and Instagram.
Making Linton more sustainable
SUSTAINABLE Linton is an action group which meets monthly to discuss how Linton can become more sustainable.
The consequences of climate change, resource depletion and declining biodiversity are now subjects of urgent discussion, so we are looking at ways in which the individual and the wider community can make changes that will produce benefits.
Current group endeavours include community awareness-raising events, reporting on research and liaison with similar local groups.
Recent discussions have focussed on the Centre for Alternative Technology’s Zero Carbon Britain initiative.
Sustainable Linton proposes to look at the application of these ideas in our local environment.
Issues relating to the wider world such as electric vehicles and wind-power reflect the group’s general interest.
The major change any individual can make is to limit consumption through monitored energy use, creative recycling and energy awareness. Lifestyle changes could include tried and tested things such as growing food, recycling unwanted items, repairing broken items rather than buying new, buying less and reducing long-distance air travel.
Leisure activities can incorporate cost-free options such as walking and cycling with the added bonus they bring to health and the enjoyment of the natural world.
Sustainable Linton promotes these ideas and is passing them on to the next generation.
Meet us on 10th June (Introduction morning at the village hall) and 18th June (Linton Youth Music Picnic). More information on Facebook or contact Paul Richardson on email@example.com or 0923941. The next meeting is at 8pm on 13th June 2017 at the Dog and Duck.
All the fun of the fair
HADSTOCK’S Fête, as those of you who have been before know, is a special event.
Held every year on the Saturday nearest St Botolph’s Day, this year it is on Saturday 17th June from 10am to 4pm.
The fête is centred around the village’s beautiful green, extending into the church and churchyard, village hall and neighbouring gardens. This, together with the colourful stalls and games, attracts hundreds of visitors.
This year the fête continues its traditional theme with old-fashioned fairground attractions such as the swingboats on the green, tombolas, kids’ golf, bric-a-brac, produce, plants and crafts, face painting and henna applications.
There will be two different bouncy castles and field events to test your skills such as skittles, egg throwing, smashing crockery, golf, a fitness course and football.
You can relax in the gardens with a Pimms whilst listening to the Hadstock Silver Band or enjoy a pint in the beer tent.
Well-known local artist Sarah Symes will be running an exhibition of her own and other local artists’ work in the church.
In the village hall there will be hot drinks and a wide selection of cakes available all afternoon.
Later in the afternoon you can make your bids in the ever-popular auction on the green. You can find out the latest news and see the auction lots on our own Facebook page, Hadstock Village Fete.
Proceeds are divided between St Botolph’s church maintenance fund and the village hall.
Tim Boyden, 892746
Music in Hadstock Church
Charity concert
At 7.30pm on Saturday 8th June
CLARE Vane, pianist and organist, performs each year in Hadstock to raise funds for charities. This year it is to support the charity helping those suffering from Ehlers-Danlos Syndrome, a disease that affects 1 in 5,000 people. The programme includes Beethoven’s trio for clarinet, cello and piano alongside a collection of pieces by J S Bach. Tickets at the door cost £10 (£7 concessions); also online from clarevaneorganist.com
Concert helping to raise funds for the Church Fabric Fund
At 6.45pm for 7.30pm start on Saturday 8th July
JOANNA Eden, an acclaimed jazz singer and acoustic singer-songwriter, will perform solo piano versions of the original songs recorded for her new album Like Gold along with some well-known songs she has recorded over the years: Mr Bojangles, Yesterday, A Taste of Honey, The Man I Love.
The concert will include a support act – the Simpson Sisters, two students who live near Newmarket and sing beautifully in harmony as well as playing piano, guitar and sax. This concert is being organised by The Friends of St Botolph’s Church, Hadstock. Tickets cost £12.50
Robin Betser, 891385 or firstname.lastname@example.org
Lambs at church service
ON 7th May the congregation at the Fabulous Service at St Botolph’s Linton, hearing on the theme of Jesus the Good Shepherd, had a real illustration of what this meant when two lambs were brought into the church, together with their owner Lou Symes-Thompson.
Lou, who keeps a flock of over 20 Shropshire sheep in a field at Hadstock, described what looking after sheep and lambs involves and how she knows all her flock by name. The lambs she had brought, called Ivoir and Idris – just four and two weeks old respectively – showed how they trust her as their shepherd. Lou said: “It can be hard work going out to them in the middle of the night but I know how much my sheep rely on me.”
Karen Beaumont, a member of the congregation, said “Seeing the lambs and hearing about the care they need really brought the bible passage alive.”
Paula Griffiths: 01799 599141 email@example.com
Volunteer van crew needed
CAMBRIDGE Re-Use (formerly Cambridge Sofa) is a furniture re-use project dedicated to helping low-income families. We accept good quality donations from the general public including a wide range of furniture and electrical goods. Our base is at Unit H, 347 Cherry Hinton Road, CB1 8DH.
Currently Cambridge Re-Use needs physically fit and energetic people to fill voluntary vancrew positions within a dedicated team.
For further information on how to apply, please contact me on 414534 or email: firstname.lastname@example.org
Cara Moorey, manager
Designer Drapes
Linton Rd, Hadstock Tel: 01223 890 556
For all your Curtains and Blinds give us a call.
Curtain Alterations
Fitting Service
Fabric sold by the metre
Patchwork lessons
Wednesday mornings and Tuesday evening
Book your place now, spaces limited
Sewing machine repair and service
www.curtainsandcraft.co.uk
email@example.com
Abrams Heating
• Boiler Installation, Service & Repair
• Gas Fires, Cookers & Hob
• Landlord Safety Certificates
• Quality Kitchens & Bathrooms
• General Plumbing
• 24hr Emergency Callout
01223 897236 07931 685774 www.abramsheating.co.uk
The Darryl Nantais Gallery
Fine Art, Framing & Restoration
We offer a wide selection of vintage, contemporary & traditional floral & interior/exterior paintings, prints, ceramics, glassware, sculptures & jewellery. The Gallery also offers a full framing, pitch pine restoration and cleaning service.
Darryl Nantais
01223 897236 07931 685774 www.darrylnantais.co.uk
Find & like us on Facebook and follow on twitter for regular news updates
Tuesday to Friday 10am to 5pm & Saturday 10am to 4pm
**Oil seed rape**
WE’VE all seen the swathes of yellow across the landscape. Love it or hate it, oil seed rape is now an integral part of our arable landscape and has been increasing over the years. In 1965 there were 5.2 million tonnes harvested worldwide, and in 2014, 73.8 million tonnes.
Oil seed rape, Brassica napus, is a bright yellow flowering member of the mustard/cabbage family and its name derives from the Latin for turnip.
It used to be planted as a break crop to suppress weeds and improve soil quality, but now it has become profitable in its own right. It is cultivated mainly for its seeds, which are 45% oil and 55% high-protein animal feed. The rest of the plant is ploughed back into the land or used for animal bedding.
For humans it complements butter and lard, and it is contained in mayonnaise, margarine and salad cream, being low in saturated fat and high in Omega 3. It has become a favourite of a few celebrated chefs and, as far as oils and nutrients go, will doubtless stay on the human menu. It is the third largest source of vegetable oil in the world, after soybean and palm oil.
The processing of the seeds produces rapeseed meal as a by-product, used as cattle, pig and chicken feed. The oil is used as biodiesel and in heating systems when blended with petroleum distillates; this is expensive, so used oil is more practical. It is also used in chemotherapy. Amazingly it was used to contain the radionuclides after the Chernobyl accident, taking up three times as much as any other plant.
Oil seed rape is sown in August and September and, not surprisingly, looks like a cabbage plant. It grows rapidly from March, from 1.5cm up to two metres. It flowers in April and the seed pods are harvested later in the summer. The stubble is exceptionally clean and near white, so much so that pilots are warned never to land on it. It’s amazing where web sites lead.
Regular readers of this column might remember that last year there was a panic about cabbage stem flea beetles and the danger they posed to oil seed rape. In 2016 in Cambridge, 10% of the crop was lost to drought and 12% to the flea beetle. This year, unless it rains soon, the situation could be worse.
**Aware alert, alive**
JEANETTE MOSER, our speaker, is an extremely courageous woman who, after years in the police in South Africa and Rhodesia, is an expert in self-defence and particularly in karate, training on her knuckles. She ran in the Rhodesian marathon and was later headhunted by special branch, who wanted her to infiltrate a dangerous extreme right-wing group that was bringing in a huge cache of arms.
During a carjacking she was shot in the leg through the door but managed to drive away. She has done a lot of radio and TV work, including South African Gladiators, where she brought down a huge hulk of a man after catching him off guard.
With her husband, Alan, acting as the attacker she demonstrated several ways of defence. She also showed one of our members how to escape from a lock hold round the neck by holding the assailant’s arms down and falling against him to unbalance him.
Our next meeting is on 6th June. Rosemary Wheeler is the speaker and her talk is entitled *Hats Galore*. New members and visitors are always welcome.
Sally Proberts, 891021
**June at Wandlebury**
Healthy Walking
EVERY Thursday: 10am or 10.30am to 11am Walking for Health accredited sociable walks around the park. Meet at the Stable Rooms at 10am for a longer walk or 10.30am for a shorter stroll. NB: Free of charge and no need to book but please arrive 10 minutes early to register if it’s your first time.
Grasses of Wandlebury
SUNDAY 11th June from 2pm to 3.30pm
Meet at Wandlebury car park notice board. Booking recommended.
Plants of Wandlebury
SUNDAY 25th June from 2pm to 3.30pm for adults. You will need to book.
For more information on any of the above email firstname.lastname@example.org, call 243830 extension 207 or visit www.cambridgeppf.org/whats-on
Lorna Gough
**A safari supper**
THE Friends of St Mary’s are arranging another popular Safari Supper on Friday 2nd July, starting at 6.30pm at Richmond for Pimms. After that people, in pairs, go to different houses for a starter, on to another hostess for the main course, then on again for dessert before going back to Richmond for coffee. All are welcome. Tickets, £18 per person, are available from Chris Morse 891612 or Margaret Cox 1290.
At the Friends’ AGM on 15th May Chris Morse was confirmed as Chairman, Hugh Paton as Treasurer and Margaret Cox as Secretary. The meeting stood to remember Dr Bruce Conchie, for many years President of the Friends. The Chairman reported that, following a successful year, more events are planned for the coming year including a teddy bears’ picnic at the Flower Festival, a summer tea, a concert in October, a quiz night in November and the postponed *Flanders and Swann* performance next winter.
Margaret Cox
**Didn’t we do well?**
IT was Linton Heights School’s first car boot sale on Sunday 14th May. We set up for the event hoping to raise much needed funds for the school’s new interactive whiteboards. We were also hoping to create a community event involving Linton and its residents.
Happily the sun shone and we estimate that about 45 cars booked pitches and approximately 100 people came to look around and pick up a bargain.
We don’t have a final figure yet but we did raise a good amount on the day, encouraging us to run another one.
For more information contact the office directly or on email@example.com
Siobhan Judge
**Llamadrama**
INVITES you to a Murder Mystery & Fish & Chips Evening at 7pm on Saturday 24th June at The Townley Memorial Hall, Fulbourn Centre. There will be a supper bar, prizes, a raffle, fun and murder. For more details go to www.llamadrama.org.uk, email firstname.lastname@example.org or ring 01354 694782
Carole Ransome
**LONG & SHORT STAY ACCOMMODATION COTTAGE & CONVERTED BARN HOLIDAYS, WEEKENDS BED & BREAKFAST**
WEST WRATTING CAMBRIDGE, CB21 5LU
T: 01223 290492
www.bakerycottage.co.uk
**A & R PLASTERING**
All aspects of plastering undertaken:
- Plasterboarding
- Rendering
- No job too small
- 30+ years experience
- Free estimates
Tel: 012233890228
Mobile: 0774 8627920
**PLUMBLINE**
PLUMBING AND HEATING ENGINEER
PROFESSIONAL DOMESTIC PLUMBING SERVICE
Including Property Maintenance
Painting/ Tiling/ Plastering/ Carpentry
Call John on
01223 893903
email@example.com
Fully Qualified and Insured
Friendly and Reliable Service
No Job Too Small
**POT POURRI 146**
The sixth purchase
I ENDED up with six receipts (all in whole £s) – 2, 8, 32, 86, 185, 250.
What was the cost of the sixth purchase?
Solution to 145 – What was the Fields Area?
Equilateral field area = 11818.7 square metres.
**NEWS IN BRIEF**
Come and have fun
EVERYONE is invited to a Family Activities day from 1pm until 4pm on Saturday 17th June at Sydenham Hall to raise funds for the Homes. Hopefully there will be enough for a new gazebo for the garden, as last year ours blew away in storm Doris.
There will be a bouncy castle, stalls and activities for the children such as hook-a-duck, hoopla and a treasure hunt... and some for the adults as well.
Julie Cleland, Home Manager, 891237
Library events
THE next ENGAGe meeting in the library is on Wednesday 21st June from 2pm to 3.30pm. The speaker is Veronica Bernard on the theme ‘Gardens, and the National Trust’. Tickets are required for this event and are available from the library for £1 to include refreshments.
Lindsay Healy, Community Library Assistant
Cafe Church
St Botolph’s church, Hadstock is holding another Cafe Church service at 9.30am on Sunday 2nd July. Refreshments, chat, activities for all ages and time to think, with a simple service based around Jesus’ words to the disciples about worry. All are welcome. For more information contact me on firstname.lastname@example.org or phone 01799 599141.
The Revd Paula Griffiths, Priest at Hadstock
Come to the Fair
LINTON Infant School and PTFA would like to invite you to our summer fair. This year it is being held on Friday 16th June between 5 and 7pm. We have food themes and the children are learning about the origins and preparation behind the food on their plates. Along with a BBQ, crafts and stalls there will be live music, a bouncy castle, the wiggle cars and loads more. The event is open to everyone in the village so please do come along and support us.
Julie Calver, 892179, Linton Infant PTFA
They have won again
CONGRATULATIONS to The Crown, Linton for being the winner of the Saffron Walden Lions Club district Easter egg raffle. Last year’s raffle raised nearly £1,000 and £270 for charity. The Crown’s egg was won by Alison Omond-Lewis of High Street, Linton.
The Lions raised a total of £1,200 which will be divided between the Essex air ambulance and St Clare’s Hospice. The Lions wish to thank all the staff at the Crown who supported us and all those who donated.
Dr Derek Lockstone 891931
The Accelerants unmasked
COME to a Band and Beerfest on Saturday 1st July at 7.30pm in Linton Village College main hall. (Wine and soft drinks too).
For more information email email@example.com or go to www.friendsoflvc.org. Tickets are available through Eventbrite at https://www.eventbrite.co.uk/e/the-accelerants-unmasked-tickets-3417181883 and cost £9 plus £1 Eventbrite charge.
Dobbie Keenan
**MALLYON & DONALDSON**
Linton
Specialising in both Modern and Traditional building methods.
Tel: 01223 891267
Mob: 07941 220868
All contracts finished to a high standard. Reliable service. Local references available.
**Acupuncture and Massage in Linton**
Some of the conditions acupuncture can help ...
- sports injury and tension
- back, knee, hip, shoulder and elbow pain, rheumatism
- fertility optimisation, IVF, pregnancy and menstrual
- headache and migraine, stress
- high blood pressure and circulation conditions
- anxiety and addictions
Peter White MSc, MBAcC
Call: 01223 891145 for an appointment or free assessment
Email: firstname.lastname@example.org
The Marsh mail
WE look forward to a range of celebratory events at Linton Village College over the coming months.
Having enjoyed Year 11 Leavers’ Day at the end of May, including farewell messages and a BBQ, at the end of June we have another opportunity to celebrate with our Year 11 students: the Leavers’ Ball.
The Ball on Friday 26th June marks the end of the GCSE examination series and the finale of Year 11’s time at the Village College. It’s a great tradition that provides an opportunity for the year group to enjoy an evening together before embarking on the next stage of their education at different establishments.
The Leavers’ Ball has become a rite of passage, symbolising students’ maturity and independence as they depart from secondary school. This year’s theme is Masquerade and promises to be a special and memorable event. Staff and students’ families gather to welcome the students’ arrival, and the spectacle is somewhat of a community custom.
Thanks in advance to village residents for your patience with unusual modes of transport and local traffic and any noise or inconvenience caused on the evening of 26th June.
The following weekend, on Saturday 1st July, the decorated school hall is the venue for the Friends of LVC’s summer fundraising event: The Accelerants Unmasked – Band & Beerfest. It is a great community event involving live music and great food from Milton Bistro. We hope that this will be a very well-attended evening. Tickets can be purchased via the Friends’ website: www.friendsoflvc.org.
Another date for your diary is 17th September – LVC’s 80th birthday, celebrating 80 years of learning since the college opened in 1937.
Further information is available on the LVC80 Facebook page. It would be great if former students can attend. Please do share the link with any LVC alumni that you are still in touch with.
If anyone is willing to lend memorabilia (photos, records or items of uniform from previous times) we’d love to hear from them via Facebook or email@example.com
Helena Marsh,
Executive Principal
Looking for £1 coins
SINCE receiving, as a child, a silver threepenny piece in place of a sixpence, which I presented to my Mum who added it to her collection, I have always been interested in coins, but I never really looked at what are now the old £1 coins. That was until I saw a picture of 24 of what I believe to be the original 25 designs on the reverse side of the coin.
My daughter and I decided to try to collect a set before they went out of circulation. Hearing that some of the old £1 coins in circulation were fake, we decided to check each one for a clear milled edge and inscription. We understand there are actually seven different inscriptions in total but so far we have found three.
I collected about ten different ones in the weeks before the Guildhall event, after which the treasurer very kindly allowed me to check and replace any of the four hundred that he had. It was then we realised that the four bridge designs just have a wavy pattern without an inscription around the edge. I found about six more for our collection. The inscriptions are shown on the chart. The coin shown at the top has three bent legs interspersed with bells, is only half milled and has no inscription; it is the Millennium Bells design. The bent legs surely indicate the Isle of Man, which I’m told does produce its own UK money. One of the sets has a similar bent leg to the last one shown. London, Cardiff and Edinburgh also have their own designs and inscriptions.
Have a look through your coins and see what you can find.
Photo by Roger Lapwood
Then I found one with a larger ship and ‘Bailiwick of Jersey’ around the top. Beneath the ship, in much smaller print, is ‘Resolute 1877’. Checking the internet I discovered that in 1850 the HMS Resolute was searching the Arctic for the lost Franklin expedition when she got stuck in the ice. The captain was advised to abandon ship.
A year later an American whaling ship discovered her in the Davis Strait. The rescuers repaired the ship, then returned her to England. Three desks were later made from her timbers and one was presented to the US President. This desk can sometimes be seen on television in the Oval Office and was used by President Obama. I don’t know if President Trump still uses it but it is supposed to signify the special relationship between the US and UK. I don’t yet know to what the 1877 refers.
We are still six coins short, including those for London, Cardiff and Edinburgh. Presumably they will become harder to find as the withdrawal date of 15th October draws nearer, but we’ll keep looking.
Kate France
BED & BREAKFAST
Mrs Monica Clarkson
4 Harefield Rise, Linton
Tel: 01223 929288
Quiet modern bungalow
Families welcome
No Smoking
Springfield House
B&B
14/16 Horn Lane,
Linton
River views from bedrooms & guest lounge;
01223 891383
www.springfieldhouselinton.com
SHINE
TRADITIONAL WINDOW CLEANING
EXTERIOR AND INTERIOR WINDOWS, GUTTERS, FASCIAS, CONSERVATORIES AND PATIO CLEANING
TEL. 01223 893529 Mob. 07587 866309
FREE QUOTATIONS
FULL PUBLIC LIABILITY INSURANCE
firstname.lastname@example.org
Cambridgeshire Family Chiropractic Centre
Bespoke massage and chiropractic care for aches and pains
Specific techniques for pregnancy and children
15 years’ experience
Call 07870568548
www.cambridgefamilychiro.co.uk
Granta Medical Practices update
THE Partners of Granta Medical Practices and Barley Practice took the decision to merge the two partnerships and this happened on 27th April. Unfortunately the Linton News was not on their mailing list initially so there are several updates from the partners, some of which will be summarised here.
The merger
THE merger of Sawston and Linton has enabled us to recruit two Emergency Care Practitioners (paramedics), helping us improve care for the young and the elderly in their own homes. Working with South Cambridgeshire District Council, we are about to have patient advocates in the practices to assist patients in getting access to all the community services available. The Mental Health trusts are placing Community Psychiatric Nurses in the practices, giving our patients direct access to these services. These are just a few of the services we will now be able to offer directly rather than referring to other organisations.
Gerard Newnham, Business Manager
Travel advice
IF you are planning to travel abroad please make sure you allow enough time for any vaccinations you may need. You can collect a Travel Risk Assessment Form from reception or download one from our website (www.grantamedicalpractices.co.uk). Fill it in fully and return it to us no later than five weeks before you travel. For additional information you can visit the following websites: http://travel.nhs.uk, www.nathnic.org, www.ice.gov.uk/travel
Correspondence to Granta Medical Practices, London Road, Sawston CB22 3HU www.grantamedicalpractices.co.uk
Granta Medical Practices
Can you please help?
HELPING Hands requires one more driver.
Do you have an hour or two free each week? If so please contact me.
Oriel 0919357 or email@example.com
2523 Linton Air Cadets
IT has been a busy few weeks at 2523 Linton Squadron ATC. As well as our regular activities we have had success with our spring recruitment night with seven new faces joining the squadron.
In the first week of the Easter break a group of our cadets were invited to a station visit at RAF Marham. This included a tour of the active base, a chance to get up close to the RAF’s Tornado aircraft and talks given by the pilots who fly them.
This was followed by the much-anticipated annual Easter camp at Nesscliffe Training Area in Shropshire. Company Commander Tom was one of those Linton cadets lucky enough to get a place.
He tells us: “Easter Camp is broken down into two sections: field craft and adventure training.
“The field craft section had a 12-hour block of military skills training covering movement in the field, hand signals, cooking in the field, shelter building, camouflage and concealment. Then, the following day, came our training exercise, with a scenario of various missions to be completed. This was the highlight of the week.”
“Adventure training consisted of many activities including leadership tasks, orienteering, physical training, climbing, abseiling, paint balling and laser questing. The camp was then finished off with a disco and final parade.”
For anyone who would like to learn more about what we do, come along to one of our meetings. We parade on Monday and Thursday evenings in the Red Zone at Linton Village College. Email firstname.lastname@example.org for details.
Brigid Wright
01229 714521 or 079943873272
Parish Council Matters
Chairman’s annual report
Parish staff
Linton parish council (and indeed any parish council) is highly dependent on the expertise and dedication of the professional staff who support and help guide its workings on a day-to-day basis. This is especially true in a large and growing parish like Linton, with 3,555 residents on the electoral roll alone. Linton parish council has struggled in this respect, having had six full- and part-time council staff in post over the years. Throughout this period the council has been very fortunate to have retained the services of Anne Wood, who worked selflessly in difficult circumstances to ensure that the day-to-day running of the parish council carried on. This meant that the hundreds of queries from residents each year were dealt with efficiently, and key tasks such as the preparation of papers for meetings were meticulously carried out, as was general record keeping and the complete reorganisation of the parish council office. I am pleased to say that Anne has now been confirmed as assistant clerk, a long overdue recognition of her hard work and invaluable contribution to Linton and Linton parish council.
Linton parish council has been equally fortunate in recruiting the shy and retiring ‘local lass’ Kate Wiseman to the role of clerk and also responsible financial officer (RFO), the framework within which the parish council successfully operates. Kate is currently undertaking the professional training required for both roles and has brought a new energy and vigour to the parish council – as well as badly needed new skills in social media and IT. I would also like to mention Chris Filby, our village custodian, who is out in all weathers maintaining our village green.
Membership of Linton parish council
During the same period Linton parish council has lost, through resignation for various reasons, four very knowledgeable and able parish councillors and has managed to recruit one replacement, Cllr Chris Hines, along with some further enquiries from residents. However, at present, Linton parish council still has vacancies for three more parish councillors and would be pleased to discuss the role with any residents who may be interested.
Working groups
If you would like to experience first-hand what the parish council actually does and the level of involvement from councillors, residents are able to be co-opted onto any of the working groups. These are informal, non-decision-making groups which are intended to share ideas on key issues such as traffic, planning, open spaces, allotments and many other matters. Please see the website and speak with the clerk for more information; this really is a great way of getting a taste for how your local council works, without having to make any commitment.
Projects and project management
Staff turnover has meant that some projects had to be put on hold, but the parish council has been extremely fortunate that an RFO was recruited to oversee the finances. Linton parish council also had a substantial backlog of projects from previous years that had been agreed in outline but not started or completed for various reasons; these are at the forefront of the council’s agenda this year and beyond. I am also pleased to say that some of these, such as the road in the centre, have now been ticked off, as have improvements to the area around Horn Lane bridge and major work on Leadwell Meadows.
South Cambridgeshire district council and Cambridgeshire county council
In common with other parish councils in South Cambridgeshire, Linton is dependent on the district and county councils for the provision of many services, and Linton parish council has noted the effects of cuts made to services provided by South Cambridgeshire district council and Cambridgeshire county council – which were never easy to access at the best of times. This has led to widespread frustration at the lack of speedy response to issues such as potholes, the state of the pavements and emerging issues such as blocked or overflowing drains, especially in the Linton and Linton Green areas where there are no sewers. Linton, like all other villages in South Cambridgeshire, is also impacted by the failure of South Cambridgeshire district council to finalise the Local Plan, which means Linton remains vulnerable to speculative planning applications.
Planning
The chair of planning will comment in detail; I will only comment that – as most residents of Linton know – the infrastructure and services available here, be it schools, parking spaces, school places, healthcare, etc, are already at or near capacity, and yet we are besieged by major planning applications.
Financial management
The financial reports are available for inspection and the end of year audit is currently in progress; we again expect a clean bill of health in this critical area thanks to Cllr Graham Potter as chair of finance and Kate Wiseman, our new clerk and RFO. The parish has also been successful in reducing the amount of long-term funds held (these were funds requested previously by former parish councillors for work that, for one reason or another, was not undertaken); this was necessary to forestall the possibility of funds being clawed back.
Priorities
It is inevitable that some necessary long-term projects (especially those related to protecting the unique environment and character of Linton) will always be undertaken; the aim is also to ensure that the residents currently contributing to the precept will see more of the benefits.
This is why it is pleasing to see projects such as the Beacon Trust, summer activities for children and young people, and Village Societies Day being supported by the parish council, along with continued support for village sporting clubs such as LVCC, AZTECS, LGFC and many more. Other priorities include more engagement with residents, schools and local businesses by any means possible. The parish council has also been actively involved in consultation on the proposed development along Back Road, with approximately 200 residents attending, and a much improved online presence, allowing for improved interaction with a new generation, as well as an extended dialogue with schools to understand their issues and challenges. Linton parish council is also seeking to understand the needs and problems faced by local businesses and to explore ways of assisting them, to ensure that Linton remains a thriving centre for shops and businesses.
Lastly, I would like to thank the vice chair, Brian Manley, and the other members of the parish council for their work during the year, and to congratulate all those who stood in the recent local elections in Linton. The parish council looks forward to working with our new district and county councillors, not only on Linton issues but also wider concerns such as the A1307 and the City Deal options, as these all have an effect upon Linton.
Chairman Cllr Paul Poulter
Annual meeting of the parish
MANY thanks to the residents who attended the annual meeting of the parish at the village hall on Monday 15th May 2017. Your support was greatly appreciated.
If there are any issues affecting our village that you would like to raise please contact the parish council office, details shown in box below.
Parish council office
THE What’s On section of our website shows a great many local activities throughout the summer. As you travel about, I hope you will enjoy the views of the verges along roadsides, verges that add so much to the overall vision of the countryside. The species present will vary throughout the seasons and reflect the underlying geology and cutting regime of a particular stretch of verge. For road safety reasons, some verges are close-mown, but these may be sprinkled with the flower heads of Corn Marigold, Dandelion and other species which tend to spread their seeds over greater distances. On larger roads, often only the first metre is cut on a regular basis, allowing taller wild flowers to survive further back.
During early summer our verges are dominated by swathes of the white fluffy flowers of Cow Parsley or the chunkier white flowers of Hogweed. Both are members of the Carrot family and, perhaps surprisingly, thrive on the extra nutrients released by car exhausts. Although many species of insects visit their flowers, their tendency to shade out smaller plants reduces the overall variety of wayside flowers and potential food for other insects. Similarly, increased roadside nutrient levels encourage Docks and Stinging Nettles which also overshadow delicate species including blue-flowering Speedwells.
Splashes of pink are provided by clumps of Red Campion, and purple by Vetches and several species of Geranium. Thistles and scarlet Poppies appear where bare soil is exposed. The presence of a large expanse of Ox-eye Daisy, with its white petals and yellow centre, is usually due to its inclusion in seed mixes sown alongside newer roads.
A number of verges have been identified as Special Roadside Verges or Roadside Nature Reserves. These are recognised for their floristic diversity or for the presence of rare species.
Tricia Moxey, trustee
Across the globe, the Kumon Maths and English Programmes advance students beyond their school level.
Contact your local Instructor for a free assessment.
Linton Study Centre
Karen Tumber 01223 893578
kumon.co.uk
Nina, Carl and Sophie welcome you to
Boyz 2 Men Barber Shop
Opening Hours:
Monday Closed
Tuesday 9:00am - 6pm
Wednesday 9:00am - 8:30pm
Thursday 9:00am - 6pm
Friday 9:00am - 6pm
Saturday 8:00am - 3pm
Fully air conditioned
Late Evening 'til 8:30pm
Traditional Hot towel shave
no need to book an appointment
Special rates for senior citizens Tuesday - Friday only
http://www.boyz2menbarbershop.co.uk/
113 High Street, Linton
Tel: 01223 894481
LINTON PARISH COUNCIL
The Village Hall, Coles Lane, Linton,
Cambridge CB21 4JS Tel: 891001
Clerk to the council - Ms Kathryn Wiseman
Email: email@example.com
Website: www.lintonparishcouncil-pc.gov.uk
Facebook: www.facebook.com/LintonPC
Office open: Monday, Tuesday, Wednesday and Friday 9am – 12noon. Closed on a Thursday.
Or by appointment
Dates for full council meetings:
15th June and 20th July 2017
All meetings held at the Cathodeon Centre commencing at 7.30pm
Property Maintenance
Steve Jackson
firstname.lastname@example.org
Carpeting, Plumbing, Tiling
Door/Window Replacement
Kitchen/Bathroom Refitting
Fencing & Decking
Painting/Decorating
Flat Pack assembly
Aerial Upgrades & installations
Satellite dishes installed
Residential Sales and Lettings
• Local Linton Office
• Cambridge Office
• Specialists: Linton & Local Villages
• 20 Years Experience
Do You Need Help With Your Property?
Call us Now: 01223 891227
www.admiralestates.linton
ABBREVIATED minutes of the Linton parish council (LPC) meeting held at the Cathodeon Centre on Thursday 20th April 2017.
**Present:** Cllr Paul Poulter (chairman); Cllr Brian Manley (vice-chair); Cllr End Bald; Cllr Dr Brian Cox; Cllr Simon Hill; Cllr Merric Mannassi; Cllr Dr Beatrice Ward; Cllr David Champion; Cllr Chris James; Cllr John Potter; Cllr John Smith (Clerk/Parish Councillor); Cllr Helen Batchelor and John Batchelor. Members of the public: four. Apologies for absence: Cllr Amy Smith.
**Open forum for public participation:** A resident advised that they were attending the meeting with an interest in the provision of community defibrillators. The council had been asked if these could be purchased and installed in a public place for residents to use, at a cost of only £1,300.00. The original suggestion had been that these could be installed in adopted phone booths; however, LPC has not adopted any to date, meaning another location would be required. The resident advised that the fire station/police office could be a viable location. With the permission of the chairman this item was brought forward and discussed by council at this meeting. It was noted by council that, whilst supportive, practical considerations remained for some locations in relation to maximising use, statistics on effectiveness, and siting. Council decided to defer this item until the meeting of the 18th May to allow time to research and consider viable locations.
**District and county councillors’ reports:** The district councillors advised that they were aware that there has been an issue with the planning department at South Cambridgeshire district council (SCDC) recently and attributed this to reduced staffing, as many people, particularly within the planning department, are leaving for higher paid roles within the private sector. This has a knock-on effect whereby inexperienced officers take on new responsibilities and face a learning curve, creating delays. It was also advised that once a planning application is approved it cannot be appealed, which means that these decisions cannot be reviewed, with the exception of proven errors within the process, which again would only merit a further delay. The district councillors also mentioned the coming omission sites review and explained that these are SHLAA sites that were not, as such, inserted into the Local Plan. The sites are now being questioned by developers and landowners, who are pushing for review by the inspector. The arguments remain the same for the omission sites as currently submitted. LPC can attend and support this, and SCDC can submit new arguments to the inspector; a meeting is to be held on the 8th June. No county councillor attended this meeting.
**Planning applications:** Update from the meeting with Talgarlag Hovans, regarding the old van centre site, 20 Cambridge Road, originally in relation to planning application S/3076/16/FL which was subsequently withdrawn: Written report submitted.
Update on planning application S/0906/16/OL, up to 50 dwellings and not less than 0.45 hectares for allotments, Horseshoe Road: LPC spoke at the SCDC planning committee meeting, as did the Venerable Alan Clarkson, whom LPC would like to thank for his outstanding speech and presentation. The application was refused, which is great news; however, it is possible that the developer will submit an appeal. It was also noted that compulsory purchase cannot be considered until the planning application processes have been completed.
Planning working group meeting report from the 22nd March 2017: Written report submitted.
Request sent to SCDC regarding information on the statutory consultees in relation to planning application S/0906/17/FL, for up to 95 new dwellings on the agricultural land north east of Back Road: The clerk advised that following discussions with Cambridgeshire county council (CCC) highways and SCDC, it has become apparent that LPC is not automatically supplied with a list of the independent statutory consultees, which would be incredibly useful for LPC and may assist in building a case. The clerk has contacted the case officer and requested that this full list be provided with a copy of each report and recommendation.
**Traffic matters:** Haverhill Town Forum meeting of 20th March: Written report submitted. Noted.
Back Road speeding issues and possible improvement suggestions as nominated by a resident. Council reviewed the suggestions and noted that some items relate to the CCC highways report on the Back Road planning application S/0096/17/OL. These suggestions will be passed to the traffic working group to review as part of their formal response for speed reduction.
Consideration of the proposed Granary Lane project from SCDC: This proposes that Back Road become a main cycle route; however, as the verges are protected and the landscape is challenging to cycle, LPC are unsure how SCDC intend to implement this. Further discussion on the session held regarding extending Market Square and parking issues on the 19th April 2017: Residents attended and submitted ideas; included in these were suggestions of limited parking times during the day and the suggestion to move or reduce the bar.
With the chairman’s agreement and suspension of Standing Order 3 (d) to (k), a resident in attendance addressed the council and advised that, as a resident whose property backs onto Market Square, he would like to propose that he and LPC split the cost of the work on the wall at the division of the land. The resident advised that he has been unable to obtain any documentation confirming that ownership of the wall resides with himself, and as such is proposing to compromise and share the repair costs. Cllr Cox advised that when planning permission was originally sought for this property by the previous owner, the planning application referred to the ownership of the wall.
A1307 Liaison Forum meeting of 6th April update: Written report submitted. Noted. As well as the written report, a verbal report was relayed; in this, concern was raised that £106 thousand has been spent on the work required down behind Kingfisher Walk and St George’s. Council was asked to review the proposal that a bench and two fence panels should be installed opposite the gateway to number 2 Kingfisher Walk, as the lack of a barrier and sheer drop pose a direct health and safety issue. It was noted that this had previously been fenced; however, this had become dilapidated over time and not all of the panels remain. BENCH advised that they cannot meet the cost of the materials from their own funds. On the first element of work, the purchase of tools, this was proposed by Cllr Bald, seconded by Cllr Ward. Agreed. On the second element of work, installation of the fence panels, the chairman agreed to suspend Standing Order 3 (d) to (k) to allow a resident to speak on this topic. The resident advised that she lives in The Woodlands and regularly uses this pathway due to the lack of footpath along part of Back Road, and she expressed her concern as a parent that the corner in question poses a danger, particularly to those with young children on bikes or scooters. The resident also advised that the area is unstable and there is an immediate 15-20 foot drop straight into the river. The chairman reinstated Standing Order 3 (d) to (k) and put the installation of two fence panels, with notification of what material, as this is not council land, to a vote. Cllr Ward proposed and Cllr Hine seconded, with eight in favour and one against. Agreed.
**The Beacon Youth Trust:** Written report submitted. Cllr Ward noted that some comments in the report had disappointed her, and she wished to arrange a meeting with the Beacon Youth Trust to discuss these elements in more detail.
**Consideration of support and participation for Linton Village Day on 1st June:** Proposed by Cllr Bald, seconded by Cllr Manley. Agreed.
**Consideration of correspondence received:** Resident regarding broken rail and posts alongside 32 High Street and Market Square. It was decided that the resident and Cllr Ward are to contact the possible owners of the land, ensure that the exposed gas pipe is protected, and report back to the council.
Connections Bus Project regarding youth work provision for the summer holidays 2017: Cllr Ward proposed, Cllr Hill seconded. Put to a vote with eight in favour and one abstention. Agreed.
St Mary’s Church regarding a Community Peace Garden in the churchyard and request for funding support. The clerk was tasked to confirm the details and review if LPC can support the request legally as this is within the churchyard and not the church itself. Agreed.
Hill Paving Ltd regarding consideration for a cross headstone in the cemetery: Proposed by Cllr Potter, seconded by Cllr Mannassi. Put to a vote with eight in favour and one against. Agreed. Linton village cricket club repayment of balance of grant not used: Noted.
**Note:** Copies of the full minutes, reports and documents referred to above can be inspected at the parish council office. |
The past, present and future of high-speed train axle bearings @ Schaeffler
Axle bearing in High speed train
History
Since 1883
Today
High Speed Trains Timetable
[Chart: evolution of high-speed train generations by technology, year and speed – trains with locomotive (ICEV, ICE1, ICE2), push-pull/EMU (ICE3), and EMU (Velaro E, Velaro Rus, Velaro CN); speed steps of 250, 300 and 380 km/h across 1985, 2000 and 2010, labelled as revolution and evolution.]
All rights reserved to Schaeffler Technologies AG & Co. KG, in particular in case of grant of an IP right.
Push-Pull Trains – ICEV; ICE1 and ICE2
Push Pull Train – ICEV
Maiden voyage: 26th of Nov, 1985
Maximum speed: 317 km/h
Speed record: 406.9 km/h (1st of May, 1988)
Push-Pull Trains – ICEV; ICE1 and ICE2
Push Pull Train – ICE1
In service since June 1991
ICE1 fleet: 60 (12 car) trains
Maximum speed: 280 [km/h]
Maximum service speed: 250 [km/h]
Axle load of power car: 20 [t]
Axle load of passenger car: 16 [t]
Push-Pull Trains – ICEV; ICE1 and ICE2
Push Pull Train – ICE1
Bearing designation:
Z-575615.01.TAROL150/250-B-TVP
Lamellar rings
Polyamide cage
Grease for high speed application
Power car bogie
Push-Pull Trains – ICEV; ICE1 and ICE2
Push Pull Train – ICE2
In service since 1996
ICE2 fleet: 44 (8 car) trains
Maximum speed: 280km/h
Maximum service speed: 250km/h
- One power car and one cab car for each train
- Two half trains may be connected to a long train
Push-Pull Trains – ICEV; ICE1 and ICE2
Push Pull Train – ICE2
Bearing designation:
Z-575615.01.TAROL150/250-B-TVP
Lamellar rings
Polyamide cage
Grease for high speed application
Bearing designation:
F-801420.TAROL130/230-B-TVP
Power car bogie
Every second car has two power bogies with two driven wheel sets.
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – ICE3
In service since 06/2000
High speed track – Frankfurt to Cologne since 08/2002
Maximum speed: 330 km/h
Maximum speed in service: 300 km/h
ICE 3 Fleet:
• 45 (8 car) trains in single system version
• 27 (8 car) trains in multi system version for trans-border service in Europe
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – ICE3 – Bearing
Bearing: 150 × 250, $V_{\text{max}}$: 250 km/h, $T_{\text{air}}$: 20 °C
Mileage of ICE power car 401 568-1: 246 028 km
FAG Bearing designation F-808853.ZL
Due to higher speed the bearing design for ICE3 changed from TRB to CRB
Polyamide cage
sheet metal sealing
Grease for high speed application
Position of temperature pickup
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – Velaro CN
In service since 08/2008
High speed track – Beijing to Tianjin
Maximum speed: 330 km/h
Maximum speed in service: 300 km/h
Velaro CN Fleet:
• 60 (8 car) trains to be built in Germany and in China
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – Velaro CN – Bearing Comparison
CRB F-808853.ZL
edge pressure
TRB F-807811.02.TAROL130/240-B-TVP
no edge pressure
FAG Bearing designation F-807811.02.TAROL130/240-B-TVP
Polyamide cage
sheet metal sealing
Grease for high speed application
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – Velaro CN
Bearing Test according EN12082 – Test equipment
Deutsche Akkreditierungsstelle GmbH
German Accreditation Body
Accreditation
The Deutsche Akkreditierungsstelle GmbH (German Accreditation Body) attests that the Schaeffler Technologies GmbH & Co. KG
Prüffeld Bahn (rail test field)
Georg-Schäfer-Straße 30, 97421 Schweinfurt
is competent under the terms of DIN EN ISO/IEC 17025:2005 to carry out tests in the following field:
Performance testing of bearings - Railway applications
Standard test rig AN55: built according to EMG specifications.
EMU Trains – ICE3; Velaro E; Velaro CN & Velaro Rus
Distributed Traction – Velaro CN
Bearing Test according EN12082 – CRH3 third batch – v=380km/h – 100.000km
Temperatures of performance test
- Velocity: 380 km/h +10%
- dmin: 0.83 m
- Speed: ±2072 rpm
- Radial Load: 94.6 kN
- Axial Load: ±11.5 kN
- Number of Cycles: 73.5
- Mileage (real): 102365 km
- Mileage (Theoretically): 102410 km
- Date: 11.08.2010
[Chart: bearing temperatures (°C) in the load zone and HDB zone, left/right, at 330 km/h, 350 km/h and 380 km/h]
State of art High speed trains – Outboard axlebox
High-Speed-Trains China – "CRH3" (Velaro China)
"outboard axlebox arrangement"
v_max 418 km/h
nxdm 495 000 mm/min
F-807811.09.TAROL130x240-B-TVP
ARCANOL L055/L218
Requirements:
> temp-limits as per EN12082: 90°C at load zone
> major overhaul > 1.4 Mio. km (established)
> online-temp reduction to < 70°C
> environmental temperatures -50°C ...50°C
Evidences:
> performance tests as per EN12082 up to 1,2 Mio. km on test rigs AN55 and AN77
> reference field applications – ICE Middle Coach, AGC
> low-temp-initial-torque tests for L218 and L055
> water-tightness-tests acc. UIC515-5
> inspections during SG in-house overhauling – Schaeffler CARs
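The n×dm speed index quoted for the CRH3 (495 000 mm/min at v_max 418 km/h) follows directly from train speed, wheel diameter and bearing pitch diameter. A quick cross-check, under the assumption that the TAROL130x240's pitch diameter is simply the mean of bore and outer diameter and that the slide's 0.83 m refers to the worn wheel:

```python
import math

def n_dm(speed_kmh, wheel_d_m, bore_mm, od_mm):
    """Bearing speed index n*dm [mm/min]: axle speed in rpm times the
    bearing pitch diameter (mean of bore and outer diameter) in mm."""
    v = speed_kmh / 3.6                      # train speed [m/s]
    n_rpm = v / (math.pi * wheel_d_m) * 60   # axle speed [rpm]
    dm = (bore_mm + od_mm) / 2               # pitch diameter [mm]
    return n_rpm * dm

# CRH3 values from the slide: 418 km/h, 0.83 m worn wheel, TAROL130x240
print(round(n_dm(418, 0.83, 130, 240)))  # ~494,000, matching the quoted 495 000
```

The computed value agrees with the slide's 495 000 mm/min to within rounding, which supports reading 0.83 m as the worn-wheel diameter.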
State of art High speed trains – Inboard axlebox
High-Speed-Trains USA – "Bright Line"
"inboard axlebox arrangement"
v_max. 396 km/h
nxdm 593 000 mm/min
F-620591.ZL
ARCANOL L218
Requirements:
> temp-limits as per EN12082: 90°C at load zone
> major overhaul of axlebox bearing after 1.65 Mio. km resp. 8 years (refurbishment concept)
> Replacement of axlebox bearing after 3Mio.km resp. 10 years ("use-till-scrap" concept)
> aerodynamic bogie-cover + inboard axlebox > reduced windchill impact
> temperature impact on axleboxes from brake discs + traction motor
Evidences:
> performance tests as per EN12082
> water-tightness-tests acc. UIC515-5
> simulation for heat-impact
> new design concept for high-speed-axleboxes
Axle bearing development process
Axle bearing development direction
Established standard approach to determine frictional power losses
Quasi static bearing model for friction calculation:
- Internal geometry
- Speed and loads
- Lubricant properties of the base oil
Dynamic bearing model for friction calculation:
- Internal geometry
- Speed and loads
- Lubricant properties of the base oil
- Dynamic effects
Precise friction measurement:
- Bearing unit only
- Installation case see sketch
- Speed and loads
- Oil lubrication, cooling by lubricating oil
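A quasi-static friction estimate of the kind listed above is often written as a two-term Palmgren-style model: a speed/viscosity-dependent moment plus a load-dependent moment. A minimal sketch, where the factors f0 and f1 and all numeric inputs are illustrative assumptions rather than Schaeffler catalog values:

```python
import math

def friction_power_w(n_rpm, dm_mm, nu_cst, f0, f1, load_n):
    """Two-term (Palmgren-style) frictional power loss estimate.
    M0: speed/viscosity-dependent moment, M1: load-dependent moment."""
    # viscous term [N*mm]; this branch assumes nu*n >= 2000
    m0 = f0 * (nu_cst * n_rpm) ** (2 / 3) * dm_mm ** 3 * 1e-7
    m1 = f1 * load_n * dm_mm                 # load term [N*mm]
    omega = 2 * math.pi * n_rpm / 60         # shaft speed [rad/s]
    return (m0 + m1) * 1e-3 * omega          # N*mm -> N*m, times rad/s = W

# illustrative axlebox case: 2000 rpm, dm = 185 mm, base-oil viscosity 20 cSt
p = friction_power_w(2000, 185, 20, f0=3.0, f1=3e-4, load_n=50_000)
```

With these assumed inputs the estimate lands on the order of 1 kW per axlebox, which is the magnitude that makes the temperature-based validation in the following sections worthwhile.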
Simple approach to estimate average frictional power loss
EN12082 Performance testing:
• Longterm testing with speed and loads acc. to EN12082
• Realistic test conditions: Original housing, grease lubrication, shaft, airstream cooling, etc.
• Temperatures of the bearings
Modeling:
Measured temperatures as result of frictional power loss and heat flow.
Heating up test:
Frictional power loss is simulated by electrical heating. Temperatures are measured in EN12082 set up.
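The modelling step above (measured temperatures as the result of frictional power loss and heat flow) can be illustrated with a first-order lumped heat balance; the convection coefficient and housing surface area below are assumed values, not measured ones:

```python
def steady_temp_c(p_loss_w, h_w_m2k, area_m2, t_air_c):
    """Lumped steady-state axlebox temperature: all frictional heat is
    assumed to leave through convection over the housing surface."""
    return t_air_c + p_loss_w / (h_w_m2k * area_m2)

# e.g. 1 kW loss, h = 60 W/(m^2 K) with airstream cooling, 0.4 m^2 housing
t = steady_temp_c(1000, 60, 0.4, 20)   # ~62 C, under the EN12082 90 C limit
```

Inverting the same balance is what the electrical heating-up test does: impose a known power, measure the temperature, and back out the effective heat-transfer path.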
Comparing test and calculation results
Test machine: AN77-1
- Bearing temperature
- Input power
- Output power loss
Calculation: Bearinx
- Frictional power loss
- Expected bearing temperature
Compare test and calculation
- Consider and compensate for the influencing factors
Compare the recalculated results with the test
Measured data is compared to the calculation result from Bearinx
The reasons for differences between test and calculation are considered
Comparison is repeated after applying the new influences in the calculation
Further steps in determining frictional power loss
The modifications in frictional power calculation made in the simple approach are only valid for the load case of the EN12082 performance test. Next steps should be:
1. Measure bearing temperatures for load cases under real running conditions
2. Directly determine power losses by measuring bearing torque at load cases of real running conditions
3. Improve and validate the calculation model for different load cases and operating conditions
03 Service solution beside Bearings
Railway Condition Monitoring System (RCMS)
Railway vehicle condition monitoring system based on resonance-demodulation technology — comprehensively monitors and diagnoses the working surfaces of key running-gear components such as bearings and gears, and the wheelset tread, providing early warning and accurate localisation of faults
- **Multi-Channel Parallel Processing**, Fast System Response
- **On-line Diagnosis Alarm & Offline Data Analysis**
- Intelligent Diagnosis Algorithm Based on Advanced Analysis Strategy with High Accuracy of Online Fault Diagnosis
- Auto Generating Report of Status Diagnosis, and Upload to Server
- **Expandable System Functions**: bearing remaining-useful-life prediction and full life-cycle management, plus customised functions such as wheel polygonisation fault monitoring, dynamic setting of temperature alarm thresholds, estimation of the size of natural spalling failures, suspension system fault diagnosis, stability and comfort monitoring, instability and derailment detection, and rail corrugation detection.
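Resonance demodulation (envelope analysis) extracts the low-frequency fault modulation riding on a high-frequency structural resonance. A minimal NumPy sketch, with a synthetic signal standing in for a real axlebox measurement:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope (resonance-demodulation) spectrum via an FFT-based
    Hilbert transform: |analytic signal| minus its mean, then rFFT."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0            # keep positive frequencies doubled
    if n % 2 == 0:
        h[n // 2] = 1.0                # Nyquist bin for even-length signals
    env = np.abs(np.fft.ifft(np.fft.fft(x) * h))   # signal envelope
    env -= env.mean()
    return np.fft.rfftfreq(n, 1 / fs), np.abs(np.fft.rfft(env)) / n

# synthetic defect: 3 kHz resonance amplitude-modulated at 120 Hz
fs = 20_000
t = np.arange(fs) / fs                              # 1 s of samples
x = (1 + np.sin(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)
f, amp = envelope_spectrum(x, fs)
peak_hz = f[1:][np.argmax(amp[1:])]                 # dominant line, DC excluded
```

The raw spectrum of `x` shows only the 3 kHz carrier and its sidebands; the envelope spectrum instead shows a clean line at the 120 Hz modulation rate, which is what gets matched against the bearing's characteristic defect frequencies.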
Acoustics and Thermal Monitoring System (ATMS) – Rail Transit Application
Sensor: Recognize vehicle passing and wake up the system
Acoustic Camera:
- **Real-time calculation and analysis of noise signal frequency**: Combined with equipment structure and running state, real-time diagnosis of key components
- **Sound source visualization technology**: Real-time dynamic display of noise source location within the image range
- **Intelligent cloud platform control**: Real-time management and monitoring
AEI sensor: Identification of vehicle number and transmission to industrial computer
Trackside Monitoring System based on Microphone Array
— Identification of abnormal noise and temperature, fault diagnosis
- **High integration and small occupied area**: The system uses a digital microphone array with a compact acoustic acquisition unit; the wheel-signal and vehicle-information acquisition functions are integrated.
- **Spatial filtering can be realized**: The beamforming technology can follow the wheel movement to collect noise signal, which can effectively reduce the influence of environmental noise and improve the signal-to-noise ratio of fault signal.
- **Real-time diagnostic analysis**: The frequency components of running noise can be calculated and analyzed in real time, and the state of key components can be diagnosed.
- **Real-time display of noise source location**: The sound source visualization technology is used to display the position of the noise source in the image range in real time, and different noises can be monitored in real time by referring to the frequency range.
- **Intelligent automatic control of outdoor cabinets**: The rolling shutter door can be automatically controlled according to weather conditions and vehicle operation time, and the fan can be automatically turned on to cool down according to the equipment temperature.
- **Extensible system functions**: The thermal imaging camera can be integrated to monitor the temperature of the running part graphically, and the key parts of the running part can be detected with computer vision technology.
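The beamforming described above — steering the array to follow the wheel — reduces, in its simplest form, to delay-and-sum: compensate each microphone's propagation delay for an assumed source position, then average. A sketch with made-up array geometry and sample rate:

```python
import numpy as np

def delay_and_sum(mic_signals, mic_x, src_x, src_dist, fs, c=343.0):
    """Delay-and-sum beamformer for a linear microphone array: advance each
    channel by its source-to-mic propagation delay, then average."""
    out = np.zeros_like(mic_signals[0], dtype=float)
    for sig, x in zip(mic_signals, mic_x):
        d = np.hypot(src_x - x, src_dist)      # mic-to-source distance [m]
        shift = int(round(d / c * fs))         # delay in whole samples
        out += np.roll(sig, -shift)            # align wavefront arrivals
    return out / len(mic_signals)

# toy check: 4 mics on a line, source at x = 0.15 m, 1 m off the array
fs, mic_x = 48_000, [0.0, 0.1, 0.2, 0.3]
s = np.sin(2 * np.pi * 800 * np.arange(fs) / fs)     # 800 Hz source signal
mics = [np.roll(s, int(round(np.hypot(0.15 - x, 1.0) / 343.0 * fs)))
        for x in mic_x]
aligned = delay_and_sum(mics, mic_x, 0.15, 1.0, fs)  # recovers s
```

Signals steered toward the true position add coherently while off-position noise averages down, which is the spatial filtering the text credits with improving the fault signal-to-noise ratio.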
Rail Condition Detection Car with unpowered detection scheme
— Integrating the profile, wave corrugation, eddy current and rail surface status inspection subsystems on the miniaturized running device with various working modes such as hand pushing, dragging and self-running with convenient and simple operation.
- **Detection items**: Rail profile, wave corrugation, rail cant, surface crack and depth, rail surface visibility defects
- **Detection speed**: 3-15km/h
- **Measurement mode**: Non-contact continuous measurement
- **Profile detection accuracy**: ≤±0.03mm
- **Crack measurement accuracy**: ±0.15mm (depth)
- **Maximum measurable crack depth**: 5mm
- **Defect identification rate**: ≥90%
- **Defect false alarm rate**: ≤5%
- **Image acquisition accuracy**: ≤0.1mm (landscape), ≤0.6mm (portrait)
- **Detection data is automatically recorded**
Customized Case 2: Automatic inspecting for freight wheel and axle
Automatic inspection equipment for freight wheel and axle based on intelligent image recognition system, phased array ultrasonic flaw detection and NVH detection technology
— Comprehensively collecting the information of axle sign board, wheelset profile size, surface state and bearing state and making reasonable judgment on the evaluation of wheel and axle’s maintenance process
- **Sign board information recognition**: The self-learning function of image recognition component is used to read the engraving information on the surface of the sign board, the radio frequency identification component is used to obtain the manufacturing, assembly and maintenance information of the wheel and axle equipped with radio frequency electronic tags.
- **Wheelset dimension information acquisition**: The non-contact laser detection module is used to measure the profile size of two wheels, the inner distance of the wheelset and the diameter of the axle.
- **Wheel surface state inspection**: Image recognition technology is used to conduct algorithm training and adjustment of typical wheel damage or defect topography to achieve accurate recognition and grasp of abnormal features on the wheelset surface. At the same time, probes integrated with multiple ultrasonic sensors are used to evaluate the internal damage of the wheel tread under the clamping action of industrial robots.
- **Bearing axial clearance measurement**: The servo control system is used to automatically measure the bearing axial clearance and read the data.
- **Bearing internal technical status detection**: The sensor is used to collect the noise signal and vibration signal of the rolling bearing, select a specific signal analysis method to extract the time domain and frequency domain characteristics, and fuse the main parameters of vibration and acoustics for judgment, so as to give the bearing quality grade information.
We pioneer motion |
Greetings Bear Fans,
Well, the school year is off to a great start. We hope that all of you had a chance to enjoy your summer with family activities. Thank you to our SGA for organizing a great Freshman/New Student Orientation, which had the largest attendance we’ve ever had. Our fall sports teams have also been working extremely hard preparing for their seasons. #bewarethebear
This year the Activities Office is selling several ticket and sponsorship packages. Sponsorships start at $500. If you are interested in showing your support for the activities of PRHS, please contact my office so that I can arrange a meeting with you to discuss the opportunities. To those who have already become sponsors, please know that we are extremely grateful for your support. We are also offering a variety of ticket packages. Due to its popularity, we are offering the $25.00 All Season Sport Pass to our PRHS students. This pass will get PRHS students into every home game all year long. It’s truly a great deal! We also have the reserved Green Seats in the stadium, which sell for $10.00 each game. Please download the “Palmetto Ridge” app or go to our website for information related to PRHS activities. If you bring in your Stevie Tomato’s receipt, we will upgrade your seat in the stadium to the reserved section if there is availability, with a maximum of two seats per receipt. We are very thankful to Stevie Tomato’s for their continued support of PRHS activities.
Homecoming is fast approaching, and this year we will be playing crosstown rival Naples High School. The game is October 4, starting at 7:00 pm. At halftime we will crown our King and Queen in a formal coronation ceremony. The evening is going to be jam-packed with fun and excitement, and we hope to see you there. One last thing: SGA is hosting a Grocery Cart Parade, and those carts will be on display that evening. To win, they have to fill their carts with nonperishable food items. The cart receiving the most donated food wins, so help them out by bringing in a donated food item.
The second week of school, SGA will host our annual Club Rush, where most of our clubs set up in the gym to show our students the many opportunities to get involved here at PRHS. Statistics show that students who are involved in clubs or athletics generally have a greater opportunity to be successful and graduate on time, which is what our mission is all about. I hope that you encourage your student to get involved.
If your student is interested in getting involved, they should listen to the announcements or stop by the Activities Office for coaches' information. The activities here at the Ridge are in full swing, and we are looking forward to seeing many of you here on campus. "One School, One Family!"
As always, GO BEARS!
Brent Brickzin
PRHS Activities Coordinator
JROTC
The Palmetto Ridge JROTC Program would like to wish everyone a fantastic and productive SY 19-20.
Please follow us on Twitter this year “Palmetto Ridge High School JROTC @RidgeJROTC”, and keep up with all of the amazing community and school support the mighty Bear Battalion cadets will be providing this school year. Hoahhhh!
Go Bears!
Ph: 239-254-9933
WISDOM TEETH EXTRACTIONS • IV SEDATION
DENTAL IMPLANTS • ORAL BIOPSIES
BONE GRAFTING • NITROUS OXIDE SEDATION
NO REFERRAL NECESSARY
MOST INSURANCES ACCEPTED
Complimentary Wisdom Teeth Consultation
(ADA 9310. Must Present Ad. Excludes X-Ray)
SEIN MOE, D.D.S.
Diplomate, American Board of Oral & Maxillofacial Surgery
Fellow, American Association of Oral & Maxillofacial Surgeons
Northwestern University Dental School, 1992
www.collieroralsurgery.com
PRHS Welcome Back!
Student Relations Corner
It is with great excitement that we welcome our students and families back to PRHS for the 2019-2020 school year! With a new year under way, we’d like to convey a few reminders to ensure your child has a successful educational experience at PRHS.
Attendance - Attendance is one of the most vital components of a child’s academic success.
Did you know that missing 2 days of school a month adds up to about 22 academic days a year, which over time is the equivalent of an entire month of school missed? If a student continues this trend of 2 days a month, a month of school missed each year compounds to a full year's worth of missed school over the child's K-12 educational career.
Your student’s presence in class on time, every day is vital for their academic success. Please find below a brief overview of how to document a student’s absence and what our different attendance code language means. When your student is absent, please call: (239) 377-2400 and press 1.
Excused Absences - Excused absences are only documented with a doctor's or other professional's note.
Validated Absences - Validated absences are those in which a parent calls and notifies our school of their child's absence (sick, family matter, travel, etc.). These absences are not considered excused and may impact a child's credits and/or truancy if deemed excessive.
Unexcused Absence - Our school has not been informed of your child's absence. These absences are not considered excused and may impact a child's credits and/or truancy if deemed excessive.
Communication is key! Our office is here to help you and your child maximize their educational experience at PRHS; let us know how we can help you!
PBIS: We are excited to bring back El Primo’s, Einstein’s Bagels, BBQ and much more!!!
PBIS will focus its goals around our PRHS Kindness Campaign and GRIT this year.
More news to come from PBIS in the weeks to follow. Please be sure to stay tuned to our Facebook and Twitter pages for upcoming events, student drawings, teacher/student recognitions and much, much more!
A+ SKILLS TUTORING™
Dedicated to Student Success
No Contracts
No Up-Front Evaluation Fees
All Subjects and Achievement Levels
Addam Cohn
239-254-9807 • firstname.lastname@example.org
www.aplusskillstutoring.com
5625 Strand Blvd., Suite 504
Naples, FL 34110
TUTORING
one-to-one
2/3 to 1 group discounts
SAT/ACT
FSA*EOC
PREP
Hourly Packages Available
THOMAS E. PARENT, M.D.
Board Certified
Fellowship Trained
Orthopaedic Surgeon
Naples Medical Center
• Arthroscopic Surgery
• Hand & Shoulder Surgery
• Sports Medicine
• Stem Cell Therapy
BY APPOINTMENT
400 8th Street North
Naples, Florida 34102
Phone: (239) 649-3313
Fax: (239) 261-4475
www.millenniumphysician.com
“We Can Fix Any Water Problem”
PRO WATER
• RESIDENTIAL
• COMMERCIAL
• INDUSTRIAL
24 HOUR EMERGENCY SERVICE
239-398-6525
Message from Collier County Public Schools
Security Enhancement: No Backpacks Allowed at CCPS Athletic Events
August 29, 2018
In our continuing efforts to add layers of security to our school campuses, backpacks, cinch bags, and other large bags are no longer permitted inside CCPS sporting venues, including high school stadiums and gymnasiums. An exception will be made for medically necessary items after proper inspection. This security enhancement is effective immediately and will impact this week’s games.
This policy is the same as last year. One additional reminder: NO CLEAR BAGS are permitted.
2019 – 2020 Academic Calendar
[Monthly calendar grids for July 2019 through June 2020 appeared here; the grid layout did not survive conversion. Dates noted on the original calendar include:]
- September 30: No School for Teachers or Students
- January 1: New Year's Day (Paid Holiday)
- January 6: Teacher Plan Day; No School for Students
- January 7: Students Return
- January 20: Martin Luther King Day (Paid Holiday); No School for Teachers or Students
- January 26: Report Card Distribution
- February 26: Early Dismissal Day
Mathworks Tutoring specializes in “totally tailored tutoring” (T²) in math, language arts and other subjects. This includes FSA, SAT, ACT, ASVAB, EOC, other exams and honors/advanced classes.
We work to ensure that students:
• learn and understand content material
• organize and think through problems
• develop workable test-taking strategies
• employ effective study skills
Mathworks Tutoring has a cadre of expert tutors who are especially well-versed in providing 1-on-1 support for students at all grade levels.
Barbara G. Levine
Elem/Mid Sch Teacher
Math/Science Certified
H. Michael Mogil
National Science Edu Consultant & Tutor
239-591-2468 • email@example.com
www.weatherworks.com
Pine Ridge
6101 Pine Ridge Rd.
Naples, FL 34119
239-348-4000
Collier Boulevard
8300 Collier Blvd.
Naples, FL 34114
239-354-6000
www.PhysiciansRegional.com
Olde Naples Periodontics
Denise C. Gay
D.D.S., M.D.S. BOARD CERTIFIED
Specializing In Periodontics and Dental Implants
239-261-1401
OLDENAPLESPERIO.COM
77 8th Street South
Naples, Florida 34102
Physicians Regional Healthcare System
Proud Supporter of Palmetto Ridge High School
“Go Bears!”
Tijuana Flats
$5.99 TWO TACOS, CHIPS & DRINK EVERY DAY
1164 Tamiami Tr. • Located in Publix Plaza • TijuanaFlats.com
COMPLIMENTARY PORTFOLIO REVIEW
Darryl E. Young, AAMS
FINANCIAL ADVISOR
firstname.lastname@example.org
770 Tamiami Trail North • Suite 104
Naples, FL 34108
OFFICE (239) 596-7220
MOBILE (239) 784-5223
TOLL FREE (866) 596-7220
FAX (866) 462-8620
www.edwardjones.com
Member SIPC
It’s time to evaluate whether you are on track to meet your financial goals.
Inside This Issue
JROTC
PRHS Welcome Back!
Security Enhancement
Calendar
and more!
ANIMAL MONITORING SYSTEM
TIERÜBERWACHUNGSSYSTEM
SYSTÈME DE SURVEILLANCE D'UN ANIMAL
Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Priority: 19.10.2010 US 455419 P
Date of publication of application: 28.08.2013 Bulletin 2013/35
Proprietor: ST Reproductive Technologies, LLC Navasota, TX 77868 (US)
Inventors:
- RETTEDAL, Nicholas, P.
Greeley
CO 80631 (US)
- WEILNAU, Stephen, M.
Greeley
CO 80634 (US)
- COCKROFT, Scott, R.
Greeley
CO 80631 (US)
- YEAGER, Billy, J.
Gilbert
AZ 85233 (US)
- HORNICK, Jerry, A.
Gold Canyon
AZ 85118 (US)
Representative: Kador & Partner PartG mbB Corneliusstraße 15 80469 München (DE)
References cited:
US-A- 5 818 354 US-A- 5 984 875
US-A- 6 085 751 US-A1- 2004 155 782
US-A1- 2005 134 452 US-A1- 2008 236 500
Note: Within nine months of the publication of the mention of the grant of the European patent in the European Patent Bulletin, any person may give notice to the European Patent Office of opposition to that patent, in accordance with the Implementing Regulations. Notice of opposition shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).
Description
I. TECHNICAL FIELD
[0001] Generally, an animal monitoring device configured as a bolus for oral administration to reside in an animal’s stomach. The bolus has a substantially inert solid body which contains an animal monitoring device. The animal monitoring device includes a radio frequency generator and an animal identification information encoder for outputting animal identification information of the particular animal. The animal monitoring device can further include sensors to detect one or more physiological and non-physiological sensed animal characteristics and a sensed animal characteristic encoder for outputting sensed animal characteristic information. The animal monitoring device intermittently transmits encoded animal identification information and encoded sensed animal characteristic information to a radio frequency reader, which assembles and transmits the encoded information as data packets to a reception device, allowing a specialized computer to display decoded animal identification information and decoded sensed animal characteristic information as numeric values which can be accessed by a user.
II. BACKGROUND
[0002] A variety of animal monitoring devices are in use to remotely track animal location and remotely sense the temperature of animals, such as described in documents US 5 984 875 A and US 2004/155782 A1. Certain of these devices include an orally administered, inserted, or ingested bolus containing microprocessors for processing animal identification information and signals from sensors to provide encoded data representations which can be transmitted by radio-frequency to a radio-frequency receiver. However, certain problems remain unresolved which relate to the structure and function of the bolus electrical circuitry and the transmission of encoded data representations by these conventional animal monitoring devices.
[0003] One problem related to conventional boluses may be that there is no magnet located within the bolus which generates a magnetic field to collect metal materials ingested by the animal, such as wire, nails, screws, tacks, barbed wire, or the like. Alternately, a conventional bolus may contain one or more magnets, but the magnetic field generated may dispose attracted metal elements in an orientation which projects outwardly from the bolus. These projecting metal elements can cause injury to the animal.
[0004] Another problem related to conventional boluses can be that the magnet is located sufficiently close to, or as a part of, the components generating the radio frequency which carries the encoded data representations generated by the microcontroller or processor elements, resulting in loss of encoded data representations during transmission to the radio frequency receiver.
[0005] Another problem related to conventional boluses may be that the mass of the animal in which the bolus is located can alter the frequency of the radio signal, such that the radio signal has a different frequency at the point of transmission than after passing through the mass of the animal. Accordingly, encoded data representations can be intermittently interrupted, or portions or all of the transmitted encoded data representations can be lost.
[0006] As to each of these substantial problems, the animal monitoring system described herein provides a solution.
III. DISCLOSURE OF INVENTION
[0007] Accordingly, a broad object of embodiments of the invention can be to provide a bolus orally administratable for retention in the digestive tract of an animal which contains an animal monitoring device having a structure and a function which improves transmission of encoded animal identification information and encoded sensed animal characteristic information from within an animal to a radiofrequency reader.
[0008] Another broad object of embodiments of the invention can be to provide a bolus which includes one or more magnets disposed to generate one or more magnetic fields having a configuration which attracts metal objects to the external surface of the body of the bolus but avoids disposing such metal objects in outwardly projecting relation to the external surface of the body of the bolus.
[0009] Another broad object of embodiments of the invention can be to provide an animal monitoring device on a printed circuit board which can be sufficiently isolated from the one or more magnets to allow transmission of encoded animal identification information and sensed animal characteristic information without interruption or loss of encoded information.
[0010] Another broad object of the invention can be to provide a network frequency match element which functions as part of the animal monitoring device to compensate for the mass of the animal, such that the radio frequency signal generated by the animal monitoring device antenna located inside the animal can be received by the radio frequency reader antenna located outside of the animal.
[0011] These objects are accomplished by an animal monitoring system according to claim 1. Naturally further objects of the invention are disclosed throughout the detailed description of the preferred embodiments of the invention and the figures.
IV. BRIEF DESCRIPTION OF THE DRAWINGS
[0012]
Figure 1 is a diagram which shows a particular method of using an embodiment of the animal monitoring system.
Figure 2 is a block diagram which shows a particular embodiment of a specialized computer in relation to a particular embodiment of a radio frequency reader and bolus.
Figure 3 is a block diagram which shows a particular embodiment of a radio frequency reader.
Figure 4 is an exploded view of a particular embodiment of the bolus.
Figure 5 is an exploded view of another particular embodiment of the bolus.
Figure 6 is a block diagram of a particular embodiment of the animal monitoring device which can be contained in various embodiments of the bolus.
Figure 7 is a bar graph which compares strength of radio frequency transmission against orientation of magnetic field of a first magnet contained in bolus.
Figure 8 is a bar graph which compares strength of radio frequency transmission against orientation of magnetic field of a first magnet contained in the bolus when magnetically coupled to a second magnet outside of the bolus.
Figure 9 is a bar graph which compares strength of radio frequency transmission with the first magnet contained in the bolus oriented to provide greatest strength of radio frequency transmission as compared to strength of radio frequency transmission with the first magnet contained in the bolus oriented to provide greatest strength of radio frequency transmission with a second magnet outside of the bolus magnetically coupled to the first magnet.
V. MODE(S) FOR CARRYING OUT THE INVENTION
[0013] Now referring primarily to Figures 1 and 2, which illustrate a general computer implemented method of using an animal monitoring system (1) to monitor one or more sensed physiological and non-physiological parameters ("animal characteristics (2)") of an animal (3). A bolus (4) can be orally administered to reside in the reticulum (5) of the animal (3) (although the bolus (4) can be implanted in the animal (3) to reside at other locations). The bolus (4) includes an animal monitoring device (6) (see for example Figures 4 and 5) including one or more sensors (9) which can sense animal characteristics (2). A microcontroller (7) having one or more processors (8) continually or intermittently transforms analog or digital signals from the one or more sensors (9) to generate encoded sensed animal characteristic information (10).
The encoded sensed animal characteristic information (10) varies in relation to monitored change in the sensed animal characteristics (2). The animal monitoring device (6) can further generate encoded animal identification information (11) associated with the individual monitored animal (3). The animal monitoring device (6) can further operate to generate and transmit a radio frequency signal (12) (also referred to as an "RF signal") which can carry the encoded animal identification information (11) and the encoded sensed animal characteristic information (10).
[0014] Again referring primarily to Figures 1 and 2, one or more radio frequency reader(s)(13) can be located to receive the radiofrequency signal (12) carrying the encoded animal identification information (11) and the encoded sensed animal characteristic information (10). As to particular embodiments, the one or more radiofrequency readers (13) can further operate to decode the received radiofrequency signal (12) and generate one or more bit segments (14) representing the encoded animal identification information (11) and representing the encoded sensed animal characteristic information (10)(see for example Figure 3). As to particular embodiments, the one or more radio frequency readers (13) can further operate to assemble the bit segments (14) into a data packet (15) which can be transmitted and received by a wired or wireless reception device (16). The reception device (16) can transfer the data packet (15) to a specialized computer (17) for transforming the bit segments (14) to output an animal identification value (18) and to output a sensed animal characteristic value (19). A computer user (20) can access the sensed animal characteristic value (19) associated with the animal identification value (18)(along with other information encoded by the animal monitoring device (6) or the radio frequency reader (13) or a remote second computer (21)) by use of a specialized computer (17).
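As a rough illustration of the assembly step described above, the sketch below packs an animal identification value and a sensed temperature into a fixed-layout data packet and decodes it back. The field layout, sizes, and checksum are assumptions for illustration only, not the actual packet format of the patented system.

```python
import struct

# Hypothetical packet layout (assumed, not from the patent): big-endian
# 8-byte animal ID, temperature in hundredths of a degree C (uint16),
# and a 1-byte modular checksum.
PACKET_FMT = ">QHB"

def assemble_packet(animal_id: int, temp_c: float) -> bytes:
    """Encode an animal ID and a sensed temperature as one data packet."""
    temp_raw = round(temp_c * 100)
    body = struct.pack(">QH", animal_id, temp_raw)
    checksum = sum(body) & 0xFF  # simple sum-mod-256 checksum (assumed)
    return body + struct.pack(">B", checksum)

def parse_packet(packet: bytes) -> tuple[int, float]:
    """Decode a packet back into (animal_id, temp_c), verifying the checksum."""
    animal_id, temp_raw, checksum = struct.unpack(PACKET_FMT, packet)
    if sum(packet[:-1]) & 0xFF != checksum:
        raise ValueError("checksum mismatch")
    return animal_id, temp_raw / 100.0

pkt = assemble_packet(982000123456789, 38.75)
decoded = parse_packet(pkt)
```

In the system described here, the reader (13) would perform the assembly role and the specialized computer (17) the parsing role, transforming the bit segments (14) into the animal identification value (18) and sensed animal characteristic value (19).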
[0015] Now referring primarily to Figure 2, the specialized computer (17) configured to allow access by the computer user (20) of the sensed animal characteristic values (19) associated with the animal identification value (18) is described herein in terms of functional block components, screen shots, and various process steps. It should be appreciated that such functional blocks may be realized by any number of hardware or software components configured to perform the specified functions. For example, the computer implemented animal management system (1) may employ various integrated circuit components which function without limitation as: memory elements, radio frequency signal modulators, processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
[0016] Similarly, the software elements of the present invention may be implemented with any programming or scripting language such as C, C++, Java, COBOL, assembler, Perl, LabVIEW or any graphical user interface programming language, extensible markup language (XML), Microsoft’s Visual Studio .NET, Visual Basic, or the like, with the various algorithms or Boolean logic being implemented with any combination of data structures, objects, processes, routines or other programming elements. Further, it should be noted that the present invention might employ any number of conventional wired or wireless techniques for data transmission, signaling, data processing, network control, and the like.
[0017] It should be appreciated that the particular computer implementations shown and described herein are illustrative of the invention and its best mode and are not intended to otherwise limit the scope of the present invention in any way. Indeed, for the sake of brevity, conventional data networking, application development and other functional aspects of the systems (and components of the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent exemplary functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in a practical animal monitoring system (1).
[0018] As will be appreciated by one of ordinary skill in the art, the present invention may be embodied in a non-claimed embodiment as a method, a data processing system, a device for data processing, a computer program product, or the like. Accordingly, such an embodiment may take the form of an entirely software embodiment, an entirely hardware embodiment, or an embodiment combining aspects of both software and hardware. Furthermore, such an embodiment may take the form of a computer program product on a computer-readable storage medium having computer-readable program code means embodied in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, ROM, flash RAM, or the like.
[0019] The present invention may be described herein with reference to screen shots, block diagrams and flowchart illustrations of the data encoder-decoder system to describe computer programs, applications, or modules which can be utilized separately or in combination in accordance with various aspects or embodiments of the invention. It will be understood that each functional block of the block diagrams and the flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus implement the functions specified in the flowchart block or blocks.
[0020] These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
[0021] Accordingly, functional blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each functional block of the block diagrams and flowchart illustrations, and combinations of functional blocks in the block diagrams and flowchart illustrations, can be implemented by either special purpose hardware based computer systems which perform the specified functions or steps, or suitable combinations of special purpose hardware and computer instructions.
[0022] Again referring to Figure 2, the computer implemented animal monitoring system (1) can include a specialized computer (17) for receiving, processing and transforming signals from a reception device (16) to generate animal identification values (18) and sensed animal characteristic values (19) accessible by the computer user (20). The specialized computer (17) can include at least one processing unit (22), a memory element (23), and a bus (24) which operably couples components of the computer (17), including, without limitation, the memory element (23) to the processing unit (22). The computer (17) may be a conventional computer, a distributed computer, or any other type of computer which may contain all or a part of the elements described or shown to accomplish the functions described herein; the invention is not so limited. The processing unit (22) can comprise without limitation one central processing unit (CPU), or a plurality of processing units which operate in parallel to process digital information, or a digital signal processor (DSP) plus a host processor, or the like. The bus (24) can be without limitation any of several types of bus configurations such as a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The memory element (23) can without limitation be a read only memory (ROM)(25) or a random access memory (RAM)(26), or both. A basic input/output system (BIOS)(27) containing routines that assist transfer of data between the components of the specialized computer (17), for example during start-up,
can be stored in ROM (25). The computer (17) can further include a hard disk drive (28) for reading from and writing to a hard disk (not shown), a magnetic disk drive (29) for reading from or writing to a removable magnetic disk (30), and an optical disk drive (31) for reading from or writing to a removable optical disk (32) such as a CD ROM or other optical media.
[0023] The hard disk drive (28), magnetic disk drive (29), and optical disk drive (31) and the reception device (16) can be connected to the bus (24) by a hard disk drive interface (33), a magnetic disk drive interface (34), and an optical disk drive interface (35), and a reception device interface (36), respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer (17). It can be appreciated by those skilled in the art that any type of computer-readable media that can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memories (RAMs), read only memories (ROMs), RFID devices or the like, may be used in the exemplary operating environment.
[0024] The computer (17) can further include an operating system (37) and an animal monitoring program (38)(AMP) which as to particular embodiments of the invention can include an animal monitoring device encoder-decoder module (39)(AMD encoder-decoder module) for programming animal identification values (18) to the animal monitoring device (AMD)(6) using an animal monitoring device programmer (40) connected to the bus (24) by an AMD interface (41). The AMD encoder-decoder module (39) can be stored on or in the hard disk, magnetic disk (30), optical disk (32), ROM (25), or RAM (26) of the specialized computer (17); alternately, the functionalities of the AMD encoder-decoder module (39) may be implemented as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or the like.
[0025] The computer user (20) can enter commands and information into the computer (17) through input devices such as a keyboard (42) and a pointing device (43) such as a mouse. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, magnetic strip of a card, or the like. These and other input devices are often connected to the processing unit (22) through a serial port interface (44) that can be coupled to the bus (24), but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A monitor (45) or other type of display device can also be connected to the bus (24) via interfaces such as a video adapter (46), or the like. In addition to the monitor (45), the computer (17) can further include a peripheral output device (51), such as speakers and printers.
[0026] A "click event" occurs when the computer user (20) operates at least one function of the AMP (38) or the animal monitoring device encoder-decoder module (39), or other program or other application function, through an action or the use of a command which for example can include pressing or releasing a left mouse button (47) while a pointer element (48) is located over a control icon (49) displayed on the monitor (45). However, it is not intended that a "click event" be limited to the press and release of the left mouse button (46) while a pointer element (45) is located over a control icon (49). Rather, the term "click event" is intend to broadly encompass any action or command by the computer user (20) through which a function of the operating system (37) or animal monitoring program (38), animal monitoring device encoder-decoder module (39), or other program or application is activated or performed, whether through clickable selection of one or a plurality of control icon(s) (49) or by computer user (20) voice command, keyboard stroke(s), mouse button, touch screen, touch pad, or otherwise. It is further intended that control icons (49) can be configured without limitation as a point, a circle, a triangle, a square (or other geometric configurations or combinations or permutations thereof), or as a checkbox, a drop down list, a menu, or other index containing a plurality of selectable options, an information field which can contain or which allows input of a string of alphanumeric characters such as a street address, zip code, county code, or natural area code, animal identification number or by inputting a latitude/longitude or projected coordinate X and Y, animal pen number, or other notation, script, character, or the like.
[0027] The computer (17) may operate in a networked environment using logical connections (50) to one or a plurality of remote second computers (21). These logical connections (50) can be achieved by a communication device (52) coupled to or a part of the computer (17). Each of the plurality of remote second computers (21) can include a part or all of the elements as included in the specialized computer (17), although only a single box has been illustrated in Figure 2 for the remote second computer (21). The logical connections (50) depicted in Figure 2 can establish a local-area network (LAN) or a wide-area network (WAN). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet (53).
[0028] When used in a LAN-networking environment, the computer (17) can be connected to the local network through a network interface (54). When used in a WAN-networking environment, the computer (17) typically includes a modem (55), or other type of communications device, for establishing communications over the wide area network, such as the Internet (53). The modem (55), which may be internal or external to the specialized computer (17), can be connected to the bus (24) via the serial port interface (44). In a networked environment, the animal monitoring program (38), or portions thereof, may be stored in any one or more of the plurality of remote second computers (21). It is appreciated that the logical connections (50) shown are exemplary and other hardware means and communications means can be utilized for establishing a communications link between the specialized computer (17) and one or more of the plurality of remote second computers (21).
[0029] While the computer means and the network means shown in Figure 2 can be utilized to practice the invention including the best mode, it is not intended that the description of the best mode of the invention or any preferred embodiment of the invention be limiting with respect to the utilization of a wide variety of similar, different, or equivalent computer means or network means to practice embodiments of the invention which include without limitation hand-held devices, such as personal digital assistants or camera/cell phone, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, PLCs, or the like.
[0030] Now referring primarily to Figures 1 and 3, the animal monitoring system (1) can further include one or more radio frequency readers (13)(RF readers). The RF reader (13) can receive a radio-frequency signal (12) from an AMD (6) within a bolus (4) implanted in, retained by, or held in the reticulum (5) of an animal (3). The AMD (6) within the bolus (4) can send encoded animal identification information (11) and encoded sensed animal characteristic information (10) using the radio-frequency signal (12), as above described.
[0031] One non-limiting embodiment of the RF reader (13) as shown in Figures 1 and 3, provides a reader microcontroller (56) which includes a reader processor (57) which controls the functions of a variety of reader processor elements (58) stored in a reader memory element (59) each of which provides a response to events related to receiving the radio-frequency signal (12) from the AMD (6) within the bolus (4) carrying encoded animal identification information (11) and sensed animal characteristic information (10), or receiving reader sensor signals (60) from reader sensors (61) which monitor environmental parameters proximate the RF reader (13) such as ambient temperature; or generating data packets (15) which include all or parts of such information, or sending data packets (15) to the computer (17) or a remote second computer (21) for access by a computer user (20). A reader microcontroller (56) suitable for use with embodiments of the RF reader (13) can be obtained from Microchip Technology, Inc., 2355 West Chandler Blvd., Chandler, Arizona, Part No. PIC18F4620-I/PT, or similar or equivalent components can be suitable as a reader microcontroller (56) programmable to perform the above-described functions of the RF reader (13).
[0032] Again referring primarily to Figure 3, a reader antenna (62) can receive encoded animal identification information (11) and encoded sensed animal characteristic information (10) and other information generated by operation of the AMD (6) within the bolus (4) within an animal (3). The reader antenna (62) can be tuned to the radio-frequency signal (12) generated by the AMD (6) by a reader matching network element (63). A reader receiver (64)(or transceiver) can be controlled by a first reader processor element (65) to convert the radio-frequency signal (12) received by the reader antenna (62) from analog to digital baseband signals.
[0033] Again referring primarily to Figure 3, the reader sensor (61) can take the form of an ambient temperature sensor (66) which can be located to sense the ambient temperature (67) surrounding the RF reader (13). The ambient temperature sensor (66) can take the form of a thermistor. A suitable thermistor for use in embodiments of the RF reader (13) is available from Microchip Technology, Inc., 2355 West Chandler Blvd., Chandler, Arizona, Part No. MCP98242, and similar and equivalent parts. The ambient temperature sensor (66) can be operated under the control of a second reader processor element (68) which functions to regulate power to the ambient temperature sensor (66) and to convert the reader sensor signal (60) from the ambient temperature sensor (66) into a digital representation of the ambient temperature (67). The second reader processor element (68) can further function to encode or re-encode from time to time an amount of reader temperature calibration data (70) which allows calculation and output of an ambient temperature value (71).
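As an illustrative sketch only, the conversion described in paragraph [0033] — a raw sensor reading combined with stored calibration data (70) to yield an ambient temperature value (71) — could be modeled as below. The two-point linear calibration scheme, the function name, and all numeric constants are assumptions for illustration; they are not specified by this disclosure.

```python
# Hypothetical sketch: mapping a raw reading from the ambient temperature
# sensor (66) to an ambient temperature value (71) using calibration data (70).
# The two-point linear model and example constants are assumptions.

def ambient_temperature_value(raw_count, cal):
    """Map a raw ADC count to degrees C via two calibration points
    given as ((count0, temp0_c), (count1, temp1_c))."""
    (c0, t0), (c1, t1) = cal
    # Linear interpolation between the two calibration points.
    return t0 + (raw_count - c0) * (t1 - t0) / (c1 - c0)

# Example calibration: 512 counts -> 0 C, 768 counts -> 25 C (assumed values).
cal = ((512, 0.0), (768, 25.0))
print(ambient_temperature_value(640, cal))  # midpoint of the range -> 12.5
```

In practice the calibration constants would be re-encoded "from time to time" as the paragraph describes, which this sketch represents simply by passing a fresh `cal` tuple.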
[0034] Again referring primarily to Figure 3, a clock element (72) can operate under the control of a third reader processor element (73) which functions to generate a date and time signal (74) that represents a date and time value (75).
[0035] Again referring primarily to Figure 3, a fourth reader processor element (76) can function to assemble data packets (15) which include a representation of the ambient temperature value (71) and the date and time value (75) at which the information from the AMD (6) was received by the RF reader (13). The assembled data packet (15) can be stored in and retrieved from the reader memory element (59) under the control of the fourth reader processor element (76).
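A data packet (15) of the kind paragraphs [0031] through [0035] describe bundles animal identification, a sensed characteristic value, the ambient temperature value (71), and the date and time value (75). The following is a minimal sketch of such an assembly step; the field layout, field widths, byte order, and fixed-point scaling are illustrative assumptions, as the specification does not define a packet format.

```python
# Hypothetical sketch of assembling a data packet (15): animal identification,
# sensed characteristic value, ambient temperature value (71), and date/time
# value (75). The binary layout below is an assumption for illustration.
import struct
import time

def assemble_data_packet(animal_id, sensed_temp_c, ambient_temp_c, epoch=None):
    """Pack the reader's values into a fixed-layout binary record."""
    if epoch is None:
        epoch = int(time.time())
    # >: big-endian; I: 32-bit animal id; h, h: temperatures in 0.01 C
    # fixed-point units; I: 32-bit epoch timestamp (date and time value).
    return struct.pack(">IhhI", animal_id,
                       round(sensed_temp_c * 100),
                       round(ambient_temp_c * 100),
                       epoch)

packet = assemble_data_packet(1042, 38.6, 21.5, epoch=1700000000)
# The computer (17) would unpack with the same format string:
fields = struct.unpack(">IhhI", packet)
```

A remote second computer requesting packets over the reader's network interface would apply the matching `struct.unpack` call, as shown, to recover the four values.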
[0036] Again referring primarily to Figure 3, a fifth reader processor element (77) can function to provide an Ethernet interface (78) for an Ethernet controller (79) to receive requests from the computer (17) or remote computer (21) and retrieve from the reader memory element (59) one or more data packets (15) containing information relating to one or a plurality of animals (3) which entrain a bolus (4) with an AMD (6). The fifth reader processor element (77) can further function to send the retrieved data packets (15) to the Ethernet controller (79) for transmission to the computer (17).
[0037] Now referring primarily to Figures 4 through 6, embodiments of the animal monitoring system (1) include an inert bolus (4) orally administrable to an animal (or implantable in an animal) (3) containing the AMD (6) which includes one or more of a microcontroller (7), one or more processors (8), at least one sensor (9), and a radio frequency generator (81) including one or more of an oscillator (80), a radio frequency stabilizer (82), an antenna (83), and a power source (84) which operate to generate the radio frequency signal (12). Depending on
the embodiment, a first magnet (85) (see example shown in Figure 5) or a pair of magnets (94) (see example shown in Figure 4) can be further included in the inert bolus (4). Certain configurations of the bolus (4) can be orally administered to ruminant animals (3), such as cows, deer, and sheep, and be retained in a part of the stomach, such as the reticulum (5), as shown in Figure 1; although in non-claimed embodiments the bolus (4) can be implanted or be otherwise affixed to an animal (3).
[0038] Embodiments of the bolus (4) which are orally administered to an animal (3) can provide an inert bolus body (86) having external dimensional relations adapted to allow oral administration and retention of the bolus (4) in a part of the stomach, such as the reticulum (5) of a particular species of animal (3). As one non-limiting example, the inert bolus body (86) can include an amount of cured plastic resin (87) cast about the animal monitoring device (6) and as to particular embodiments about the pair of magnets (94) or the first magnet (85) along with any spacers. The amount of cured plastic resin (87) can for example comprise a plastic resin such as urethane resin, epoxy resin, polyester resin, or the like used in accordance with the manufacturer’s instructions. As to other embodiments, the inert bolus body (86) can comprise a sealable container (88) which defines a hollow inside space (89) which receives said animal monitoring device (6) and said first magnet (85). As to other embodiments, the sealable container having the animal monitoring device (6) received in the hollow space (89) (and as to particular embodiments further including the first magnet (85) received in the hollow space) can have the amount of plastic resin (87) cast about the animal monitoring device (6) and the first magnet (85) located within said sealable container (88).
[0039] As one illustrative example, a bolus (4) suitable for oral administration to an animal (3) can be generally cylindrical with a diameter in perpendicular cross section in the range of about one-half inch to about one inch and having a length disposed between a first bolus end (90) and a second bolus end (91) in the range of about two inches and about five inches. Particular embodiments of the bolus (4) can have a length of about three and one-half inches and a diameter in perpendicular cross section of about three-quarters of an inch. While the Figures show the bolus (4) in the constructional form of a cylinder with end caps; the invention is not so limited, and the bolus (4) can have numerous and varied external surface configurations which allow oral administration and retention within the reticulum (5) (or other part of the digestive tract) of an animal (3). Typically, retention of the bolus (4) in a part of a stomach or retention by way of implant will be for all or a substantial portion of the life of the animal (3). The inert bolus body (86) can be molded, cast, or machined from biocompatible (or biologically inert) non-magnetic materials which allow transmission of the radio frequency signal (12) from within the bolus (4) to outside of the animal (3). As examples, the inert bolus body can be made from plastics such as nylon, fluorocarbon, polypropylene, polycarbonate, urethane, epoxy, polyethylene, or the like; or metals such as stainless steel; or other materials such as glass can be utilized.
[0040] The hollow inside space (89) inside of the inert bolus body (86) can be of sufficient volume to house one or more of the microcontroller (7), the sensor (9), the oscillator (80), the radio frequency stabilizer (82), the antenna (83), and the power source (84) along with the associated circuitry. Now referring primarily to Figure 4, as to certain embodiments of the bolus body (86), the hollow inside space (89) can have sufficient volume to further house non-conductive insulators (92) and non-conductive spacers (93) to establish a particular distance between a pair of magnets (94), while as to embodiments of the invention similar to that shown in Figure 5, the hollow inside space (89) can have sufficient volume to further house a first magnet (85). As to embodiments of the bolus (4) as shown in Figures 4 and 5 or similar embodiments, the hollow inside space (89) can be configured as a cylindrical volume having a diameter of between about three-eighths of an inch and about five-eighths of an inch and a length disposed between the first bolus end (90) and the second bolus end (91) of between about two inches and about four inches. A particular non-limiting embodiment of the hollow inside space (89) can be about one-half inch in diameter and have a length of about three inches.
[0041] As to those embodiments of the bolus (4) including a sealable container (88), as above described, the sealable container (88) can further provide at least one end cap (95) removably sealable with a first bolus end (90) or a second bolus end (91) or both ends (90)(91) of the bolus (4) to allow access to the hollow inside space (89) for location of the various components of the animal monitoring device (6). As to certain embodiments of the invention, the bolus (4) can take the form of a closed end tube having one end cap (95) or a cylindrical tube having an end cap (95) fitted to each of the first bolus end (90) and the second bolus end (91). The end cap(s) (95) can also take the form of a plug sealably inserted into one or both ends of the sealable container (88), as shown in Figures 4 and 5. Alternately, the end cap (95) and the bolus (4) can provide rotatably matable spiral threads. Additionally, the end cap (95) can take the form of a permanent seal to one or both ends of the sealable container (88) of the bolus (4), such as a castable polymer which cures to seal one or both ends of the bolus (4). The bolus (4) can also take the form of matable halves (whether longitudinal or latitudinal) which can avoid the use of end caps (95).
[0042] The bolus (4) having a hollow inside space (89) can be generated by a wide variety of procedures such as molding, casting, fabrication or the like. As one non-limiting example, a cylindrical tube having an external diameter and an internal diameter, as above described, can be divided into sections of suitable length to which the end caps can be fitted. Alternately, a bore can be made in a cylindrical solid rod having an external diameter, as above described, to provide a closed end tube with the bore having sufficient dimension to provide the hollow inside space (89). An end cap (95) or seal can be fitted to the open end of the closed end tube.
[0043] Now referring primarily to Figures 4 through 6, a printed circuit board (96) can be utilized to mechanically support and electrically connect the microcontroller (7), the sensor (9), the oscillator (80), the radio frequency stabilizer (82), and the antenna (83). The printed circuit board (96) can be configured as a disk having a circular boundary (97) and a thickness disposed between two generally planar surfaces (98)(99). The disk shaped printed circuit board (96) can be disposed with the planar surfaces (98)(99) in substantially perpendicular relation to a longitudinal axis (100) of the hollow inside space (89) when configured as a cylindrical volume, as shown in Figures 4 or 5; however, the invention is not so limited, and the components can be mounted on any suitable supporting surface in any configuration or arrangement which allows the components to function as further described below.
[0044] Again referring primarily to Figure 6, a block diagram represents the various integrated circuit components of the animal monitoring device (6) which function as processing elements, memory elements, logic elements, look-up tables, or the like, to carry out a variety of functions under the control of one or more microprocessors or other control devices, as further described below. In the particular embodiments of the invention shown in Figures 4 through 6, the microcontroller (7) can take the form of a small computer on one or more integrated circuits having one or more processors (8) which control the functions of a variety of processing elements (101) stored in a programmable memory element (102) each of which provides a response to events related to the surveillance, identification, and measurement of values in relation to an individual animal (3) or other object. A microcontroller (7) as available from Microchip Technology, Inc., 2355 West Chandler Blvd., Chandler, Arizona, Part Nos. PIC18LF14K22 or PIC18LF15K22, or similar or equivalent components, can be suitable for use with embodiments of the animal monitoring device (6).
[0045] A first processor element (103) can function to encode and continuously or intermittently output an amount of encoded animal identification information (11) which can represent an animal identification value (18) such as a bolus identification number (104), an animal identification value (105), or other value which associates information received from a bolus (4) with a particular animal (3) or object.
[0046] A second processor element (106) can function to intermittently encode and output an amount of encoded sensed animal characteristic information (10) representing a sensed animal characteristic (2) of an animal (3) or object. For the purposes of this invention, an animal characteristic (2) of an animal (3) or object can include any one or more physiological characteristics of the animal (3) such as temperature, pH, heart rate, blood pressure, partial pressures of dissolved gases, or the like; or a non-physiological parameter such as animal location, animal tilt, humidity, or the like. The second processor element (106) can in part function to receive analog signals or digital signals from a sensor (9) configured to sense a particular animal characteristic (2). As non-limiting examples, the sensor (9) (or sensors) can be an omnidirectional tilt and vibration sensor (PN SQ-SEN-200) distributed by Signal Quest Precision Microsensors; a thermistor (PN 1K20G3) distributed by BetaTHERM Sensors; a humidity sensor (PN HCZ-D5) distributed by Ghitron Technology Co., Ltd.; an ultra miniature pressure transducer (PN COQ-062) distributed by Kulite; or a proximity sensor (PN PY3-AN-3) distributed by Automation Direct.com.
[0047] Variation of the sensed animal characteristic(s) (2) can be continuously or intermittently updated by encoding or re-encoding a digital representation of the signal generated by the sensor (9). The second processor element (106) can further function to encode or re-encode from time to time an amount of calibration data (128) which allows calculation and output of a sensed animal characteristic value (19) of the animal (3). As to the particular embodiment of the invention shown in Figures 4 and 5, the second processor element (106) can receive and encode signals received from a thermistor (a type of resistor whose resistance varies with change in temperature). A suitable thermistor for use in embodiments of the invention is available from Microchip Technology, Inc., 2355 West Chandler Blvd., Chandler, Arizona, Part No. MCP98242, and similar and equivalent parts.
[0048] A third processor element (107) functions to control the oscillator (80) to generate a stable radio frequency signal (12). An oscillator (80) suitable for use with the invention is available from Freescale Semiconductor, Part Nos. MC1319x, MC1320x, MC1321x, and MC1322x, and similar or equivalent parts. The third processor element (107) can further function to control a radio frequency stabilizer (82) which functions to offset oscillator (80) wave flux caused by changes in temperature or power to the oscillator (80). A frequency stabilizer (82) suitable for use with the invention is available from Hope Microelectronics Co., Ltd, Part No. HF433E, or RF Monolithics, Inc., Part No. RF1172C, and similar or equivalent parts. In regard to the particular embodiment of the invention shown in Figures 4 and 5, the oscillator (80) and frequency stabilizer (82) can generate a radio frequency signal (12) stable between about 410 MHz and about 440 MHz. A particular embodiment of the invention generates a radio frequency signal (12) of about 433 MHz to be received by the RF reader (13).
[0049] A fourth processor element (108) functions to control a network frequency match element (109). The network frequency match element (109) can include capacitors and resistors in combination to deliver a particular radio frequency signal (12) to the antenna (83) under the conditions of the method utilized (for example, the method above described). As a non-limiting example, the network frequency match element (109) can detune a 433 MHz radio frequency signal (12) to generate a signal of between about 418 MHz and about 425 MHz. The detuned signal can compensate for demodulation of the radio frequency signal (12) due to interaction with the mass of the animal (3). The degree of demodulation can be substantially consistent and repeatable from animal (3) to animal (3). Accordingly, the network frequency match element (109) can be configured to compensate for the signal demodulation due to the mass of the animal (3) such that the radio frequency signal (12) transmitted outside of the mass of the animal (3) can be at about 433 MHz (or other selected frequency).
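The compensation idea in paragraph [0049] reduces to simple arithmetic: if passing through the animal's mass shifts the signal by a repeatable offset, the match element should deliver the target frequency minus that offset. The sketch below illustrates only this relationship; the function name and the example offset value are assumptions, not figures from the specification.

```python
# Hypothetical sketch of the detuning compensation in paragraph [0049]:
# transmit at the target frequency minus the repeatable shift introduced
# by the mass of the animal (3). The 10 MHz offset is an assumed example.

def detuned_transmit_frequency(target_mhz, body_shift_mhz):
    """Frequency the network frequency match element (109) should deliver
    so the signal received outside the animal lands at target_mhz."""
    return target_mhz - body_shift_mhz

# Example: a repeatable +10 MHz shift implies delivering 423 MHz, inside
# the roughly 418-425 MHz detuned range the specification mentions.
print(detuned_transmit_frequency(433.0, 10.0))  # -> 423.0
```

The value of the scheme rests on the paragraph's observation that the demodulation is "substantially consistent and repeatable from animal to animal," so a single fixed offset suffices.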
[0050] As to particular embodiments, the antenna (83) can be imprinted on the printed circuit board (96) proximate the circular boundary (97) to provide an antenna (83) of generally partial circular configuration having a length of about 37 millimeters and a width of about 1 millimeter (see for example Figures 4 and 5). The antenna (83) operates to transmit the radio frequency signal (12) at the wavelengths above described. An advantage of this configuration of antenna (83) can be that it does not require winding upon or interaction with the magnetic field (110) of the first magnet (85) or one or both of a pair of magnets (94) (or any magnet) to transmit a radio frequency signal (12). Accordingly, this configuration of antenna (83) can provide a lesser amount of interference from the magnetic field (110) of the one or more magnets (85)(94) contained in the bolus (4), resulting in a lower incidence of loss of the radio frequency signal (12) and less modulation of the radio frequency signal (12), which results in greater consistency (or a lesser amount of lost data) in transmission of animal identification information (11) and sensed animal characteristic information (10).
[0051] Again referring to Figures 4 and 5, the bolus (4) can further include a power source (84) located within the hollow inside space (89). The power source (84) shown in Figures 4 and 5 takes the form of a battery (111) such as a AA battery, a AAA battery, or the like. The battery (111) can be inserted or stacked within the hollow inside space (89) proximate the printed circuit board (96). A non-conductive insulator (112) can be disposed between the printed circuit board (96) and the power source (84). The power source (84) provides power to the electronic components supported on the printed circuit board (96). A first battery lead (113) connects the positive battery terminal (109) of the printed circuit board (96) to the positive pole (114) of the battery (111) (or power source) and a second battery lead (115) connects the negative battery terminal (116) of the printed circuit board (96) to the negative pole (116) of the battery (111) (or power source).
[0052] Now referring primarily to Figure 4, in particular embodiments of the invention a first non-conductive spacer (117) can be disposed in the hollow inside space (89) of the bolus (4) adjacent to the printed circuit board (96) and a second non-conductive spacer (118) can be disposed in the hollow inside space (89) of the bolus (4) adjacent the battery (111). A first of the pair of magnets (94) can be disposed adjacent the first non-conductive spacer (117) and a second of the pair of magnets (94) can be disposed adjacent the second non-conductive spacer (118). The first of the pair of magnets (94) and the second of the pair of magnets (94) can be configured as magnetic disks or cylinders each having a pair of opposed circular faces disposed a distance apart by the thickness of the magnet. By providing a pair of magnets (94) disposed a distance apart, a first magnetic field (119) generated by the first of the pair of magnets (94) and a second magnetic field (120) generated by the second of the pair of magnets (94) can attractingly interact with metal objects (121), such as coins, washers, wire, nails, tacks, barbs from barbed wire, or the like, ingested by the animal (3) to magnetically engage these metal objects (121) with the external surface of the bolus (4) such that the metal objects (121) generally align with the longitudinal axis (100) of the bolus (4). For example, substantially the entire length of the metal object (121) can lie against the external surface of the bolus (4), as shown in Figure 4, as opposed to projecting outwardly from the external surface of the bolus (4). Depending upon the configuration of the external surface of the bolus (4), the size, power, and distance separating the first of the pair of magnets (94) and the second of the pair of magnets (94) can be adjusted to correspondingly adjust the interaction of the first magnetic field (119) and the second magnetic field (120) to act on metal objects (121), as above described.
For example, in the embodiment of the invention shown in Figure 4, either the particular configuration of the first of the pair of magnets (94) and the second of a pair of magnets (94)(dimensional relations and power) or the particular configuration of the first non-conductive spacer (117) and the second non-conductive spacer (118) can be adjusted to allow metal objects (121) to interact with the external surface of the bolus (4). A second advantage of providing a pair of magnets (94) disposed a distance apart, can be that the printed circuit board (96) can be located between, and a sufficient distance from, either of the pair of magnets (94) to reduce interference with the transmission of the radio frequency signal (12).
[0053] Again referring primarily to Figure 4, the printed circuit board (96) supporting the electronic components, the non-conductive insulator (112), the non-conductive spacers (117)(118), and the pair of magnets (94) can be overwrapped with a non-conductive wrap element (122) to allow the several elements to be moved as a single piece. As one non-limiting example, the non-conductive wrap element (122) can comprise a plastic tube shrinkable in dimension by application of heat to conform to the external surface of the components aligned as above described. Accordingly, the overwrapped elements can be inserted into the hollow inside space (89) as a single piece and the at least one end cap (95) can be sealably engaged with the first bolus end (90) or second bolus end (91) of the
bolus (4). The non-conductive wrap element (122) can have one or more apertures (123). An amount of plastic resin (87), as above described, can flow through the one or more apertures to be cast about the components of the animal monitoring device (6).
[0054] Now referring primarily to Figure 5, other embodiments of the invention can have a construction as above described and shown in Figure 4 with the exception of the form and placement of the pair of magnets (94). In the embodiment shown in Figure 5, the pair of magnets (94) and their corresponding magnetic fields (119)(120) along with the non-conductive spacers (117)(118) can be replaced by a first magnet (85) placed adjacent the animal monitoring device (6) and, as to those embodiments having a non-conductive wrap element (122), located outside of the non-conductive wrap element (122). The animal monitoring device (6) along with the first magnet (85) can be located inside of the inert bolus body (86) whether within an amount of plastic resin (87) or within a sealable container (88) (whether or not the sealable container (88) is also filled with plastic resin (87)). As to particular embodiments, the first magnet (85) can have first and second opposed magnetic faces (123)(124) defining a south pole and a north pole, with the first magnetic face (123) (as to the embodiment shown, the south pole) disposed in inward facing relation to the animal monitoring device (6) and the second magnetic face (124) (as to the embodiment shown, the north pole) disposed in outward facing relation to said animal monitoring device (6). As to certain preferred embodiments, the first magnet (85) can have a generally rectangular shape having four sides (125) defining the area of the first magnet face (123) (south pole) and the second magnet face (124) (north pole) disposed in substantially parallel opposed relation a distance apart with the first face (123) (south pole) disposed in inward facing relation to the animal monitoring device (6).
[0055] Now referring primarily to Figure 7, a bar graph plots the strength of the radio frequency signal (12) against the orientation of the first magnet (85) in relation to the animal monitoring device (6) located within the inert bolus body (86) (as described for embodiments similar to that shown in Figure 5). Importantly, the orientation of the first magnet (85) in relation to the animal monitoring device (6) can result in a substantial difference in the strength of the received radio frequency signal (12) outside of the bolus (4). Placement of the first magnet (85) with the second magnetic face (124) (north pole) facing outward in relation to the animal monitoring device (6) (north pole designated as "north up" in Figure 7) increases the strength of the received radio frequency signal (12) from the animal monitoring device (6) outside of the bolus (4) as compared to having the first magnetic face (123) (south pole) facing outward in relation to the animal monitoring device (6) (south pole designated as "south up" in Figure 7). Depending upon the type and kind of the first magnet (85), the method in accordance with embodiments of the invention defines the first magnetic face (123) as the magnetic face which, in inward facing relation to the animal monitoring device (6), increases the strength of the radio frequency signal (12) received at the radio frequency reader (13). The first magnetic face (123) may define the south pole as described; however, the invention is not so limited, and the first magnetic face (123) may also define the north pole of the first magnet (85), the method selecting the first magnetic face (123) as that face which, in inward facing relation to the animal monitoring device (6), produces the greater strength of the radio frequency signal (12) outside of the bolus (4).
[0056] Additionally, having placed the first magnetic face (123) (south pole) facing inwardly to increase the strength of the received radio frequency signal (12), the first magnet (85) can be rotated through 180 degrees to find the orientation which further increases the strength of the radio frequency signal (12) outside of the bolus (4). As shown by Figure 7, the first magnet (85) having the first magnetic face (123) (south pole) facing inwardly in relation to the animal monitoring device (6) and the elongate body of the first magnet (85) substantially aligned with the longitudinal axis (100) of the animal monitoring device (6) is oriented at zero degrees of rotation in relation to the longitudinal axis (100) (as shown in Figure 5). As to this embodiment of the invention, this orientation can produce a substantially increased strength of the received radio frequency signal (12) outside of the bolus (4) as compared to having the opposed ends (126)(127) oriented at 180 degrees of rotation in relation to the longitudinal axis (100) (not shown).
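The selection procedure of paragraphs [0055] and [0056] — try each candidate magnet face and rotation, measure the received signal strength, and keep the strongest — can be sketched as follows. The orientation labels and the RSSI figures in the example are illustrative assumptions, not measurements from the specification.

```python
# Hypothetical sketch of the orientation-selection method in paragraphs
# [0055]-[0056]: among measured candidate orientations of the first magnet
# (85), keep the one giving the strongest received radio frequency signal
# (12). The orientation labels and dBm values below are assumed examples.

def select_orientation(rssi_by_orientation):
    """Return the (face, rotation_degrees) key with the strongest RSSI."""
    return max(rssi_by_orientation, key=rssi_by_orientation.get)

measurements = {
    ("south_in", 0): -62.0,    # first face inward, aligned with axis (100)
    ("south_in", 180): -70.5,  # first face inward, rotated 180 degrees
    ("north_in", 0): -78.0,    # opposite face inward
}
print(select_orientation(measurements))  # -> ('south_in', 0)
```

This mirrors the disclosure's two-step search: first choose the inward-facing face that maximizes signal strength, then rotate through 180 degrees about the longitudinal axis (100) to refine the choice.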
[0057] Now referring primarily to Figure 8, embodiments of the invention can further include a second magnet (130) having a location outside of the bolus (4). The second magnet (130) can be orally administered to an animal (3) in similar fashion to the bolus (4). The second magnet (130) can comprise a conventional magnet orally administered to animals (3) to magnetically capture metal objects (121) within the rumen of the animal (3). Particular embodiments of the second magnet (130) can have dimensional relations the same or similar to the first magnet (85) located inside the inert bolus body (86). Interestingly, as shown in Figure 8, magnetic coupling of the second magnet (130) to the first magnet (85) within the bolus (4) can increase the strength of the radio frequency signal (12) outside of the bolus (4), regardless of orientation of the first magnet (85) within the bolus (4), even though the first magnet face (123) (south pole) inwardly facing and in zero degree relation to the longitudinal axis (100) of the animal monitoring device (6) already had the greatest strength of radio frequency signal (12) outside of the bolus (4) (shown as "north up" in Figure 7).
[0058] The results set out in the example shown by Figures 7 and 8 were achieved by submerging the bolus (4) of the embodiment shown in Figure 5, and as above described, in an amount of saline solution prepared by dissolving about 27 grams of sodium chloride per liter of water. The bolus (4) submerged in the saline solution was placed about 25 feet from the RF reader (13) to approximate receiving a signal from a bolus (4) within the rumen of a ruminant animal (3) at 75 feet. The bolus (4) between trials was unaltered, except for the orientation of the first magnet (85) in relation to the animal monitoring device (6) contained inside the inert bolus body (86). The first magnet (85) was disposed in a first trial with the north face facing outwardly from the animal monitoring device (6), and in a second trial with the south face facing outwardly from the animal monitoring device (6). The designation of the first magnetic face (123) of the first magnet (85) was defined by the magnetic face which, when facing inwardly, generates the greatest radio frequency signal (12) received by the RF reader (13). Accordingly, as to the particular embodiment of the invention shown in Figure 5, the south face of the first magnet (85) faces inwardly toward the animal monitoring device (6) and defines the first magnetic face (123), while the north pole of the first magnet (85) faces outwardly in relation to the animal monitoring device (6) and defines the second magnetic face (124). The first face (123) being defined by the south pole of the first magnet (85), a third trial was conducted in which the first magnet (85) was rotated 180 degrees in relation to the longitudinal axis (100) of the animal monitoring device (6), in reversed relation to the zero degree position. The strength of the radio frequency signal (12) received by the RF reader (13) was determined with the first magnet placed in zero degree or 180 degree relation to the animal monitoring device (6).
The results of the trials are set out in the bar graph shown in Figure 7.
[0059] The results set out in the example shown by Figure 8 were achieved by submerging the bolus (4) of the embodiment shown in Figure 5, and above described, in an amount of saline solution prepared by dissolving about 27 grams of sodium chloride per liter of water. The bolus (4) submerged in the saline solution was placed about 25 feet from the RF reader (13) to approximate receiving a signal from a bolus (4) within the rumen of a ruminant animal (3). As to each trial shown in Figure 7, and described above, an additional trial was conducted by submerging a second magnet (130) in the saline solution in which the bolus (4) containing the first magnet (85) was submerged. In each trial, the second magnet (130) was allowed to magnetically couple to the first magnet (85) and the strength of the radio frequency signal (12) was determined. The results are summarized in the bar graph shown in Figure 8. Interestingly, as shown by Figure 9, magnetic coupling of the second magnet (130) with the first magnet (85) increased the strength of the radio frequency signal (12).
[0060] The radio frequency signal (12) strength was calculated based on the reads gathered by the RF reader (13) during a period of 15 minutes, multiplied by the signal to noise ratio, to produce an RF value utilized to compare the strength of the radio frequency signal. As one illustrative example, for a particular bolus, if the reads are 2 during the 15 minute period and the signal to noise ratio is 90.7, then the RF value is 181.4.
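The RF value calculation described in paragraph [0060] can be expressed as a short Python sketch (an illustrative reconstruction only; the function and variable names are not from the patent):

```python
def rf_value(read_count: int, signal_to_noise: float) -> float:
    """Compute the RF value used in paragraph [0060] to compare
    signal strength: the number of reads gathered by the RF reader
    over the 15 minute window, multiplied by the signal to noise
    ratio."""
    return read_count * signal_to_noise

# Worked example from paragraph [0060]: 2 reads at a signal to
# noise ratio of 90.7 produce an RF value of 181.4.
print(rf_value(2, 90.7))  # → 181.4
```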
[0061] As can be easily understood from the foregoing, the basic concepts of the present invention may be embodied in a variety of ways within the scope of the appended claims. The invention involves numerous and varied embodiments of the animal monitoring system, including the best mode.
[0062] As such, the particular embodiments or elements of the invention disclosed by the description or shown in the figures or tables accompanying this application are not intended to be limiting, but rather exemplary of the numerous and varied embodiments generically encompassed by the invention or equivalents encompassed with respect to any particular element thereof. In addition, the specific description of a single embodiment or element of the invention may not explicitly describe all embodiments or elements possible; many alternatives are implicitly disclosed by the description and figures.
[0063] It should be understood that each element of an apparatus or each step of a method may be described by an apparatus term or method term. Such terms can be substituted where desired to make explicit the implicitly broad coverage to which this invention is entitled. As but one example, it should be understood that all steps of a method may be disclosed as an action, a means for taking that action, or as an element which causes that action. Similarly, each element of an apparatus may be disclosed as the physical element or the action which that physical element facilitates. As but one example, the disclosure of "an animal monitor" should be understood to encompass disclosure of the act of "monitoring an animal" -- whether explicitly discussed or not -- and, conversely, were there effectively disclosure of the act of "monitoring an animal", such a disclosure should be understood to encompass disclosure of "an animal monitor" and even a "means for animal monitoring." Such alternative terms for each element or step are to be understood to be explicitly included in the description.
[0064] In addition, as to each term used, it should be understood that unless its utilization in this application is inconsistent with such interpretation, common dictionary definitions should be understood to be included in the description for each term as contained in the Random House Webster's Unabridged Dictionary, second edition, each definition hereby incorporated by reference.
[0065] Moreover, for the purposes of the present invention, the term "a" or "an" entity refers to one or more of that entity; for example, "a memory element" refers to one or more memory elements. As such, the terms "a" or "an", "one or more" and "at least one" can be used interchangeably herein. Furthermore, a compound "selected from the group consisting of" refers to one or more of the elements in the list that follows, including combinations of two or more of the elements.
[0066] All numeric values herein are assumed to be modified by the term "about", whether or not explicitly indicated. For the purposes of the present invention, ranges may be expressed as from "about" one particular value to "about" another particular value. When such a
range is expressed, another embodiment includes from the one particular value to the other particular value. The recitation of numerical ranges by endpoints includes all the numeric values subsumed within that range. A numerical range of one to five includes for example the numeric values 1, 1.5, 2, 2.75, 3, 3.80, 4, 5, and so forth. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. When a value is expressed as an approximation by use of the antecedent "about," it will be understood that the particular value forms another embodiment. The term "about" generally refers to a range of numeric values that one of skill in the art would consider equivalent to the recited numeric value or having the same function or result.
[0067] Moreover, for the purposes of the present invention, the term "a" or "an" entity refers to one or more of that entity unless otherwise limited. As such, the terms "a" or "an", "one or more" and "at least one" can be used interchangeably herein.
[0068] Thus, the applicant(s) should be understood to claim at least: i) each of the animal monitoring devices herein disclosed and described, ii) the related methods disclosed and described, iii) similar, equivalent, and even implicit variations of each of these devices and methods, iv) those alternative embodiments which accomplish each of the functions shown, disclosed, or described, v) those alternative designs and methods which accomplish each of the functions shown as are implicit to accomplish that which is disclosed and described, vi) each feature, component, and step shown as separate and independent inventions, vii) the applications enhanced by the various systems or components disclosed, viii) the resulting products produced by such systems or components, ix) methods and apparatuses substantially as described hereinbefore and with reference to any of the accompanying examples, x) the various combinations and permutations of each of the previous elements disclosed.
[0069] The background section of this patent application provides a statement of the field of endeavor to which the invention pertains. This section may also incorporate or contain paraphrasing of certain United States patents, patent applications, publications, or subject matter of the claimed invention useful in relating information, problems, or concerns about the state of technology to which the invention is drawn toward. It is not intended that any United States patent, patent application, publication, statement or other information cited or incorporated herein be interpreted, construed or deemed to be admitted as prior art with respect to the invention.
[0070] The claims set forth in this specification, if any, are hereby incorporated by reference as part of this description of the invention, and the applicant expressly reserves the right to use all of or a portion of such incorporated content of such claims as additional description to support any of or all of the claims or any element or component thereof, and the applicant further expressly reserves the right to move any portion of or all of the incorporated content of such claims or any element or component thereof from the description into the claims or vice-versa as necessary to define the matter for which protection is sought by this application or by any subsequent application or continuation, division, or continuation-in-part application thereof, or to obtain any benefit of, reduction in fees pursuant to, or to comply with the patent laws, rules, or regulations of any country or treaty, and such content incorporated by reference shall survive during the entire pendency of this application including any subsequent continuation, division, or continuation-in-part application thereof or any reissue or extension thereon.
Claims
1. An animal monitoring system (1), comprising:
a) an inert bolus body (86) adapted to allow oral administration to a ruminant animal (3);
b) an animal monitoring device (6) having a location inside said inert bolus body, including:
i) at least one sensor (9) which generates a signal which varies in relation to change in a sensed animal characteristic (2);
ii) a sensor signal encoder (106) which encodes said signal generated by said at least one sensor as encoded sensed animal characteristic information;
iii) a radio frequency signal generator (81) which generates a radio frequency signal capable of carrying said encoded sensed animal characteristic information;
iv) an antenna (83) which transmits said radio frequency signal; and
v) a power source (84) which supplies power to said animal monitoring device;
and
c) a first magnet (85) having a location inside said inert bolus body, wherein said radio frequency generator comprises at least an oscillator (80) which generates said radio frequency signal, the system further comprising a radio frequency stabilizer (82) which operates to maintain said radio frequency signal within a radio frequency range, characterized in the said system further comprises a network frequency match element (109) which compensates for demodulation of said radio frequency signal passing through the mass of said ruminant animal.
2. The animal monitoring system of claim 1, wherein said first magnet has a pair of opposed faces defining a north pole and a south pole, said south pole disposed in inward facing relation to said animal monitoring device, said north pole disposed in outward facing relation to said animal monitoring device.
3. The animal monitoring system of claim 2, wherein said first magnet has a generally rectangular shape having four sides defining the area of a first magnet face and a second magnet face disposed in substantially parallel opposed relation a distance apart, said first face disposed in inward facing relation to said animal monitoring device, said first face defining said south pole.
4. The animal monitoring system of claim 1, wherein one or more said sensed animal characteristics is selected from the group consisting of: temperature, pH, heart rate, blood pressure, and partial pressures of dissolved gases.
5. The animal monitoring system of claim 4, wherein one or more said sensor is selected from the group consisting of a tilt sensor, a vibration sensor, temperature sensor, a blood pressure sensor, a dissolved gases sensor, a pH sensor, and a heart rate sensor.
6. The animal monitoring system of claim 1, further comprising an animal identification information encoder which encodes animal identification information associated with said sensed animal characteristic as encoded animal identification information.
7. The animal monitoring system of claim 1, wherein said radio frequency stabilizer maintains said radio frequency signal in the range of about 410 MHz to about 440 MHz.
8. The animal monitoring system of claim 1, further comprising a microcontroller which controls one or more of said sensor signal encoder, said animal identification information encoder, said radio frequency signal generator.
9. The animal monitoring system of claim 8, further comprising a printed circuit board which supports and electrically connects one or more of said microcontroller, said sensor signal encoder, said animal identification information encoder, said radio frequency signal generator, and said antenna.
10. The animal monitoring system of claim 9, wherein said printed circuit board has a circular boundary and said antenna comprises an imprinted antenna having a generally circular configuration disposed proximate said circular boundary of said printed circuit board.
11. The animal monitoring system of claim 1, further comprising a second magnet adapted to allow oral administration to a ruminant animal separate from said inert bolus body containing said animal monitoring device and said first magnet.
12. The animal monitoring system of claim 11, wherein said second magnet magnetically coupled to said first magnet increases transmission of said radio frequency signal capable of carrying said encoded animal identification information and said encoded sensed animal characteristic information.
EP 2 629 602 B1
FIG. 1
FIG. 3 — Reader block diagram: input power, voltage regulator, battery charger, battery, display lights, Ethernet controller, microcontroller, computer network, temperature sensor, memory, real time clock, transceiver, matching network, RF reader antenna.
FIG. 5
FIG. 6 — Animal monitoring device block diagram: frequency stabilizer, oscillator (80), radio frequency signal generator (81), antenna (83), matching network (109), radio frequency signal (12), elements 104-108, 102, 8, 7, 11, 10, and temperature sensors (9).
FIG. 7 — Bar graph of RF value without the second magnet for the first magnet orientations: north up 0°, north up reversed 180°, south up 0°, south up reversed 180°.
FIG. 8 — Bar graph of RF value with the second magnet for the same four orientations.
FIG. 9 — Bar graph comparing RF value without and with the second magnet (north up 0°).
Dear {{voornaam}},
This is the Scientists4Future NL newsletter of June 2020, this month coordinated by S4F Team Nijmegen. Please consider forwarding this newsletter to a friend or colleague. If this email has been forwarded to you and you'd like to join our mailing list, please click here. To unsubscribe, use the link at the bottom of this email.
This newsletter:
- Editorial Scientists4Future Nijmegen
- We stand united against racism and injustice
- Survey: how can we best reach you?
- KNMI finds regional differences in drought trends in the Netherlands
- European Commission consults the public on EU2030 climate ambitions
- Get those tiles out of your garden!
- Out of your comfort zone: Wikiversity is looking for your input
- Truly interdisciplinary: RE-PEAT festival
- Webinars on sea level research in the Netherlands
- Cycling For Climate: discover our new coastline
- MIT launches climate primer website
- Climate outreach during COVID-19
- Upcoming events
- Get involved!
---
**Editorial Scientists4Future Nijmegen**
Dear readers,
We're halfway through June, leaving the summer's prelude and, with it, the breeding season for many species of birds behind us. We're also entering the year's driest months again. In the Netherlands, the Royal Dutch Institute for Meteorology (KNMI) has already registered record-high droughts in some parts of the country, and water company Vitens has asked its customers to cut down on water use. Of course, now that we're all forced to work from home, domestic water usage has increased significantly and will continue to put stress on the potable water supply for months to come. In fact, in our changing climate, droughts are becoming more frequent and have an increasing impact on our daily life. We highlight recent findings of the KNMI in this newsletter.
In the middle of the #BlackLivesMatter demonstrations, we realize more than ever that striving for true equality is an extremely multifaceted problem. And right now, the growing demand for and inadequate distribution of access to potable water is adding to this inequality. Today, this inadequacy is often perceived as an issue restricted to low-GDP countries, an inadequacy that already contradicts a fundamental human right: access to clean water for everyone. However, as our climate changes, affluent countries may soon find themselves facing similar challenges.
Such messages give us the incentive to rethink our attitude towards potable water. Should we be flushing our toilets, nourishing our gardens, and washing our dishes with pristine drinking water? And what are the alternatives? How do we make the change to fair and more sustainable water management? If you want to start small, look into the plethora of rainwater collection systems for home use. You can find them cheap, expensive, simple, fancy, filtered, or unfiltered but most importantly, they provide you with an alternative to wasteful use of drinking water.
Happy water saving!
S4F Team Nijmegen
---
**We stand united against racism and injustice**
In the midst of ongoing racism and at a time of deep division in society, the Scientists4Future NL community reaffirms its commitment to diversity, equity and inclusivity in the sciences, and in actively creating and supporting a community that reflects this commitment.
We take this moment to emphasise that "climate change will amplify existing risks and create new risks for natural and human systems. Risks are unevenly distributed and are generally greater for disadvantaged people and communities in countries at all levels of development" [1]. We stress that racism is an institutional construct that continues to lurk within our university institutions [2]. (View the full statement)
---
**Survey: how can we best reach you?**
After the recent Online Climate Strike, S4F-NL wondered why so few of our fellow scientists joined us in this action. Lack of time? No interest? Not enough information? Hence, we designed a survey with which we would like to learn more about how to best engage with our followers, i.e. the scientific community. The results will allow us to streamline our efforts in informing you about our activities. We invite you to take 5 minutes of your time to answer our questions. Take me to the survey.
---
**KNMI finds regional differences in drought trends in the Netherlands**
In a recent article by the KNMI and University of Utrecht, researchers found regional differences in drought trends in the Netherlands. We asked the first author Sjoukje Philip to comment on these findings:
Dry summers such as the one in 2018 now happen more often in the inland Netherlands than in the past, due to climate change. This is the conclusion that emerged from recent research [1] jointly conducted by the KNMI and Utrecht University. In the Netherlands, drought tendency is officially monitored at the KNMI by assessing the balance of precipitation and (potential) evaporation over the months April to September. In this study, Apr-Sep averaged trends in precipitation, potential evaporation, and temperature observations are analysed, and, where possible, an attribution to climate change was performed, comparing climate conditions around 1950 to now. New in this study is that the Netherlands is divided into two regions: the coastal region and the inland region. In earlier research the climate change signal in drought had been concealed by country-wide averaging and internal cancellation of signals, although trends in precipitation were known to be different in these regions [2,3].
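The drought indicator described above, the balance of precipitation and potential evaporation over April to September, can be sketched in Python. This is a minimal illustration with invented monthly figures; the KNMI's operational computation is considerably more involved:

```python
# Sketch of the Apr-Sep precipitation deficit used as a drought
# indicator: cumulative potential evaporation minus cumulative
# precipitation over April through September. Positive values
# indicate a drier season. The monthly values below are made up
# for illustration only.
monthly_precip_mm = [35, 50, 60, 70, 65, 60]      # Apr..Sep precipitation
monthly_pot_evap_mm = [70, 95, 105, 110, 95, 60]  # Apr..Sep potential evaporation

deficit_mm = sum(monthly_pot_evap_mm) - sum(monthly_precip_mm)
print(f"Apr-Sep precipitation deficit: {deficit_mm} mm")  # → 195 mm
```

Tracking the trend in this deficit separately per region is what allowed the study to reveal the inland drying signal that country-wide averaging had concealed.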
Accordingly, the new study finds a trend towards more Apr-Sep precipitation in the coastal region but no significant trend inland. As can be expected, temperature and potential evaporation are increasing in both regions; however, these trends are stronger in the inland region. Both lack of precipitation and increased evaporation play a role in drought, such that in the inland region there is a trend towards Apr-Sep averaged drying, driven by trends in temperature and potential evaporation, whereas in the coastal region no significant drying trend is evident because of increasing summer precipitation.
Climate models indicate that the inland trends in temperature and potential evaporation can be attributed to climate change, although models show a lower trend in temperature than observations. For the inland region this means that we can formally at least partially attribute the trend towards more drought to climate change. In the coastal area, climate models do not reproduce a trend towards more precipitation and are thus incompatible with observations, with the implication that no formal attribution statement can be made.
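The Apr-Sep balance of precipitation and potential evaporation that KNMI uses as its drought indicator can be sketched in a few lines. This is a minimal illustration with made-up monthly totals; the function name and the sign convention (potential evaporation minus precipitation, so a positive value means drying) are our assumptions for illustration, not KNMI code.

```python
# Sketch of the Apr-Sep drought balance described above:
# accumulate potential evaporation (E_pot) minus precipitation (P)
# over the months April through September.
APR_SEP = range(4, 10)  # month numbers 4 (April) .. 9 (September)

def apr_sep_deficit(monthly_p_mm, monthly_epot_mm):
    """Cumulative (E_pot - P) over April-September, in mm.

    Both arguments map month number (1-12) to a monthly total in mm.
    A positive result indicates a tendency toward drought.
    """
    return sum(monthly_epot_mm[m] - monthly_p_mm[m] for m in APR_SEP)

# Illustrative (invented) values for one summer half-year:
p = dict(zip(APR_SEP, [40, 55, 60, 70, 65, 75]))      # precipitation, mm
epot = dict(zip(APR_SEP, [60, 90, 100, 105, 90, 60])) # potential evaporation, mm
print(apr_sep_deficit(p, epot))  # prints 140 (mm of accumulated deficit)
```

Comparing trends in this quantity separately for coastal and inland stations is what lets the study detect the inland drying signal that nationwide averaging had concealed.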
---
**European Commission consults the public on EU2030 climate ambitions**
The European Commission is carrying out an online public consultation, inviting stakeholders and citizens to express their view on the EU 2030 climate ambition increase and on the action and policy design necessary for deeper greenhouse gas emission reductions. The information is gathered via the online form but one can also submit concise position papers, policy briefs, sectoral roadmaps, or studies. The online consultation will be open until June 23rd, 2020 (more information).
---
**Get those tiles out of your garden!**
"Get those tiles out of your garden!" With this cry, the Rotterdam municipality introduced a financial support program for residents to remove tiles from their gardens. Currently, the average garden in the Netherlands is packed with tiles. They are low-maintenance, but not very sustainable. During summer these paved areas absorb significantly more heat; compared to gardens with a lot of vegetation, the difference can be up to 7°C. Secondly, during downpours, which are expected to occur more often in the future, insufficient water drainage leads to regular flooding of streets and basements. This green subsidy can provide a proper solution to both of these major issues. Several other municipalities have also initiated subsidy programs to make your garden a greener place. Do you want to see whether your municipality participates? Check this website.
---
**Out of your comfort zone: Wikiversity is looking for your input**
Are you interested in "super wicked problems"? Are you an expert in a discipline related to the environmental crisis, such as psychology, economics, or otherwise? The Wikiversity.org community is looking for input on, and testers of, its 'problem analysis' templates. The idea is to collaboratively draft environmental emergency plans which not only combat the effects of problems but also propose fundamental changes. The templates invite users to state the causes of a problem (down to the root cause), to formulate and reconsider the goal to be reached if the problem were solved, and to propose and discuss measures to reach that goal (template information).
---
**Truly interdisciplinary: RE-PEAT festival**
RE-PEAT is a new youth-led project that draws attention to the ecological and cultural value of peatlands. Re-Peat is collating an anthology of individual stories, drawings, poems and images that describe personal connections to peatlands to be sent to the Members of European Parliament before the upcoming Common Agricultural Policy (CAP) meeting later this summer. Currently, the CAP offers subsidies to farmers who practise drainage-based agriculture but does not provide subsidies for farmers who want to grow wet crops on restored peat.
Recently, Re-Peat held an interdisciplinary 24-hr global online peat festival. Amazing peatland art practices, photographs, scientific & policy discussions, field demonstrations, and live reports from peat bogs from every inhabited continent made this a wonderfully mesmerising experience. If you missed it, recordings will be available online. We are looking forward to a re-peat!
Re-Peat website: re-peat.earth
Peat-fest recordings: https://re-peat.mn.co/feed
Anthology submissions: firstname.lastname@example.org
Contact for more info: email@example.com
---
**Webinars on sea level research in the Netherlands**
The TU Delft Climate Institute is organizing a series of webinars about the ongoing sea level research in the Netherlands. The webinars target (under-)graduate students, fellow scientists, and research institutes, as well as anyone interested with a scientific and/or technical background. The talks will be held via Zoom. Presentations will last for about 30 minutes and are followed by another 30 minutes of public discussion (more information).
---
**Cycling For Climate: discover our new coastline**
According to the predictions of the Dutch Meteorological Institute, the sea level will rise anywhere between 26 and 83 cm, with consequences for the Netherlands that may significantly shift our coastline. To draw attention to these consequences of climate change, the participants of Cycling For Climate will ride along an imaginary new coastline of almost 400 km in one day. An impressive feat to showcase the strength of cooperation, it will take place on the 22nd of June (more information).
---
**MIT launches climate primer website**
The Massachusetts Institute of Technology (MIT) has launched a new climate primer website that acknowledges the uncertainty in our projections, engages in a discussion on risk and risk management linked to that uncertainty, and concludes by presenting different options for taking action. It even contains a quiz to test your own knowledge. Basing the website on facts rather than politics, the authors express their hope that it will inspire others to take action (more information).
---
**Climate outreach during COVID-19**
Climate advocates should consciously think about how the corona crisis changes climate communication, both in terms of timing and sensitivity. What does the evidence say? Climate Outreach summarizes its key UK findings in a 10-minute guide (climate outreach webinar).
---
**Upcoming events**
- **June 19, 2020** - Utrecht University webinar: Tim Lenton (University of Exeter), "Tipping points"
- **June 22, 2020** - Cycling For Climate Classic (more information)
- **June 25, 2020** - Sea level research webinar: David Steffelbauer (TU Delft) "Detecting non-stationary sea level trends"
- **June 29, 30, 2020** - Astronomy for Future: development, global citizenship, and climate change (more information)
- **July 9, 2020** - Sea level research webinar: Tim Hermans (NIOZ) "Ocean dynamical downscaling for regional sea level rise projections" (more information)
---
**Local groups**
Currently, there are local groups active in Amsterdam, Delft, Nijmegen, and Utrecht. If you wish to get involved (or start your own local group), contact us and we'll get back to you shortly.
Finally, check out our website, or follow us on Facebook, Instagram, or Twitter, where we'll be sharing national and international news regarding the role of scientists in times of the climate crisis.
REFLECTION
Today we reflect on the first and most important responsibility of discipleship, namely, evangelisation, the goal of which is the proclamation of the reign of God. God seems to choose the most unlikely people to preach to others. What matters is not who brings the good news, but who receives it. Jonah the prophet was sent to outsiders, even enemies. The disciples were fishermen who spoke to the people of their own country. God’s salvation is intended for all, and it seems to make little difference who brings this good news.
Today’s readings call for repentance. The grace of God requires a new way of living, a life of faith and commitment. The gospel invites us into the age of fulfilment, a salvific reign of truth, compassion and kindness. It is a way of life that leads to justice.
There is an urgency in these readings. This world in its present form is passing away, and God’s call demands a total response. Like the disciples, we must leave the familiarity of our former ways and follow the call that we have heard in the depths of our hearts. We are called first to enter the reign of God and then to spread it. As ambassadors of God, we bring the good news of salvation wherever we are and in whatever we do. Called by God, we now begin to live in a totally different way, guided by the values of the reign of God rather than those of the world that is passing away.
© Dianne Bergant CSA
SACRED HEART SCHOOL commences this Wednesday 27th January. A very warm welcome to staff and students. We hope 2021 holds many wonderful opportunities and blessings throughout the school year. Opening School Mass will take place this Friday 29th January at 9.00am in the Amare Centre [School Hall]. All welcome.
Dear fellow parishioners, primary school families will know me, as will the Riverview Church community - my name is Veronica.
As I reflect today, I realise that my life has been steered and changed by my involvement in our Parish. Just in this new year, I have discovered a podcast called “The Bible in a Year” (created by American Catholic Priest, Fr Mike Schmitz) which I would like to share with you, my fellow parishioners (more about this later in this letter).
Learning about *the faith* has been a strong motivation for me to be involved in this parish. I can trace the ways I have learnt back to when I began my teaching career as a Year 5 teacher at our primary school in 1982. Teaching religion has been very rewarding, as explaining bible messages to small children enables the kernel of truth found within Catholic teachings to be discerned and summarised. Talking out loud to a class of students reinforced my beliefs. I missed this practical and hands-on sharing of faith when I finished classroom teaching, but was drawn back to assist with Children’s Liturgy (at the Riverview church, as my own children belonged there), and then we started up the Catholic Friendship Club (CFC), which has involved many families from the primary school. This is a fortnightly activity for our students with faith formation tied to fun and food.
I would like to point out two other significant ways that our parish has been instrumental in fostering my faith on a personal level. The first goes back roughly 20 years when we had the opportunity to form small study groups using the Little Rock programme. I remember being amazed by what I learnt (with other young mothers) by following a weekly format for “The Acts of the Apostles” with the commentary attached. The other significant contribution is the annual Lenten prayer groups. Each year we have the same readings for Lent; though the same gospel, there is a different focus each year and a new way for understanding these readings. Thank you to those involved in these two wonderful formation activities.
And now, as I go about my day I try to incorporate faith based podcasts into my morning walks… I have enjoyed and still listen to the 10 minutes a day “Pray as You Go” App, created by the Jesuit community and just recently “The Bible in a Year” App which uses a helpful rearrangement of the books of the bible to make the reading of the bible a unified story by combining the different parts of the bible as they fit into 14 narratives… So, the gospels are not left to the end but are interspersed into other books as *the bible story* is told. I have just begun (up to day 7) and so far Genesis has been interspersed with Psalms and chapters from Job and the Proverbs.
Maybe, this idea for reading the bible appeals to you…. give it a go and see what you think.
God bless, Veronica.
---
**DIARY DATES**
| Tuesday | 26 Jan | 9.30am | NO Craft Group |
**MASS FOR AUSTRALIA DAY WILL BE CELEBRATED THIS TUESDAY**
26th JANUARY AT 9.00AM.
The usual 9.00am Mass on Monday will not be held.
**GROUP 1** - As there are five Sundays this month, please note that Group 1 will be rostered on for next weekend, 30/31 January.
This weekend we welcome into the Catholic Church through the Sacrament of Baptism Eva Johns & Alanna Wood.
**ONLINE BOOKING FOR MASSES**
Please register for the Mass you wish to attend at this address https://boovalparish.eventbrite.com.au/ or you can go to our website and click on the link. You can still also book by phone on 3282 1888 or email: firstname.lastname@example.org Bookings close 2.00pm Friday.
**COMMUNITY GARDEN** - The community gardeners meet every Saturday between 7am and 9am. Everyone is welcome to come along and help establish the garden.
“Follow me and I will make you fishers of men.” - Mark 1:17
Jesus’ call to “Follow me” is a call to all Christians! The call is in the here and now, in our present circumstances, not when we think we are “ready” or have everything in order. Good stewardship of our God-given gifts means that things aren’t always going to go according to our schedule and that God has a much better plan in store for each of us.
**A VOCATION VIEW:** There was a man sent by God, who came as witness to testify to the Light, so that through this person all people might believe. Is your name in the Gospel today?
---
*The Booval Catholic Parish acknowledge the Jagera, Yuggera and Ugarapul people, the Traditional Custodians who have walked upon and cared for this land for thousands of years.*
PLEASE PRAY FOR THOSE WHO HAVE RECENTLY DIED: Kathy Hutchinson, Michael Herron (Neville’s brother-in-law), Cyril Kreis.
PLEASE PRAY FOR THOSE WHOM WE REMEMBER: James & Noela Shute, Antony George, Monica Missingham, Brian Meara, Kirtley & Dolan families, Bob Cole, Dorothy Walsh, Donna Gabriel, Tracie Files, Rita VanderGelst, Tom Brown, Bernie Kinnane, Chris Garton, Ken Adkins, Frank Van Gestel, Lorraine Murphy, David Samson, Jack & Eileen McMahon, Bob, John & Mark Skippington, Thomas Boyle, Margery Boyle, Ellie & Ossie Cody, Rob Kane, Mike Cody, Helen Cody, Roy Folan, Alfred & Stella Grech, Moira Murphy, Melinda & Arthur Murphy, Bernice & Clement Goan, Arthur, Brian & Dane Bracey, Grahame Coultas.
LET US PRAY for those who are sick or recovering from surgery: Elenor Tedenborg, Declan Simmonato, Marie Noon, Irina Coady, Kathy Harding, Rebecca Smith, Fr John Scarrott, Grace Thompson, “Special Intention”, Lilian Morrison, Joana Ambrad-Isip, Cecily Spillane (Evie’s mum), Rod Perry (brother of Joan Jones), Joyce Samson, Josephine Smith, Jac Chave, Dolores Morgan, Desolie King, Graeme Peters, Trish Rowlands, Rae Clark, Jill Wright, 2 Special Intentions, Kath Cole, Eileen Adkins, Erin, Benjamin Palmer (Chris’ Garton’s Grandson), Hannah Smith, Julia Telford, Karen Purdie.
We also include in our prayers all the aged and sick to whom we take Communion.
We pray for the safety and good judgement of those serving in our defence forces overseas.
(Please contact us when you feel that it is appropriate to remove the name of your friend or loved one from our Parish Prayer List)
BOOK OF REMEMBRANCE ANNIVERSARIES 24 January - 30 January
Patricia Paton, Elsie McCormack, Mikayla McKeaten, Rose Marks, Allan Swift, Irene Winterhoff, Gladys Rosentretor, Fernando Coelho, Malcolm Edmunds, Alexander Elliott, Victor Hawkins, Lorna Larter, Marie Mathews, (Mick) Harold Richards, Ethel Shearer, Patricia Waterson, John Doolan, Eleanor Hennessy (Doolan), Harry Sciberras, Vince Maude, Maureen Mangan, Sheila Burke, Kevin Hallahan, William Lane, Patricia Gee, Mary Barrow, Ken Sutherland, Ursula Hennelly, James Sherlock, Robert Bennett, Marge Daum, John Flynn, Patrick Gee, Kathleen Gorman, Regine Hakuzwimana, Myles McEniery, John Markey.
NEXT WEEK’S LITURGY – GROUP 1
YEAR B - Fourth Sunday in Ordinary Time
1st Reading: Deut 18:15-20 2nd Reading: 1 Cor 7:32-35 Gospel: Mark 1:21-28
| 30/31 January | 5.00pm Vigil | 7.00am - Riverview | 9.00am | 5.00pm |
|---------------|--------------|-------------------|-------|--------|
| READERS | Evelyn Le Bherz | Neville Stagg | Katherine Toohill | Denise Retchford |
| | Marea Teakle | Bonnie Mulroney | Trish Watter | Virgil Anthony |
| | Gabrielle Vieth | —— | Scott Andrews | Sahana Anthony |
| CHURCH CLEANING | Toetau Family | | | |
Children’s Section
Use the words in the fish to complete the sentence below...
Jesus called Peter and Andrew, and James and John to follow him. Draw Jesus standing on the beach calling out to James and John in their boat.
ENTRANCE HYMN
Sing a New Song
Sing a new song unto the Lord;
Let your song be sung
from mountains high.
Sing a new song unto the Lord,
Singing alleluia.
Shout with gladness dance for joy.
O come before the Lord.
And play for Him on glad tambourines,
and let your trumpet sound.
Glad my soul for I have seen
the glory of the Lord.
The trumpet sounds;
the dead shall be raised,
I know my Savior lives.
By D. Schutte © 1972 – OCP Publications
GLORIA: [Sung]
© M. Haugen – GIA Publications
PSALM RESPONSE Psalm 24
Teach me your ways, O Lord.
Teach me your ways, O Lord.
© Gerard O’Dempsey OFMCap.
GOSPEL ACCLAMATION
Alleluia, alleluia, alleluia, alleluia
Alleluia, alleluia, alleluia, alleluia
© M. Haugen – GIA Publications
NICENE CREED
I believe in one God,
the Father almighty,
maker of heaven and earth,
of all things visible and invisible.
I believe in one Lord Jesus Christ,
the Only Begotten Son of God,
born of the Father before all ages.
God from God, Light from Light,
true God from true God,
begotten, not made, consubstantial with the Father;
through him all things were made.
For us men and for our salvation
he came down from heaven,
and by the Holy Spirit was incarnate of the Virgin Mary, and became man.
For our sake he was crucified under Pontius Pilate,
he suffered death and was buried,
and rose again on the third day
in accordance with the Scriptures.
He ascended into heaven
and is seated at the right hand
of the Father.
He will come again in glory
to judge the living and the dead
and his kingdom will have no end.
I believe in the Holy Spirit, the Lord,
the giver of life,
who proceeds from the Father and the Son,
who with the Father and the Son is adored
and glorified,
who has spoken through the prophets.
I believe in one, holy, catholic and apostolic Church.
I confess one Baptism for the forgiveness
of sins and I look forward to the resurrection of the dead
and the life of the world to come. Amen.
LORD’S PRAYER [Sung]
LAMB OF GOD [Sung]
© M. Haugen – GIA Publications
Mass Parts: Excerpts from the Roman Missal © 2010 ICEL
COMMUNION
Here At This Table
Come and be filled here at this table.
Food for all who hunger
and drink for all who thirst.
Drink of his love, wine of salvation.
You shall live forever
in Jesus Christ the Lord.
You who labor for justice,
You who labor for peace,
You who steady the plow
In the field … … of the Lord … … .
You with lives full of pain,
You who sorrow and weep,
You, beloved of Christ,
Come to him … … , come to him … … !
You, the aged among us,
Holy, faithful and wise,
May the wisdom you share
Form our lives … … and our world … … !
By J S Whitaker; M Whitaker & J M Whitaker © 1996, 2000 – OCP Publications.
OFFERTORY Quiet Music
HOLY, HOLY [Sung]
© M. Haugen – GIA Publications
ACCLAMATION [Sung]
We proclaim your Death, O Lord,
and profess your Resurrection
until you come again,
until you come again.
© M. Haugen – GIA Publications
AMEN [Sung]
© M. Haugen – GIA Publications
RECESSIONAL
Go Now, You are Sent Forth
Go now, you are sent forth
To live what you proclaim;
To show the world you follow Christ
In fact, not just in name.
Go now, you are sent forth
As God’s ambassador;
By serving Him in those we meet
We love him more and more.
Go now, you are sent forth
And Christ goes with you, too.
Today you help his kingdom come
In everything you do.
© L Watt – LED.
Fibroblast growth factor-2-mediated protection of cardiomyocytes from the toxic effects of doxorubicin requires the mTOR/Nrf-2/HO-1 pathway
Navid Koleini\textsuperscript{1,2}, Barbara E. Nickel\textsuperscript{1}, Jie Wang\textsuperscript{2}, Zeinab Roveimiab\textsuperscript{1}, Robert R. Fandrich\textsuperscript{1,3}, Lorrie A. Kirshenbaum\textsuperscript{1,2}, Peter A. Cattini\textsuperscript{2} and Elissavet Kardami\textsuperscript{1,2,3}
\textsuperscript{1}Institute of Cardiovascular Sciences, Albrechtsen Research Centre, Winnipeg, Manitoba, Canada
\textsuperscript{2}Department of Physiology and Pathophysiology, University of Manitoba, Winnipeg, Manitoba, Canada
\textsuperscript{3}Department of Human Anatomy and Cell Sciences, University of Manitoba, Winnipeg, Manitoba, Canada
Correspondence to: Elissavet Kardami, email: firstname.lastname@example.org
Keywords: fibroblast growth factor 2 isoforms, doxorubicin cardiotoxicity, heme oxygenase 1 cardioprotection, Nrf-2 activation, mTOR signaling
Received: June 22, 2017 Accepted: August 04, 2017 Published: August 24, 2017
Copyright: Koleini et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 (CC BY 3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
ABSTRACT
Background: Cardiotoxic side effects impose limits to the use of anti-tumour chemotherapeutic drugs such as doxorubicin (Dox). There is a need for cardioprotective strategies to prevent the multiple deleterious effects of Dox. Here, we examined the ability of administered fibroblast growth factor-2 (FGF-2), a cardioprotective protein that is synthesized as high and low molecular weight (Hi-, Lo-FGF-2) isoforms, to prevent Dox-induced: oxidative stress; cell death; lysosome dysregulation; and inactivation of potent endogenous protective pathways, such as the anti-oxidant/detoxification nuclear factor erythroid-2-related factor (Nrf-2), heme oxygenase-1 (HO-1) axis.
Methods and Results: Brief pre-incubation of neonatal rat cardiomyocyte cultures with either Hi- or Lo-FGF-2 reduced the Dox-induced: oxidative stress; apoptotic/necrotic cell death; lysosomal dysregulation; decrease in active mammalian target of Rapamycin (mTOR). FGF-2 isoforms prevented the Dox-induced downregulation of Nrf-2, and promoted robust increases in the Nrf-2-downstream targets including the cardioprotective protein HO-1, and p62/SQSTM1, a multifunctional scaffold protein involved in autophagy. Chloroquine, an autophagic flux inhibitor, caused a further increase in p62/SQSTM1, indicating intact autophagic flux in the FGF-2-treated groups. A selective inhibitor for HO-1, Tin-Protoporphyrin, prevented the FGF-2 protection against cell death. The mTOR inhibitor Rapamycin prevented FGF-2 protection, and blocked the FGF-2 effects on Nrf-2, HO-1 and p62/SQSTM1.
Conclusions: In an acute setting, Hi- or Lo-FGF-2 protect cardiomyocytes against multiple Dox-induced deleterious effects, by a mechanism dependent on preservation of mTOR activity, Nrf-2 levels, and the upregulation of HO-1. Preservation/activation of endogenous anti-oxidant/detoxification defences by FGF-2 is a desirable property in the setting of Dox-cardiotoxicity.
INTRODUCTION
Doxorubicin (Dox) is a potent chemotherapeutic drug used against many types of cancers, but is associated with numerous side effects, including an increased risk for acute and chronic cardiotoxicity leading to cardiomyopathy and heart failure [1]. Dox toxicity has been studied extensively, and multiple mechanisms are implicated including: excessive production of reactive oxygen and nitrogen species, interference with iron metabolism, mitochondrial damage, intercalation with nuclear DNA and binding to Topoisomerase II (Top-II), activation of pro-cell death pathways involving increased expression of p53 and BCL2/adenovirus E1B 19 kDa protein-interacting protein 3 (Bnip-3). In addition, dysregulation of autophagy, mitophagy and lysosomal biogenesis all contribute to Dox toxicity, causing dysfunction and loss of cardiomyocytes [1–3]. While various types of anti-oxidant therapies have been effective in preventing or attenuating Dox-cardiotoxicity in animal models, clinical trials have not provided conclusive evidence [4]. Dox-induced heart disease may be managed by using drugs such as dexrazoxane, which is believed to act by chelating iron and/or by preventing Dox-Top-II interaction, and by drugs traditionally used in the treatment for heart failure [1, 5]. There is a need for additional strategies aimed at prevention or treatment of Dox-cardiotoxicity. Endogenously expressed cardioprotective factors, such as fibroblast growth factor-2 (FGF-2), as well as endogenous cytoprotective pathways merit consideration in this context.
FGF-2 is a multifunctional protein which is expressed as high (>20 kDa, Hi-FGF-2) and low molecular weight (18 kDa, Lo-FGF-2) isoforms, products, respectively, of leucine (CUG)- or methionine (AUG)-initiated translation of the same messenger (m) RNA [6]. Hi-FGF-2 is the predominant isoform found in the human, rat and mouse heart, and like Lo-FGF-2 is detected in the intracellular as well as extracellular environment [7, 8]. Lo-FGF-2 is well documented to be cardioprotective, preventing myocardial loss and contractile dysfunction during myocardial infarction and in ischemia-reperfusion scenarios [9, 10]. Lo-FGF-2 was also shown to protect neonatal rat cardiomyocytes from Dox-induced cell death *in vitro* [11]. A non-mitogenic mutant Lo-FGF-2 has also been shown to protect isolated mouse hearts against acute Dox-induced decrease in contractility [12]. Less is known regarding the role of the Hi-FGF-2 isoform in the heart. Extracellular-acting, cell-released Hi-FGF-2 induces cardiomyocyte hypertrophy and may contribute to maladaptive chronic remodeling [7, 8]. Overexpression of Hi-, but not Lo-, FGF-2 promoted apoptosis in cardiomyocytes via an intracrine pathway [13]. There is, however, no information on the effect of exogenously administered Hi-FGF-2, compared to Lo-FGF-2, on Dox-induced cardiomyocyte damage and cell death.
Here we present evidence that Hi- or Lo- FGF-2 are equally protective against multiple aspects of acute Dox-induced toxicity in neonatal rat cardiomyocytes *in vitro*. Furthermore, the FGF-2 isoform-induced protection requires activation of endogenous cytoprotective antioxidant pathways such as the Nrf-2/HO-1 axis.
RESULTS
Effect of FGF-2 isoforms on Dox-induced cardiomyocyte toxicity *in vitro*
Dox is known to induce apoptotic and necrotic death in cardiomyocytes in *in vitro* models, and *in vivo*. To recapitulate the effects of Dox *in vitro*, cardiomyocytes were exposed to 0.5 μM Dox, as used in our previous study [11].
Pre-incubation with recombinant Lo- or Hi-FGF-2 (10 ng/ml) for 30 minutes protected cardiomyocytes from Dox toxicity by a number of measures, assessed at 24 hours post-Dox (Figure 1). Pilot dose-response studies indicated that both FGF-2 isoforms displayed the same level of protection in the 1-100 ng/ml range, so we used the 10 ng/ml concentration for all further experiments. Based on the Live-Dead assay (Figure 1A-1B), Dox caused a significant, over 3-fold, increase in the percentage of dead cells when compared to control cultures. This effect was prevented by either Lo- or Hi-FGF-2 pre-treatment. Relative levels of LDH in the culture medium were measured, as an indicator of disruption of cardiomyocyte plasma membrane integrity. Released LDH was increased by Dox treatment, while pre-treatment with either FGF-2 isoform attenuated this increase (Figure 1C). Also, either FGF-2 isoform abolished the Dox-induced upregulation in active (17 kDa) caspase-3, the tumour suppressor p53, and Bnip-3 protein levels (Figure 1D-1G), consistent with prevention of Dox-induced apoptotic and necrotic cell death. Bnip-3 immunoreactive bands migrated at 20-30 kDa, likely representing different degrees of post-translational modifications, similar to previous reports [14]; all anti-Bnip3 bands were included in our calculations. Dox caused formation of mitochondrial permeability transition pores (mPTP), as observed by the Calcein-Cobalt mPTP assay, and both FGF-2 isoforms prevented mPTP formation. Representative images are shown in Supplementary Figure 1. In addition, both FGF-2 isoforms were able to limit the Dox-induced decrease in ATP, and increase in ADP levels (Figure 1H), consistent with protective effects at the mitochondrial level. Lysates from attached cells were used for western blot-based determinations of Dox toxicities (p53, caspase 3, Bnip-3 upregulation) and the effect of both FGF-2 isoforms.
We have not included detached cells, which were occasionally observed in the Dox-treated samples only. As a consequence, measurements of Dox toxicities by western blotting may underestimate the magnitude of damage to the whole cell population; however, this does not change the central observation that the FGF-2 isoforms are protective.
Figure 1: FGF-2 isoforms prevent Doxorubicin-induced toxicity in cardiomyocytes. Panels A-H show the effects of Doxorubicin (Dox) exposure for 24 hours in the presence and absence of Lo- or Hi-FGF-2 pre-incubation, as indicated. (A) and (B) show, respectively, representative images of myocytes stained with the Live-Dead assay: Calcein-AM (green, live cells) / Ethidium homodimer (red, dead cells), and the corresponding graph with the percentage of cell death in attached cells. (C) LDH released in culture medium, assessed by absorbance at 490 nm (n=6). (D), (E), (F) show, respectively, relative protein levels of cleaved (active) caspase-3 (19 kDa), p53 (53 kDa), and Bnip-3 (~30 kDa). Data for the graphs were obtained from the corresponding western blots shown in panel (G); images of the same membranes stained for Ponceau S are also included, and served to adjust for minor variations in protein loading. (H) ATP and ADP levels relative to controls, as indicated (n=6). Data are plotted as mean ± SEM and statistically significant differences are shown by brackets between groups; P<0.05 was considered significant.
Dox at 0.5 μM was also found to be toxic for MCF-7 cells (a human breast cancer cell line), as measured by the Calcein-AM assay. However, unlike primary cardiomyocytes, the MCF-7 cells were not protected against Dox toxicity by either FGF-2 isoform under the conditions tested (Supplementary Figure 2).
Overall, these observations show that both Hi- and Lo-FGF-2 can exert acute cardiomyocyte protection from Dox-induced cell death with apoptotic and necrotic features.
**Effect of FGF-2 on cardiomyocyte antioxidant/detoxification responses (Nrf-2 and downstream targets)**
As anticipated, Dox increased levels of reactive oxygen species (ROS) in cardiomyocytes as measured by fluorescence intensity of DCF-DA (Figure 2A). This effect was attenuated by either Hi- or Lo-FGF-2 (Figure 2A). The transcription factor Nrf-2 is a master regulator of the endogenous anti-oxidant response [15]. Dox caused a reduction in the RNA and protein levels of Nrf-2; either FGF-2 isoform not only prevented this Dox-induced reduction, but also resulted in a 2-fold increase in Nrf-2 transcripts compared to controls in the presence of Dox (Figure 2C). The Dox-induced decrease in total Nrf-2 protein was prevented by either FGF-2 isoform. Relative levels of Nrf-2 protein in the presence of Hi-FGF-2 (but not Lo-FGF-2) in the Dox-treated groups were significantly higher than those of the control group; this represents the only difference between Hi- and Lo-FGF-2 activities in the present study. Please note that data shown for Nrf-2 protein represent measurements from the 100 kDa immunoreactive band, corresponding to the previously published electrophoretic migration for Nrf-2 [16]. An immunoreactive, faster migrating band was also present and may represent a truncated or modified Nrf-2. The faster band displayed the same pattern of response as the 100 kDa band, but was not included in our measurements.
Nrf-2 binds to the antioxidant response element in the promoter region of numerous target genes, including HO-1 and p62/SQSTM1. In the presence of Dox both FGF-2 isoforms significantly increased mRNA and protein levels for HO-1 relative to control cells (Figure 2B, 2C). Dox alone had no significant effect on HO-1 protein levels. In the absence of Dox the FGF-2 isoforms had no effect on relative Nrf-2 and HO-1 protein levels (Supplementary Figure 3). HO-1 is a 32 kDa protein; the antibodies to HO-1 detected faint immunoreactive bands at 32-34 kDa in control samples, and a 32 kDa band in the Dox/FGF-2 - exposed samples. It is possible that the 34 kDa band represents a modified HO-1. Both immunoreactive bands were included in our densitometric measurements.
In the presence of Dox, the FGF-2 isoforms elicited significant increases in p62/SQSTM1 mRNA and protein over controls (Figure 3A–3C). p62/SQSTM1 is an autophagy adaptor that binds ubiquitinated cargo destined to be eliminated by autophagy and mitophagy [17], and represents another target of Nrf-2 [18]. Increased accumulation of p62/SQSTM1 is often interpreted as defective/blocked autophagic flux [19]. To determine whether autophagic flux was blocked, we examined the effect of a lysosomal/autophagy flux inhibitor, Chloroquine (CQ), on p62/SQSTM1 accumulation in the FGF-2/Dox groups. As shown in Figure 3A-3B, CQ caused additional accumulation of p62/SQSTM1 in the control and FGF-2/Dox-treated groups, indicative of functional autophagic flux in these groups. CQ had no effect on p62/SQSTM1 accumulation in the presence of Dox alone, consistent with reports of impaired flux in this group [2, 20].
Impaired autophagic flux can be due to defects in lysosomal biogenesis and/or function [2]. To further document an effect of Dox on lysosomes, relative levels of the mRNA for the transcription factor-EB (TFEB, a master transcription factor for lysosomal biogenesis) and of the lysosomal protein LAMP1 were assessed. Dox significantly reduced relative levels of TFEB mRNA and LAMP1 protein, and these effects were prevented by FGF-2 isoforms (Figure 4A & 4B).
**The role of mTOR and HO-1 in the FGF-2-induced protection from Dox**
Exposure of cardiomyocytes to Dox resulted in a significant decrease in the active (p-Ser2448)-mTORC1/total mTORC1 ratio, an effect that was limited by either Lo- or Hi-FGF-2 pre-treatment (Figure 5A and Supplementary Figure 4A). A selective inhibitor of mTORC1, Rapamycin, was used to examine whether mTOR activity mediated the protective effects of Hi- and/or Lo-FGF-2. Cells were treated with Rapamycin for 30 min prior to stimulation by FGF-2 for an additional 30 min, and then exposed to Dox for 24 hours. Rapamycin alone had no effect on cell survival in the absence or presence of Dox, but abrogated both Hi- and Lo-FGF-2-induced protection, as measured by the Calcein-AM viability assay (Figure 5B). Rapamycin prevented the FGF-2-induced restoration of relative Nrf-2 protein levels (Figure 5C and 5E). In addition, Rapamycin prevented the robust upregulation of HO-1 protein in the FGF-2/Dox groups (Figure 5D and 5E). Finally, Rapamycin prevented the protective effect of the FGF-2 isoforms against Dox-induced upregulation of Caspase 3/7 activity, LDH release, and Dox-mediated upregulation of p53 as well as cleaved (active) caspase-3 (Figure 6A & 6B, and Supplementary Figure 4C). Another inhibitor of the mTOR pathway, Torin-1, was also able to prevent cardiomyocyte protection and HO-1 upregulation by FGF-2 isoforms in the presence of Dox (Supplementary Figure 5). Thus, mTOR activity is required for protection from Dox-induced apoptotic and necrotic cell death, and for HO-1 upregulation, by FGF-2.

**Figure 2:** FGF-2 isoforms attenuate the effects of Doxorubicin on reactive oxygen species (ROS), Nrf-2, and its downstream target heme oxygenase-1 (HO-1). **Panel (A)** Relative ROS as measured by the fluorescence intensity of 2',7'-dichlorofluorescin diacetate (DCF-DA), n=8, in the absence or presence of Dox and FGF-2 isoform pre-treatment, as indicated. **Panel (B)** Western blots for Nrf-2 and HO-1, as indicated. Nrf-2 migrates as a 100 kDa band, while HO-1 is at 32 kDa. Quantitative assessments of the immunoreactive bands are included in panel C. **Panel (C)** Relative Nrf-2 and HO-1 protein (n=3), as well as corresponding mRNA levels assessed by qPCR (n=4), as indicated. For the mRNA or protein determinations, cardiomyocytes were exposed to Dox for, respectively, 8 or 24 hours, in the presence and absence of Lo- or Hi-FGF-2 pre-treatment. For western blot analysis, the densitometry values of the probed proteins were corrected using the densitometry values of the whole-lane Ponceau S stain of the same membrane. For qPCR analysis, all target RNA levels were normalized to rat RNA polymerase II (RP2) levels. Data is plotted as mean ± SEM. Brackets indicate groups whose values show statistically significant differences (P<0.05).
A selective inhibitor of HO-1, Tin-Protoporphyrin (Tin-PP), was then used to determine whether the protective effects of the FGF-2 isoforms were mediated by HO-1. Tin-PP blocked FGF-2 protection from the Dox-induced increases in caspase 3/7 activity, LDH release, p53 upregulation, and active caspase 3 (Figure 6A, 6B, and Supplementary Figure 4C).
**DISCUSSION**
Novel findings presented in this work are: (1) Hi-FGF-2 increases the resistance of cardiomyocytes to (acute) Dox-induced cell death and lysosomal dysregulation in a similar manner to Lo-FGF-2; (2) FGF-2 isoforms stimulate upregulation of Nrf-2 and its downstream targets HO-1 and p62/SQSTM1 in the presence, but not absence, of Dox; (3) mTOR activity is required for FGF-2 induced protection and Nrf-2, HO-1, and p62/SQSTM1 upregulation; and (4) HO-1 mediates FGF-2 protection (Figure 7).
(i) **Hi-FGF-2 protects from Dox toxicity in an acute setting.** It is well established that Dox cardiotoxicity includes mitochondrial damage, apoptotic and necrotic cell death, as well as lysosomal and autophagic dysregulation [21]. In our *in vitro* model, Dox upregulated ROS, decreased cellular ATP, promoted LDH release, upregulated pro-cell-death markers such as p53, Bnip-3, and active caspase 3, and caused formation of the mPTP. Dox also downregulated active mTOR, which is a master regulator of growth and an inhibitor of autophagy initiation; its downregulation is expected to trigger autophagy initiation [2]. At the same time, Dox caused lysosome-associated changes indicative of dysregulation, decreasing expression of TFEB, the master transcription factor for lysosomal biogenesis, and of LAMP1 protein, consistent with decreases in lysosomal numbers. Lysosomal dysregulation contributes to blocked autophagic clearance/flux in the presence of Dox, resulting in proteotoxicity according to previous reports [20, 22], and confirmed in our system. Overall, our model recapitulated multiple components of Dox-induced cardiotoxicity, supporting its validity for examining potential protective manipulations and associated mechanisms. It is of interest that FGF-2 isoforms did not protect a breast cancer cell line (MCF-7) from Dox toxicity, suggesting the possibility, in need of further investigation, that an FGF-2-based therapy may not affect the toxicity of anthracyclines against at least some types of cancer cells. It is not clear why MCF-7 cells were not protected by FGF-2, although others have reported broadly similar findings [23]. One may speculate that although MCF-7 cells do express FGF-2 receptor 1 (FGFR1) [24], the receptor may already be fully activated by MCF-7-produced endogenous FGFs, or that it may display an aberrant pattern of activation, as has been reported [25].

**Figure 3:** In the presence of Doxorubicin, FGF-2 isoforms promote p62/SQSTM1 upregulation, which is further increased by Chloroquine (CQ). **Panel (A)** Western blot showing p62/SQSTM1 immunoreactivity, in the absence and presence of the lysosomal/autophagy flux inhibitor CQ, in cardiomyocytes exposed or not to Dox and FGF-2 pre-treatment, as indicated. Corresponding densitometric data are shown in panel B. **Panel (B)** Relative protein levels of p62/SQSTM1 in response to CQ. Densitometry of the Ponceau S stain (scan of the whole lane) was used to adjust for minor loading variations. **Panel (C)** Relative levels of p62/SQSTM1 mRNA in cardiomyocytes exposed, or not, to Dox and FGF-2 isoforms, as indicated; n=4. Data is plotted as mean ± SEM. For the mRNA or protein determinations, cardiomyocytes were exposed to Dox for, respectively, 8 or 24 hours, in the presence and absence of Lo- or Hi-FGF-2 pre-treatment. Rat RNA polymerase II (RP2) levels were used to normalize the target mRNA. Statistically significant differences (P<0.05) between groups are indicated by brackets.
Both FGF-2 isoforms were found to prevent or attenuate all of the deleterious effects of Dox on cardiomyocytes. In the case of Lo-FGF-2, our results are broadly consistent with previous studies showing the protective effect of Lo-FGF-2 in multiple scenarios of cardiomyocyte injury, as reviewed in [9], including Dox toxicity [11]. Although less information exists regarding the effects of extracellular-acting Hi-FGF-2, our previous *in vivo* studies documented that direct administration of Hi-FGF-2 to the heart exerts short-term (one day) protection from cardiac ischemic injury and cell death. Interestingly, the protective effects of Hi-FGF-2, unlike those of Lo-FGF-2, were not sustained at longer time points post-ischemia [26]. Another study showed that administered Hi-FGF-2 exerts acute post-conditioning-like cardioprotection against ischemia-reperfusion cardiac dysfunction and cell death [27]. Our present study reinforces the notion of acute cardiomyocyte protection by administered Hi-FGF-2, this time in the context of Dox-induced cardiotoxicity *in vitro*. It remains to be determined whether Hi- and/or Lo-FGF-2 can exert sustained protection from Dox cardiotoxicity.

**Figure 4:** FGF-2 isoforms prevent the Dox-induced downregulation of transcription factor EB (TFEB) and lysosomal associated membrane protein-1 (LAMP-1). **Panel (A)** Relative mRNA levels of TFEB in cardiomyocytes exposed to Dox for 8 hours in the presence and absence of pre-incubation with FGF-2 isoforms. Rat RNA polymerase II (RP2) levels were used to normalize the target mRNA. **Panel (B)** Western blot for LAMP-1, and corresponding graph, after 24-hour exposure to Dox in the presence and absence of Lo- or Hi-FGF-2 pre-incubation. The Ponceau S stain of the same membrane was used to correct for variations in loading. Brackets indicate groups displaying statistically significant differences.
**Figure 5:** The mTOR pathway mediates the FGF-2-induced effects on Nrf-2 and HO-1. **Panel (A)** Relative levels of active (phospho-Ser2448) mTOR/total mTOR (n=4); the corresponding western blot is shown in Supplementary Figure 4A. **Panel (B)** Cardiomyocyte viability as estimated by the Calcein-AM (fluorescence intensity) assay, in the absence (empty columns) or presence (shaded columns) of Rapamycin, Dox, and FGF-2 isoforms (n=8), as indicated. **Panels (C)** and **(D)** Relative protein levels for Nrf-2 and HO-1, respectively, in the absence (empty columns) or presence (shaded columns) of Rapamycin (n=3). Data is plotted as mean ± SEM; statistically significant differences (P<0.05) are shown by brackets. The corresponding western blot is shown in **Panel (E)**.
(ii) The Nrf-2/HO-1 pathway is a major endogenous cytoprotective mechanism activated under conditions of oxidative stress. Under non-stressed conditions, Nrf-2 is sequestered in the cytosol via its interaction with Keap1, which also facilitates proteasomal degradation of Nrf-2. In response to oxidative stress, Nrf-2 dissociates from Keap1, translocates to the nucleus, and stimulates expression of HO-1 as well as multiple genes belonging to several anti-oxidant cell detoxification pathways [15, 28–30]. While exposure of neonatal cardiomyocytes to moderate, non-toxic oxidative stress was reported to upregulate Nrf-2 and Nrf-2 target genes, including HO-1 [31], the excessive oxidative stress induced by Dox likely overwhelms the antioxidant defenses of cardiomyocytes. Indeed, we found that Dox downregulated Nrf-2 and failed to upregulate HO-1. Previous studies have shown that dysregulation of the Nrf-2/HO-1 axis contributes to Dox-induced cardiotoxicity: a lack of Nrf-2 exacerbated Dox-induced cardiotoxicity, while induction of increased Nrf-2 expression was protective *in vivo* [32–34]. FGF-2 isoforms, as shown here, prevented the Dox-induced effects on Nrf-2 and promoted robust upregulation of Nrf-2 targets such as HO-1 (and p62/SQSTM1), often used to demonstrate Nrf-2 activity. Our data indicate that FGF-2 protection from Dox toxicity is mediated by restoring or boosting the Nrf-2/HO-1 axis.

**Figure 6:** The mTOR and heme oxygenase-1 (HO-1) activities mediate the pro-cell-survival effects of FGF-2 isoforms. **Panels (A)** and **(B)** Relative caspase 3/7 activity and LDH release, respectively, in cardiomyocytes exposed to Dox and FGF-2 isoform pre-treatment, in the absence or presence of Rapamycin (mTOR inhibitor) and Tin-Protoporphyrin (Tin-PP), as indicated; n=4. **Panels (C)** and **(D)** p53, or cleaved 19 kDa caspase-3 levels, respectively, in cardiomyocytes exposed to Dox and FGF-2 isoform pre-treatment, in the absence or presence of Rapamycin and Tin-PP, as indicated; n=3. Corresponding western blots are included in Supplementary Figure 4C. The Ponceau S stain of the same membrane was used to adjust for minor variations in loading. Data is plotted as mean ± SEM and brackets denote groups presenting statistically significant (P<0.05) differences.
The competitive HO-1 inhibitor Tin-PP prevented the FGF-2-induced beneficial effects against cell death and damage, offering further support to this notion. The cardioprotective effect of HO-1 has been documented by multiple groups, as reviewed recently [35]. Lack of HO-1 sensitizes cardiac cells to various types of stress stimuli, including Dox and ischemia/reperfusion injury [36–38], while enhancing cardiac HO-1 expression is sufficient to blunt Dox and reperfusion damage [36, 39–43]. In view of its potent protective effects, there is strong interest in identifying drugs capable of boosting endogenous HO-1 and its downstream metabolites, such as carbon monoxide (CO), to enhance cardiac resistance to toxic conditions [35, 39, 44, 45]. In this context, FGF-2 administration to the heart may be considered as a means to upregulate HO-1 under conditions of oxidative stress.
In contrast to other cytoprotective agents such as sulforaphane [32, 33], and even FGF-1 in astrocytes [46], FGF-2 isoforms did not upregulate Nrf-2/HO-1 protein levels under normal, non-stressed conditions in cardiomyocytes, but only did so in the presence of Dox. Thus, the ability of FGF-2 pre-treatment to prevent the Dox-induced Nrf-2 loss (mRNA and protein), and even to upregulate HO-1 protein accumulation substantially above control levels, likely requires additional Dox-induced signal(s). One can speculate that, since FGF-2 decreased but did not eliminate ROS production by Dox, residual ROS was able to activate the Nrf-2/HO-1 anti-oxidant line of defense.
(iii) p62/SQSTM1 is a multifunctional scaffold protein and another well-known target of Nrf-2. We found that it was robustly upregulated by FGF-2 in the presence, but not absence, of Dox [47]. p62/SQSTM1 accumulates at sites of autophagosome formation and facilitates tethering of ubiquitinated cargo at the autophagosome [17]. As p62/SQSTM1 is expected to be degraded upon completion of autophagy (fusion of the autophagosome with the lysosome, and degradation of cargo), its accumulation above control levels, as observed in the FGF-2/Dox groups, could be interpreted as the result of blocked autophagic flux. However, this does not appear to be the case. Firstly, we observed significant increases in p62 mRNA, consistent with Nrf-2-mediated transcription and indicative of increased de novo synthesis. Secondly, the autophagy flux inhibitor CQ elicited further significant increases in p62 protein accumulation, showing that autophagic flux was not blocked in the FGF-2/Dox groups. By the same criteria, autophagic flux was found to be blocked in the Dox groups, consistent with previous reports [20]. Therefore, our work indicates that FGF-2 pre-treatment corrected autophagic dysregulation, and possibly proteotoxic cell death, caused by Dox. It is possible that increased p62 levels, in an environment of functional autophagic flux, might better facilitate elimination of damaged cargo through autophagic clearance. In general agreement with our findings, it has been reported that Lo-FGF-2 protects cardiac cells against ischemia/reperfusion injury by p62/SQSTM1-mediated enhancement of ubiquitinated protein clearance [48].

**Figure 7:** The proposed mechanism of cardioprotection against Doxorubicin by FGF-2 isoforms. FGF-2-mediated protection against Dox occurs via restoration of Nrf-2 and robust upregulation of its target HO-1 in cardiomyocytes. mTOR reactivation is essential for FGF-2-mediated protection and restoration/upregulation of Nrf-2/HO-1.
(iv) The mTOR pathway is activated downstream of growth factor signaling [2]. mTOR is the master regulator of protein, nucleotide, and lipid synthesis and turnover, and controls cell growth, differentiation, autophagy, and cellular metabolism [49]. Transgenic mice expressing a constitutively active form of mTOR are resistant to Dox cardiotoxicity, highlighting the crucial role of mTOR signaling in cardioprotection [50]. A serine/threonine protein kinase, mTOR is the active subunit of two different complexes: complex 1 (mTORC1) and complex 2 (mTORC2).
The mTORC1 complex, which is inhibited by Rapamycin, promotes anabolic processes, and suppresses autophagy, while the mTORC2 complex is activated primarily downstream of PI3 kinase and is associated with cell proliferation and survival [49].
Dox-induced inhibition of mTOR in cardiomyocytes contributes to Dox toxicity [50–52]. We showed that FGF-2 attenuated the Dox-induced decrease in activated mTOR, and that, in turn, Rapamycin (an mTORC1 inhibitor) and Torin 1 (an mTORC1 and mTORC2 inhibitor) prevented the FGF-2-triggered protective effects. Rapamycin also abrogated the FGF-2-induced effects on Nrf-2 and its downstream targets HO-1 and p62/SQSTM1. Taken together, our data indicate that the mTORC1 pathway is required for the FGF-2-induced effects on Nrf-2, HO-1, and p62/SQSTM1. A direct link may exist between mTOR and the FGF/FGFR1 axis: in vascular smooth muscle cells, mTOR was shown to interact directly with FGFR1 via Fibroblast Growth Factor Receptor Substrate 2 [53], raising the possibility that mTOR becomes phosphorylated/activated by direct interaction with FGFR1 and associated signals. Further studies are required to address this issue.
Prolonged inhibition of mTOR by Rapamycin has been reported to protect against Dox toxicity by upregulating autophagy [54]. Rapamycin, as used in our system, was not protective by itself, likely because of the brevity of the pre-treatment.
It is of interest that p62/SQSTM1 can also promote Nrf-2 activation by a positive feedback mechanism. It was demonstrated that mTOR phosphorylates p62/SQSTM1, thus strengthening the interaction of the latter with Keap1. This results in dissociation and stabilization of Nrf-2, which can then translocate to the nucleus and activate expression of target genes such as HO-1 [55]. It will be important to determine whether the increased expression/accumulation of p62/SQSTM1 reported here contributed to Nrf-2 stabilization and activity.
MATERIALS AND METHODS
This study was done according to the NIH Guide for the Care and Use of Laboratory Animals (NIH Publication, 8th Edition. Revised 2011). Approval was given by the Protocol Management and Review Committee of the University of Manitoba.
Cultures
Hearts obtained from one-day-old pups were used to isolate ventricular cardiomyocytes, which were plated at a density of $5 \times 10^4$ cells/cm$^2$ on collagen-coated dishes (Corning, #354236) in the presence of 20% fetal bovine serum (FBS) in Ham's F-10 culture medium, and allowed to attach overnight as described [57, 58]. The next day, cells were incubated in low-serum medium (0.5% FBS, 1% insulin, 1% transferrin/selenium, 1% ascorbic acid, and 1% bovine serum albumin in Dulbecco's modified Eagle's medium, DMEM) for 24 hours. Subsequently, cardiomyocyte cultures were treated, or not, with FGF-2 isoforms (10 ng/ml) for 30 minutes, followed by administration of doxorubicin (Dox; 0.5 $\mu$M). Cells were exposed to Dox for 8 or 24 hours for extraction of RNA or protein, respectively. Inhibitors were added to the cells 30 minutes prior to FGF-2 exposure. Myocyte purity was assessed by immunofluorescence, staining for alpha-actinin, which highlights the striated nature of these cells. Cultures consisted of 95% alpha-actinin-positive cells (cardiomyocytes), and this relative composition did not change with treatment for the duration of our experiments.
Reagents
Recombinant rat Hi- or Lo-FGF-2 was produced in-house using plasmids described in [59]. Briefly, the Lo- or Hi-FGF-2 sequence was inserted into the EcoR1 site of pET19b, resulting in a histidine tag at the N-terminus of the fusion protein. The pET vector was transformed into the expression host, Escherichia coli BL21(DE3)pLysS, by heat shock. Cultures were grown in the presence of 50 µg/ml carbenicillin and 34 µg/ml chloramphenicol. The Overnight Express™ Autoinduction System (Novagen) was used according to the manufacturer's instructions to induce protein expression without the need to monitor cell growth. Immobilized metal affinity chromatography (IMAC) using Nickel-sepharose (Ni-sepharose High Performance, GE Healthcare, #17-5268-01) was used according to the manufacturer's instructions to purify the histidine-tagged proteins. To reduce non-specific binding to the beads, all buffers contained 5 mM 2-mercaptoethanol and 10% glycerol. NP-40 (0.1%)
was added to the binding buffer and the first wash buffer. Imidazole was removed from purified eluates by dialysis against PBS or 0.1 M NaHCO3/0.5 M NaCl. The concentration of the recombinant protein was calculated from the absorbance at 280 nm and the 0.1% absorbance coefficient (rat Lo-FGF-2 = 0.86, Hi-FGF-2 = 0.735). Dox was purchased from Pfizer. Chloroquine (Sigma-Aldrich, c6628) was used at a final concentration of 5 μM. Rapamycin (Cayman, CAS 53123-88-9), Tin-PP (Santa Cruz, CAS 14325-05-4), and Torin1 (APExBIO, A8312) were used at concentrations of 100 nM, 10 μM, and 100 nM, respectively. The following antibodies were purchased from Cell Signaling: Cleaved Caspase-3 (1:1000, #9661), p53 (1:1000, #2524), Bnip3 (1:1000, #3769), p62/SQSTM1 (1:1000, #5114), p-Ser2448-mTOR (1:1000, #2971), mTOR (1:1000, #2972). Antibodies to Nrf-2 and HO-1 were from Proteintech (1:2000, 16396-1-AP) and Abcam (1:7500, ab68477), respectively. Donkey anti-rabbit (1:5,000, Jackson Immunoresearch, #711-035-152) and anti-mouse (1:5,000, Jackson Immunoresearch, #715-035-150) antibodies conjugated to horseradish peroxidase were used as secondary antibodies. Antigen-antibody complexes were detected with Pierce™ ECL Plus Western Blotting Substrate (Thermofisher, #80196).
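Concentration from A280 with a 0.1% absorbance coefficient is a one-line calculation, since the coefficient is the absorbance of a 1 mg/ml solution. A minimal sketch (not the authors' script) with hypothetical A280 readings:

```python
# Protein concentration from A280 using the 0.1% (1 mg/ml) absorbance
# coefficients quoted above; a sketch with hypothetical readings.

def concentration_mg_per_ml(a280, coef_abs_0p1pct, dilution_factor=1.0):
    """c (mg/ml) = A280 / (0.1% absorbance coefficient), times any dilution."""
    return a280 / coef_abs_0p1pct * dilution_factor

# Coefficients from the text: rat Lo-FGF-2 = 0.86, Hi-FGF-2 = 0.735.
lo = concentration_mg_per_ml(0.43, 0.86)    # 0.5 mg/ml
hi = concentration_mg_per_ml(0.147, 0.735)  # 0.2 mg/ml
```

If the eluate had been diluted before reading, the same call with `dilution_factor` set accordingly recovers the stock concentration.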
**Calcein-AM/Ethidium homodimer viability assay**
To measure cardiomyocyte viability, cells were rinsed twice with phosphate buffered saline (PBS) at 37 °C, then incubated with Calcein-AM (2 μM, C3100, Thermofisher) and ethidium homodimer (2.5 μM, E1169, Thermofisher) in PBS for 30 minutes. Images were obtained with an LSM 5 PASCAL fluorescence microscope. The fluorescence intensities of Calcein-AM (485/535 nm) and ethidium homodimer (530/620 nm) were also measured using a microplate fluorometer (SPECTRAMAX GEMINI XS).
**Protein (western) immunoblotting**
Cells were snap-frozen in liquid nitrogen and stored at -80 °C. Cells were scraped, lysed, boiled, and sonicated (Vibra cell) in sodium dodecyl sulphate (SDS)/polyacrylamide gel electrophoresis (PAGE) sample buffer (1% (w/v) SDS) supplemented (1:100) with protease inhibitor cocktail (Sigma-Aldrich, #8304) and phosphatase inhibitor cocktail sets II and IV (Calbiochem, #524625 and #524628). Cell homogenates were then centrifuged briefly at 21,000 g to remove cellular debris. A bicinchoninic acid assay was used to measure protein concentration in the supernatants. Following SDS-PAGE, proteins were transferred to polyvinylidene fluoride (PVDF) membranes. The PVDF membranes were stained for 5 min with 0.01% (w/v) Ponceau S (Sigma-Aldrich, P3504) in 0.15% trichloroacetic acid to assess overall protein transfer. Non-specific binding was blocked by incubation in 10% milk/TBS-T or 5% BSA/TBS-T for 1 hour at room temperature. The Ponceau S stain (an estimate of total protein) was used to correct for loading variations: densitometry values (arbitrary density units) for a particular band were divided by the corresponding densitometry values of the Ponceau S staining of the whole lane. The Y axis in all graphs derived from western blot data therefore shows protein levels relative to the groups within that graph.
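The loading correction described above (band densitometry divided by whole-lane Ponceau S densitometry, then expressed relative to the control group) amounts to a simple per-lane calculation. The sketch below illustrates it with made-up density values, not measured data:

```python
# Ponceau S loading correction: each band's densitometry is divided by
# the whole-lane Ponceau density, then expressed relative to the mean
# of the control lanes. All density values below are made up.

def ponceau_normalize(band, ponceau_lane):
    """Per-lane correction: band density / whole-lane Ponceau density."""
    return [b / p for b, p in zip(band, ponceau_lane)]

def relative_to_control(values, control_indices):
    """Rescale so the control-group mean equals 1.0."""
    ctrl = sum(values[i] for i in control_indices) / len(control_indices)
    return [v / ctrl for v in values]

band    = [1200.0, 1100.0, 600.0, 1800.0]   # arbitrary density units
ponceau = [950.0, 1000.0, 980.0, 990.0]     # whole-lane stain per lane
rel = relative_to_control(ponceau_normalize(band, ponceau), [0, 1])
```

Lanes 0 and 1 play the role of the control group here; after rescaling, their mean is 1.0 and the remaining lanes read directly as fold changes.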
**ATP/ADP assay**
Luminescent ATP Detection Assay Kit (Abcam, ab113849) was used to measure the levels of ATP, ADP, and ATP/ADP ratio according to the manufacturer’s protocol.
**Real-time reverse transcriptase polymerase chain reaction (qPCR)**
Cardiomyocyte RNA extraction followed by qPCR was done as previously described [60]. All target RNA levels were normalized to rat RNA polymerase II (RP2) levels and shown as relative ratios. Specific primers used for amplification are listed below:
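The RP2 normalization can be sketched with the common 2^-ΔΔCt calculation; note this particular formulation is an assumption for illustration, as the text specifies only normalization to RP2 and relative ratios, and the Ct values below are hypothetical:

```python
# Relative target mRNA via the standard 2^-ddCt calculation, normalized
# to RP2 as stated above. The ddCt form and all Ct values are assumed
# for illustration only.

def relative_expression(ct_target, ct_rp2, ct_target_ctrl, ct_rp2_ctrl):
    """Fold change of target vs control, each normalized to RP2."""
    d_ct_sample = ct_target - ct_rp2              # sample: target minus RP2
    d_ct_control = ct_target_ctrl - ct_rp2_ctrl   # control: same difference
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Target amplifying one cycle earlier than in the control, with RP2
# unchanged, corresponds to a ~2-fold increase in relative mRNA.
fold = relative_expression(22.0, 18.0, 23.0, 18.0)  # -> 2.0
```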
**Commercial kit-based assays**
Kits were used according to the manufacturers' instructions. The Image-iT™ LIVE Mitochondrial Transition Pore Assay Kit (I35103) was used to study mitochondrial permeability transition pore (mPTP) opening. DCF-DA (2′,7′-dichlorodihydrofluorescein diacetate; Thermofisher, D399) was used to measure levels of ROS in cardiomyocytes. The Pierce™ LDH Cytotoxicity Assay Kit (Thermofisher, 88953) was used to measure relative levels of LDH released by the cells, as an indicator of plasma membrane damage. The Caspase-Glo® 3/7 Assay kit (Promega, G8090) was used to assess activation of caspases 3 and 7, as indicators of apoptosis.
**Data analysis and statistics**
Each experiment used one preparation of primary cardiomyocytes, isolated from the hearts of 36 rat pups, and distributed into several plates (either 96-well plates, or 6-well-plates of 35 mm diameter/well), that were subsequently grouped based on treatment. Group size (N=number of individual wells/plates per group) varied between 3-8, unless otherwise stated. Each experiment was repeated in its entirety at least 3 times, with independently generated cardiomyocyte cultures, again with N=3-5. Data were reproducible between experiments using different myocyte isolations. The results shown in figures represent one complete representative experimental
series. GraphPad Prism 6 was used to analyze the data in each experimental series. One-way or two-way ANOVA (with Fisher's LSD test as post-hoc) was used as appropriate. A P value <0.05 was considered significant. Data is shown as mean ± standard error of the mean (SEM).
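The one-way ANOVA underlying these group comparisons reduces to a ratio of between-group to within-group mean squares. A minimal pure-Python sketch with illustrative (not measured) group values:

```python
# One-way ANOVA F statistic computed by hand, mirroring the GraphPad
# analysis described above. The group values below are illustrative only.

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total observations
    grand = sum(x for g in groups for x in g) / n  # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [1.00, 0.95, 1.05, 1.02]   # e.g. relative viability per well
dox     = [0.55, 0.60, 0.50, 0.58]
dox_fgf = [0.90, 0.85, 0.92, 0.88]
f_stat = one_way_anova_f([control, dox, dox_fgf])  # large F: groups differ
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom; Fisher's LSD post-hoc comparisons additionally reuse the within-group mean square as the pooled error term.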
**CONCLUSION**
Dox-induced cardiotoxicity remains a major concern for patients receiving chemotherapy. There is strong interest in identifying compounds and developing drugs aimed at stimulating Nrf-2 and HO-1 upregulation, as a means to elicit tissue protection from toxic stimuli including Dox [35, 56]. In this context, FGF-2 isoforms, capable of activating/boosting endogenous cardioprotective (anti-oxidant/detoxification) pathways through the mTOR-Nrf-2-HO-1 signal transduction pathway, as shown here, could be considered as naturally occurring proteins to be harnessed in strategies to elicit cardioprotection from Dox.
**Abbreviations**
- Dox: Doxorubicin
- FGF-2: Fibroblast growth factor-2
- Lo-FGF-2: Low molecular weight fibroblast growth factor-2
- Hi-FGF-2: High molecular weight fibroblast growth factor-2
- mTOR: mammalian target of Rapamycin
- Nrf-2: nuclear factor erythroid 2-related factor 2
- HO-1: heme oxygenase-1
- ROS: reactive oxygen species
- Top-II: Topoisomerase-II
- Bnip-3: BCL2/adenovirus E1B 19 kDa protein-interacting protein 3
- mPTP: mitochondrial permeability transition pore
- TFEB: Transcription factor EB
- LAMP1: Lysosomal associated membrane protein-1
- Tin-PP: Tin Protoporphyrin
- FGFR1: Fibroblast Growth Factor Receptor 1
- FRS-2: Fibroblast Growth Factor Receptor Substrate 2
- RP2: RNA polymerase II
**Author contributions**
NK contributed to study conception and design, acquisition of data, analysis and interpretation of data, and drafting/revising the manuscript. BEN and RRF contributed to acquisition of data, analysis and interpretation of data, and critical revision of the manuscript. JW and ZR contributed to acquisition of data. LAK and PAC contributed to portions of study conception and design, analysis and interpretation of data, and critical revision of the manuscript. EK contributed to study conception and design, analysis and interpretation of data, and drafting/revising the manuscript.
**CONFLICTS OF INTEREST**
None declared.
**FUNDING**
This work was funded by the Canadian Institutes of Health Research (FRN-74733). NK is the recipient of the Alfred E. Deacon doctoral scholarship, Institute of Cardiovascular Sciences, St. Boniface Hospital Albrechtsen Research Centre. JW is the recipient of a Research Manitoba Ph.D. Graduate Studentship.
**REFERENCES**
1. Mitry MA, Edwards JG. Doxorubicin induced heart failure: Phenotype and molecular mechanisms. Int J Cardiol Heart Vasc. 2016; 10: 17-24. https://doi.org/10.1016/j.ijcha.2015.11.004.
2. Koleini N, Kardami E. Autophagy and mitophagy in the context of doxorubicin-induced cardiotoxicity. Oncotarget. 2017; 8: 46663–46680. https://doi.org/10.18632/oncotarget.16944.
3. Dhingra R, Margulets V, Chowdhury SR, Thliveris J, Jassal D, Fernyhough P, Dorn GW 2nd, Kirshenbaum LA. Bnip3 mediates doxorubicin-induced cardiac myocyte necrosis and mortality through changes in mitochondrial signaling. Proc Natl Acad Sci U S A. 2014; 111: E5537-44. https://doi.org/10.1073/pnas.1414665111.
4. Simunek T, Sterba M, Popelova O, Adamcova M, Hrdina R, Gersl V. Anthracycline-induced cardiotoxicity: overview of studies examining the roles of oxidative stress and free cellular iron. Pharmacol Rep. 2009; 61: 154-71. https://doi.org/10.1016/S1734-1140(09)70018-0.
5. Hamo CE, Bloom MW, Cardinale D, Ky B, Nohria A, Baer L, Skopicki H, Lenihan DJ, Gheorghiade M, Lyon AR, Butler J. Cancer Therapy-Related Cardiac Dysfunction and Heart Failure: Part 2: Prevention, Treatment, Guidelines, and Future Directions. Circ Heart Fail. 2016; 9: e002843. https://doi.org/10.1161/CIRCHEARTFAILURE.115.002843.
6. Liao S, Bodmer J, Pietras D, Azhar M, Doetschman T, Schultz Jel J. Biological functions of the low and high molecular weight protein isoforms of fibroblast growth factor-2 in cardiovascular development and disease. Dev Dyn. 2009; 238: 249-64. https://doi.org/10.1002/dvdy.21677.
7. Santiago JJ, McNaughton LJ, Koleini N, Ma X, Bestvater B, Nickel BE, Fandrich RR, Wigle JT, Freed DH, Arora RC, Kardami E. High molecular weight fibroblast growth factor-2 in the human heart is a potential target for prevention of cardiac remodeling. PLoS One. 2014; 9: e97281. https://doi.org/10.1371/journal.pone.0097281.
8. Santiago JJ, Ma X, McNaughton LJ, Nickel BE, Bestvater BP, Yu L, Fandrich RR, Netticadan T, Kardami E. Preferential accumulation and export of high molecular weight FGF-2 by rat cardiac non-myocytes. Cardiovasc Res. 2011; 89: 139-47. https://doi.org/10.1093/cvr/cvr261.
9. Kardami E, Detillieux K, Ma X, Jiang Z, Santiago JJ, Jimenez SK, Cattini PA. Fibroblast growth factor-2 and cardioprotection. Heart Fail Rev. 2007; 12: 267-77. https://doi.org/10.1007/s10741-007-9027-0.
10. Liao S, Porter D, Scott A, Newman G, Doetschman T, Schultz Jel J. The cardioprotective effect of the low molecular weight isoform of fibroblast growth factor-2: the role of JNK signaling. J Mol Cell Cardiol. 2007; 42: 106-20. https://doi.org/10.1016/j.yjmcc.2006.10.005.
11. Wang J, Nachtigal MW, Kardami E, Cattini PA. FGF-2 protects cardiomyocytes from doxorubicin damage via protein kinase C-dependent effects on efflux transporters. Cardiovasc Res. 2013; 98: 56-63. https://doi.org/10.1093/cvr/cvt011.
12. Sontag DP, Wang J, Kardami E, Cattini PA. FGF-2 and FGF-16 protect isolated perfused mouse hearts from acute doxorubicin-induced contractile dysfunction. Cardiovasc Toxicol. 2013; 13: 244-53. https://doi.org/10.1007/s12012-013-9203-5.
13. Ma X, Dang X, Claus P, Hirst C, Fandrich RR, Jin Y, Grothe C, Kirshenbaum LA, Cattini PA, Kardami E. Chromatin compaction and cell death by high molecular weight FGF-2 depend on its nuclear localization, intracellular ERK activation, and engagement of mitochondria. J Cell Physiol. 2007; 213: 690-8. https://doi.org/10.1002/jcp.21139.
14. Shi RY, Zhu SH, Li V, Gibson SB, Xu XS, Kong JM. BNIP3 interacting with LC3 triggers excessive mitophagy in delayed neuronal death in stroke. CNS Neurosci Ther. 2014; 20: 1045-55. https://doi.org/10.1111/cns.12325.
15. Zhou S, Sun W, Zhang Z, Zheng Y. The role of Nrf2-mediated pathway in cardiac remodeling and heart failure. Oxid Med Cell Longev. 2014; 2014: 260429. https://doi.org/10.1155/2014/260429.
16. Lau A, Tian W, Whitman SA, Zhang DD. The predicted molecular weight of Nrf2: it is what it is not. Antioxid Redox Signal. 2013; 18: 91-3. https://doi.org/10.1089/ars.2012.4754.
17. Katsuragi Y, Ichimura Y, Komatsu M. p62/SQSTM1 functions as a signaling hub and an autophagy adaptor. Febs j. 2015; 282: 4672-8. https://doi.org/10.1111/febs.13540.
18. Jain A, Lamark T, Sjottem E, Larsen KB, Awuh JA, Overvatn A, McMahon M, Hayes JD, Johansen T. p62/SQSTM1 is a target gene for transcription factor NRF2 and creates a positive feedback loop by inducing antioxidant response element-driven gene transcription. J Biol Chem. 2010; 285: 22576-91. https://doi.org/10.1074/jbc.M110.118976.
19. Klionsky DJ, Abdelmohsen K, Abe A, Abedin MJ, Abelniohiv H, Acevedo Arozena A, Adachi H, Adams CM, Adams PD, Adeli K, Adhiketty PJ, Adler SG, Agam G, et al. Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition). Autophagy. 2016; 12: 1-222. https://doi.org/10.1080/15548627.2015.1100356.
20. Li DL, Wang ZV, Ding G, Tan W, Luo X, Criollo A, Xie M, Jiang N, May H, Kyrychenko V, Schneider JW, Gillette TG, Hill JA. Doxorubicin Blocks Cardiomyocyte Autophagic Flux by Inhibiting Lysosome Acidification. Circulation. 2016; 133: 1668-87. https://doi.org/10.1161/CIRCULATIONAHA.115.017443.
21. Bartlett JJ, Trivedi PC, Puliniikunnil T. Autophagic dysregulation in doxorubicin cardiomyopathy. J Mol Cell Cardiol. 2017; 104: 1-8. https://doi.org/10.1016/j.yjmcc.2017.01.007.
22. Bartlett JJ, Trivedi PC, Yeung P, Kienesberger PC, Puliniikunnil T. Doxorubicin impairs cardiomyocyte viability by suppressing transcription factor EB expression and disrupting autophagy. Biochem J. 2016; 473: 3769-89. https://doi.org/10.1042/BCJ20160385.
23. Coleman AB, Metz MZ, Donohue CA, Schwarz RE, Kane SE. Chemosensitization by fibroblast growth factor-2 is not dependent upon proliferation, S-phase accumulation, or p53 status. Biochem Pharmacol. 2002; 64: 1111-23.
24. Lehtola L, Partanen J, Sistonen L, Korhonen J, Warri A, Harkonen P, Clarke R, Alitalo K. Analysis of tyrosine kinase mRNAs including four FGF receptor mRNAs expressed in MCF-7 breast-cancer cells. Int J Cancer. 1992; 50: 598-603.
25. Luqmani YA, Graham M, Coombes RC. Expression of basic fibroblast growth factor, FGFR1 and FGFR2 in normal and
malignant human breast, and comparison with other normal tissues. Br J Cancer. 1992; 66: 273-80.
26. Jiang ZS, Jeyaraman M, Wen GB, Fandrich RR, Dixon IM, Cattini PA, Kardami E. High- but not low-molecular weight FGF-2 causes cardiac hypertrophy in vivo; possible involvement of cardiotophin-1. J Mol Cell Cardiol. 2007; 42: 222-33. https://doi.org/10.1016/j.yjmcc.2006.09.002.
27. Jiang ZS, Wen GB, Tang ZH, Srisakuldee W, Fandrich RR, Kardami E. High molecular weight FGF-2 promotes postconditioning-like cardioprotection linked to activation of protein kinase C isoforms, as well as Akt and p70 S6 kinases. [corrected]. Can J Physiol Pharmacol. 2009; 87: 798-804. https://doi.org/10.1139/Y09-049.
28. Fourquet S, Guerois R, Biard D, Toleano MB. Activation of NRF2 by nitrosative agents and H2O2 involves KEAP1 disulfide formation. J Biol Chem. 2010; 285: 8463-71. https://doi.org/10.1074/jbc.M109.051714.
29. Zhang DD, Hannink M. Distinct cysteine residues in Keap1 are required for Keap1-dependent ubiquitination of Nrf2 and for stabilization of Nrf2 by chemopreventive agents and oxidative stress. Mol Cell Biol. 2003; 23: 8137-51. https://doi.org/10.1128/MCB.23.22.8137-8151.2003
30. Gorrini C, Harris IS, Mak TW. Modulation of oxidative stress as an anticancer strategy. Nat Rev Drug Discov. 2013; 12: 931-47. https://doi.org/10.1038/nrd4002.
31. Purdom-Dickinson SE, Lin Y, Dedek M, Morrissy S, Johnson J, Chen QM. Induction of antioxidant and detoxification response by oxidants in cardiomyocytes: evidence from gene expression profiling and activation of Nrf2 transcription factor. J Mol Cell Cardiol. 2007; 42: 159-76. https://doi.org/10.1016/j.yjmcc.2006.09.012.
32. Li B, Kim DS, Yadav RK, Kim HR, Chae HJ. Sulforaphane prevents doxorubicin-induced oxidative stress and cell death in rat H9c2 cells. Int J Mol Med. 2015; 36: 53-64. https://doi.org/10.3892/ijmm.2015.2199.
33. Singh P, Sharma R, McElhanon K, Allen CD, Megyesi JK, Benes H, Singh SP. Sulforaphane protects the heart from doxorubicin-induced toxicity. Free Radic Biol Med. 2015; 86: 90-101. https://doi.org/10.1016/j.freeradbiomed.2015.05.028.
34. Wang LF, Su SW, Wang L, Zhang GQ, Zhang R, Niu YJ, Guo YS, Li CY, Jiang WB, Liu Y, Guo HC. Tert-butylhydroquinone ameliorates doxorubicin-induced cardiotoxicity by activating Nrf2 and inducing the expression of its target genes. Am J Transl Res. 2015; 7: 1724-35.
35. Otterbein LE, Foresti R, Motterlini R. Heme Oxygenase-1 and Carbon Monoxide in the Heart: The Balancing Act Between Danger Signaling and Pro-Survival. Circ Res. 2016; 118: 1940-59. https://doi.org/10.1161/circresaha.116.306588.
36. Hull TD, Boddu R, Guo L, Tisher CC, Traylor AM, Patel B, Joseph R, Prabhu SD, Suliman HB, Piantadosi CA, Agarwal A, George JF. Heme oxygenase-1 regulates mitochondrial quality control in the heart. JCI Insight. 2016; 1: e85817. https://doi.org/10.1172/jci.insight.85817.
37. Juhasz B, Varga B, Czompa A, Bak I, Lekli I, Gesztelyi R, Zsuga J, Kemeny-Beke A, Antal M, Szendrei L, Tosaki A. Postischemic cardiac recovery in heme oxygenase-1 transgenic ischemic/reperfused mouse myocardium. J Cell Mol Med. 2011; 15: 1973-82. https://doi.org/10.1111/j.1582-4934.2010.01153.x.
38. Yoshida T, Maulik N, Ho YS, Alam J, Das DK. H(mox-1) constitutes an adaptive response to effect antioxidant cardioprotection: A study with transgenic mice heterozygous for targeted disruption of the Heme oxygenase-1 gene. Circulation. 2001; 103: 1695-701. https://doi.org/10.1161/01.CIR.103.12.1695.
39. Kim DS, Chae SW, Kim HR, Chae HJ. CO and bilirubin inhibit doxorubicin-induced cardiac cell death. Immunopharmacol Immunotoxicol. 2009; 31: 64-70. https://doi.org/10.1080/08923970802354762.
40. Liu L, Zhang X, Qian B, Min X, Gao X, Li C, Cheng Y, Huang J. Over-expression of heat shock protein 27 attenuates doxorubicin-induced cardiac dysfunction in mice. Eur J Heart Fail. 2007; 9: 762-9. https://doi.org/10.1016/j.ejheart.2007.03.007.
41. Piantadosi CA, Carraway MS, Babiker A, Suliman HB. Heme oxygenase-1 regulates cardiac mitochondrial biogenesis via Nrf2-mediated transcriptional control of nuclear respiratory factor-1. Circ Res. 2008; 103: 1232-40. https://doi.org/10.1161/01.RES.0000338597.71702.ad.
42. Bak I, Czompa A, Juhasz B, Lekli I, Tosaki A. Reduction of reperfusion-induced ventricular fibrillation and infarct size via heme oxygenase-1 overexpression in isolated mouse hearts. J Cell Mol Med. 2010; 14: 2268-72. https://doi.org/10.1111/j.1582-4934.2010.01142.x.
43. Hinkel R, Lange P, Petersen B, Gottlieb E, Ng JK, Finger S, Horstkotte J, Lee S, Thormann M, Knorr M, El-Aouni C, Boekstegers P, Reichart B, et al. Heme Oxygenase-1 Gene Therapy Provides Cardioprotection Via Control of Post-Ischemic Inflammation: An Experimental Study in a Pre-Clinical Pig Model. J Am Coll Cardiol. 2015; 66: 154-65. https://doi.org/10.1016/j.jacc.2015.04.064.
44. Wang G, Hamid T, Keith RJ, Zhou G, Partridge CR, Xiang X, Kingery JR, Lewis RK, Li Q, Rokosh DG, Ford R, Spinale FG, Riggs DW, et al. Cardioprotective and antiapoptotic effects of heme oxygenase-1 in the failing heart. Circulation. 2010; 121: 1912-25. https://doi.org/10.1161/circulationaha.109.905471.
45. Suliman HB, Carraway MS, Ali AS, Reynolds CM, Welty-Wolf KE, Piantadosi CA. The CO/HO system reverses inhibition of mitochondrial biogenesis and prevents murine doxorubicin cardiomyopathy. J Clin Invest. 2007; 117: 3730-41. https://doi.org/10.1172/jci32967.
46. Vargas MR, Pehar M, Cassina P, Martinez-Palma L, Thompson JA, Beckman JS, Barbeito L. Fibroblast growth factor-1 induces heme oxygenase-1 via nuclear factor erythroid 2-related factor 2 (Nrf2) in spinal cord astrocytes:
consequences for motor neuron survival. *J Biol Chem.* 2005; 280: 25571-9. https://doi.org/10.1074/jbc.M501920200.
47. Jain A, Rusten TE, Katheder N, Elvenes J, Bruun JA, Sjottem E, Lamark T, Johansen T. p62/Sequestosome-1, Autophagy-related Gene 8, and Autophagy in Drosophila Are Regulated by Nuclear Factor Erythroid 2-related Factor 2 (NRF2), Independent of Transcription Factor TFEB. *J Biol Chem.* 2015; 290: 14945-62. https://doi.org/10.1074/jbc.M115.656116.
48. Wang ZG, Wang Y, Huang Y, Lu Q, Zheng L, Hu D, Feng WK, Liu YL, Ji KT, Zhang HY, Fu XB, Li XK, Chu MP, et al. bFGF regulates autophagy and ubiquitinated protein accumulation induced by myocardial ischemia/reperfusion via the activation of the PI3K/Akt/mTOR pathway. *Sci Rep.* 2015; 5: 9287. https://doi.org/10.1038/srep09287.
49. Saxton RA, Sabatini DM. mTOR Signaling in Growth, Metabolism, and Disease. *Cell.* 2017; 169: 361-71. https://doi.org/10.1016/j.cell.2017.03.035.
50. Zhu W, Soonpaa MH, Chen H, Shen W, Payne RM, Liechty EA, Caldwell RL, Shou W, Field LJ. Acute doxorubicin cardiotoxicity is associated with p53-induced inhibition of the mammalian target of rapamycin pathway. *Circulation.* 2009; 119: 99-106. https://doi.org/10.1161/circulationaha.108.799700.
51. Wu Y, Wang J, Yu X, Li D, Han X, Fan L. Sevoflurane ameliorates doxorubicin-induced myocardial injury by affecting the phosphorylation states of proteins in PI3K/Akt/mTOR signaling pathway. *Cardiol J.* 2017. https://doi.org/10.5603/CJ.a2017.0018.
52. Cao Y, Shen T, Huang X, Lin Y, Chen B, Pang J, Li G, Wang Q, Zohrabian S, Duan C, Ruan Y, Man Y, Wang S, et al. Astragalus polysaccharide restores autophagic flux and improves cardiomyocyte function in doxorubicin-induced cardiotoxicity. *Oncotarget.* 2017; 8: 4837-4848. https://doi.org/10.18632/oncotarget.13596.
53. Chen PY, Friesel R. FGFR1 forms an FRS2-dependent complex with mTOR to regulate smooth muscle marker gene expression. *Biochem Biophys Res Commun.* 2009; 382: 424-9. https://doi.org/10.1016/j.bbrc.2009.03.040.
54. Xu X, Bucala R, Ren J. Macrophage migration inhibitory factor deficiency augments doxorubicin-induced cardiomyopathy. *J Am Heart Assoc.* 2013; 2: e000439. https://doi.org/10.1161/JAHA.113.000439.
55. Ichimura Y, Waguri S, Sou YS, Kageyama S, Hasegawa J, Ishimura R, Saito T, Yang Y, Kouno T, Fukutomi T, Hoshii T, Hirao A, Takagi K, et al. Phosphorylation of p62 activates the Keap1-Nrf2 pathway during selective autophagy. *Mol Cell.* 2013; 51: 618-31. https://doi.org/10.1016/j.molcel.2013.08.003.
56. Nikam A, Ollivier A, Rivard M, Wilson JL, Mebarki K, Martens T, Dubois-Randé JL, Motterlini R, Foresti R. Diverse Nrf2 Activators Coordinated to Cobalt Carbonyls Induce Heme Oxygenase-1 and Release Carbon Monoxide *in vitro* and *in vivo*. *J Med Chem.* 2016; 59: 756-62. https://doi.org/10.1021/acs.jmedchem.5b01509.
57. Doble BW, Kardami E. Basic fibroblast growth factor stimulates connexin-43 expression and intercellular communication of cardiac fibroblasts. *Mol Cell Biochem.* 1995; 143: 81-7. https://doi.org/10.1007/BF00925930.
58. Doble BW, Chen Y, Bosc DG, Litchfield DW, Kardami E. Fibroblast growth factor-2 decreases metabolic coupling and stimulates phosphorylation as well as masking of connexin43 epitopes in cardiac myocytes. *Circ Res.* 1996; 79: 647-58. https://doi.org/10.1161/01.RES.79.4.647
59. Pasumarthi KB, Kardami E, Cattini PA. High and low molecular weight fibroblast growth factor-2 increase proliferation of neonatal rat cardiac myocytes but have differential effects on binucleation and nuclear morphology. Evidence for both paracrine and intracrine actions of fibroblast growth factor-2. *Circ Res.* 1996; 78: 126-36.
60. Sofronescu AG, DeFillieux KA, Cattini PA. FGF-16 is a target for adrenergic stimulation through NF-kappaB activation in postnatal cardiac cells and adult mouse heart. *Cardiovasc Res.* 2010; 87: 102-10. https://doi.org/10.1093/cvr/cvq025. |
H1/2020
Half-year Financial Report
1-6/2020
Corona crisis impacted revenue and EBITA, yet both still improved year-on-year; strong cash flow
1 April – 30 June 2020
- **Revenue**: EUR 518.5 (512.3) million, up by 1.2 percent, 3.3 percent in local currencies. Organic growth was -4.8 percent. Services business revenue up by 3.1 percent, 5.8 percent in local currencies.
- **Adjusted EBITDA**: EUR 18.5 (10.0) million, or 3.6 (2.0) percent of revenue.
- **Adjusted EBITA**: EUR 4.8 (-3.2) million, or 0.9 (-0.6) percent of revenue.
- **EBITA**: EUR 8.4 (-4.1) million, or 1.6 (-0.8) percent of revenue.
- **Operating cash flow before financial and tax items**: EUR 48.2 (29.1) million.
- **Earnings per share, undiluted**: EUR 0.01 (-0.06) per share.
- New EUR 35.0 million hybrid bond in May, redemption of old EUR 66.1 million hybrid notes in June
- Strong liquidity position
1 January – 30 June 2020
- **Order backlog**: EUR 1,739.7 (1,704.7) million, up by 2.1 percent.
- **Revenue**: EUR 1,060.1 (1,026.7) million, up by 3.2 percent, 5.2 percent in local currencies. Organic growth was -2.3 percent. Services business revenue up by 7.8 percent, 10.3 percent in local currencies.
- **Adjusted EBITDA**: EUR 44.7 (37.1) million, or 4.2 (3.6) percent of revenue.
- **Adjusted EBITA**: EUR 17.0 (10.6) million, or 1.6 (1.0) percent of revenue.
- **EBITA**: EUR 18.4 (5.2) million, or 1.7 (0.5) percent of revenue.
- **Operating cash flow before financial and tax items**: EUR 104.3 (59.2) million.
- **Earnings per share, undiluted**: EUR 0.02 (-0.04) per share.
- **Net debt/EBITDA***: 0.1x (0.8x).
Unless otherwise noted, the figures in brackets refer to the corresponding period in the previous year.
* Based on calculation principles confirmed with the lending parties.
KEY FIGURES
| EUR million | 4–6/20 | 4–6/19 | Change | 1–6/20 | 1–6/19 | Change | 1–12/19 |
|-------------|--------|--------|--------|--------|--------|--------|---------|
| Order backlog | 1,739.7 | 1,704.7 | 2.1% | 1,739.7 | 1,704.7 | 2.1% | 1,670.5 |
| Revenue | 518.5 | 512.3 | 1.2% | 1,060.1 | 1,026.7 | 3.2% | 2,123.2 |
| Adjusted EBITDA | 18.5 | 10.0 | 83.9% | 44.7 | 37.1 | 20.5% | 120.4 |
| Adjusted EBITDA margin, % | 3.6 | 2.0 | | 4.2 | 3.6 | | 5.7 |
| EBITDA | 22.1 | 9.1 | 142.2% | 46.2 | 31.7 | 45.7% | 103.0 |
| EBITDA margin, % | 4.3 | 1.8 | | 4.4 | 3.1 | | 4.8 |
| Adjusted EBITA | 4.8 | -3.2 | | 17.0 | 10.6 | 60.6% | 67.2 |
| Adjusted EBITA margin, % | 0.9 | -0.6 | | 1.6 | 1.0 | | 3.2 |
| EBITA | 8.4 | -4.1 | | 18.4 | 5.2 | 253.5% | 49.8 |
| EBITA margin, % | 1.6 | -0.8 | | 1.7 | 0.5 | | 2.3 |
| Operating profit | 5.0 | -7.7 | | 11.5 | -2.4 | | 35.3 |
| Operating profit margin, % | 1.0 | -1.5 | | 1.1 | -0.2 | | 1.7 |
| Result for the period | 2.1 | -7.1 | | 3.7 | -4.1 | | 22.6 |
| Earnings per share, undiluted, EUR | 0.01 | -0.06 | | 0.02 | -0.04 | | 0.14 |
| Operating cash flow before financial and tax items | 48.2 | 29.1 | 65.8% | 104.3 | 59.2 | 76.2% | 143.7 |
| Cash conversion (LTM), % | | | | 160.7 | 169.9 | | 139.5 |
| Working capital | | | | -161.3 | -80.8 | -99.6% | -100.9 |
| Interest-bearing net debt | | | | 138.8 | 158.9 | -12.7% | 168.4 |
| Net debt/EBITDA* | | | | 0.1 | 0.8 | | 1.4 |
| Gearing, % | | | | 72.5 | 77.3 | | 73.6 |
| Equity ratio, % | | | | 18.6 | 20.8 | | 21.5 |
| Personnel, end of period | | | | 15,902 | 14,681 | 8.3% | 16,273 |
* Based on calculation principles confirmed with the lending parties.
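The ratio arithmetic behind the Change and margin columns is straightforward; the following is an illustrative sanity check of my own, not part of the report. Note that some Change entries, such as the 83.9 percent for adjusted EBITDA, were evidently computed from unrounded underlying values and do not reproduce exactly from the rounded table figures.

```python
# Illustrative recomputation of selected figures from the key figures table.
# Inputs are the rounded EUR million values as reported; results are rounded
# to one decimal as in the report.

def pct_change(current, prior):
    """Year-on-year change in percent."""
    return round((current / prior - 1) * 100, 1)

def margin(figure, revenue):
    """Figure as a percentage of revenue."""
    return round(figure / revenue * 100, 1)

assert pct_change(518.5, 512.3) == 1.2        # Q2 revenue change
assert pct_change(1739.7, 1704.7) == 2.1      # order backlog change
assert margin(18.5, 518.5) == 3.6             # Q2 adjusted EBITDA margin
assert margin(4.8, 518.5) == 0.9              # Q2 adjusted EBITA margin
assert margin(44.7, 1060.1) == 4.2            # H1 adjusted EBITDA margin
```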
Ari Lehtoranta, President and CEO:
"Like we estimated in our first quarter report, the impacts of the corona crisis were more visible to our business in the second quarter. The wellbeing of our employees, customers and other stakeholders continued to be our first priority. Fortunately, all of our infected employees have recovered from Covid-19. Most of our operating countries were locked down especially in April-May, at which time there were more of our workforce absent as well as more work site delays and closures. The government restrictions and the impacts to our business started to clearly ease up in June. In the latter part of the quarter, several governments announced also material stimulus packages to accelerate the return of the economies to more normal business conditions.
Our order backlog increased by 2.1 percent to EUR 1,739.7 (1,704.7) million. The coronavirus pandemic had an impact on both revenue and profitability in the second quarter, but there was still improvement year-on-year. Our second quarter revenue was EUR 518.5 (512.3) million, up by 1.2 percent or 3.3 percent in local currencies. Measured in local currencies, the Services business revenue grew by 5.8 percent, while the Projects business revenue declined by 0.6 percent in the second quarter. The Services business accounted for 61.9 (60.8) percent of Group revenue. Our adjusted EBITA improved to EUR 4.8 (-3.2) million, or 0.9 (-0.6) percent of revenue. Our EBITA improved to EUR 8.4 (-4.1) million, or 1.6 (-0.8) percent of revenue, being positively impacted by a one-off capital gain. In Services, our ad-hoc orders were lower in April–May, followed by a recovery in June. In Projects, the corona pandemic impacted our productivity. The Projects business profitability was also affected by the completion of the last few old projects, the ramp-down of the large projects business in Denmark and the limited ability to adjust personnel costs through temporary lay-offs in Central Europe.
Our cash flow generation and liquidity position continued to be strong in the second quarter. Our operating cash flow before financial and tax items improved to EUR 48.2 (29.1) million. Cash flow was positively impacted by the postponement of EUR 29.6 million of payments to the authorities, which will be paid out in July–November. At the end of the second quarter, our interest-bearing net debt amounted to EUR 138.8 (158.9) million, or EUR 9.9 (24.7) million excluding lease liabilities. The net debt/EBITDA ratio was 0.1x (0.8x). Our cash and cash equivalents increased to EUR 130.2 (103.6) million. The integration of our most recent acquisitions and the divestment of certain parts of our industrial operations in Finland progressed according to plan.
At present the main issue is how quickly the European economies will recover and return to growth. Naturally we hope that there will not be a second wave of the virus spread leading to new lockdown measures. I am so far pleased with our ability to manage the negative impacts of the crisis. We have executed contingency and cost-saving actions since March and have furthermore benefited from the performance management rooted throughout the organisation during the Fit phase of our strategy.
Due to the poor visibility and the extraordinary circumstances, Caverion withdrew its guidance for 2020 in April. At present it is still difficult to forecast how deep and long the current downturn will be and how fast the economic recovery will be. For Caverion, the business volume and the amount of new order intake will be key determinants of our performance in the second half of this year. Nevertheless, we expect a pick-up in our Adjusted EBITA in the second half of 2020 compared to the first half of 2020.
We continued our most important development efforts in the areas of digitalisation, sustainability and energy efficiency in the first half of the year. We have been pleased to see that a significant share of the economic stimulus packages is directed towards sustainable investments enabling smart buildings and cities. This is the area where we have our strategic focus. We are well positioned to support our customers’ sustainability targets. Our own sustainability KPI targets will be published this year. We will come back to this in more detail at our Sustainability Morning, to be held in connection with our Q3/2020 report in Helsinki on 5 November 2020. Our target is to come out of the crisis as a stronger company than we entered it."
OUTLOOK FOR 2020
Market outlook for Caverion’s services and solutions
A large part of Caverion’s services is vital in keeping critical operations and infrastructure up and running. This includes ensuring the continued functioning of energy and transportation infrastructure, health facilities, the pharmaceutical and food industries, food retail and logistics, as well as facilities and services used by public authorities. An important share of these services needs to be performed regardless of the coronavirus pandemic. The economic stimulus packages provided by governments and the EU are expected to increase infrastructure, health care and different types of sustainable investments in Caverion’s operating area.
In Caverion’s operating countries, the lockdown measures of the first wave of the corona pandemic impacted Caverion’s business mainly between mid-March and the end of May, after which they were gradually dismantled and their impact diminished. At the end of the second quarter, the corona pandemic was well contained in most Caverion countries, while at the global level the pandemic continued to spread. A possible second wave of the coronavirus spread could lead to renewed lockdown measures in Caverion countries as well and again increase the negative business impacts. Any further restrictions, such as limits on industrial operations or temporary close-downs of premises or construction sites, could have an impact on Caverion’s business.
The corona crisis has led to a global downturn, but it is still unclear how deep and how long the downturn will be and how fast the economic recovery will be. The business volume and the amount of new order intake are important determinants of Caverion’s performance in the second half of 2020, but both are still difficult to predict at present. While the digitalisation and sustainability megatrends are in many ways favourable to Caverion, a global downturn will most likely have a negative impact on the general level of demand and the pricing environment for Caverion’s offering as well. The demand for new construction projects will most likely decrease, but there may also be an impact on smaller ad-hoc services and projects.
The corona crisis and the resulting downturn may also create additional demand and new opportunities for some of Caverion’s solutions going forward. Remotely controlled buildings help customers save time and money, and also enable the buildings to be operated more safely. Special requirements also apply to ventilation and air-conditioning systems, increasing the demand for ventilation-related upgrades based on new guidelines and requirements.
Despite the coronavirus and its economic effects, the overall megatrends in the industry, such as the increase of technology in built environments, energy efficiency requirements, increasing digitalisation and automation as well as urbanisation, remain strong and are expected to support demand for Caverion’s services and solutions over the coming years. The sustainability trend in particular is expected to remain strong. Increasing awareness of sustainability is supported by both EU-driven regulation and national legislation setting higher targets and actions for energy efficiency and carbon neutrality.
Services
The corona crisis and the resulting economic slowdown are in general expected to impact the demand environment negatively in Services, especially in ad-hoc services. However, Caverion’s Services business is by nature more stable and resilient through business cycles than the Projects business. As the amount of technology in buildings increases, the need for new services and digital solutions is expected to grow. Customers’ focus on their core operations continues to open up outsourcing, maintenance and technical building management opportunities for Caverion. In some cases, the demand for smaller ad-hoc work in empty buildings may also increase. There is continued interest in services supporting sustainability, such as energy management. In Cooling, a technical shift from F-gases to CO2-based refrigeration is ongoing, creating an increased need for upgrades and modernisations.
Projects
The corona crisis and the resulting economic slowdown are in general expected to impact the demand environment negatively in Projects. The demand for new construction projects will most likely decrease, but renovation construction, on the other hand, is expected to continue increasing. The current circumstances also allow repairs and many types of installation projects to be carried out at unoccupied properties and sites. From a trends perspective, the requirements for increased energy efficiency, better indoor climate and tightening environmental legislation continue to drive demand over the coming years. Stimulus packages are also expected to have a positive impact on general demand in the Projects business.
Guidance for 2020
Caverion announced on 14 April 2020 that it had withdrawn its guidance for 2020 due to the increased uncertainty around the market outlook as a result of the coronavirus pandemic.
Caverion may provide an updated guidance for 2020 once the visibility improves and more reliable estimates can be made.
INFORMATION SESSION, WEBCAST AND CONFERENCE CALL
Caverion will hold a news conference and webcast on the Half-year Financial Report on Thursday, 6 August 2020, at 10.00 a.m. Finnish time (EEST) at Hotel Kämp, Kluuvikatu 2, Helsinki, Finland. The news conference can also be viewed live on Caverion’s website at www.caverion.com/investors.
It is also possible to participate in the event through a conference call by calling the assigned number +44 (0)330 336 9105 at 9:55 a.m. (Finnish time, EEST) at the latest. The participant code for the conference call is “5730948/Caverion”. More practical information on the news conference can be found on Caverion's website, www.caverion.com/investors.
Financial information to be published in 2020
Q3 Interim Report will be published on 5 November 2020. Financial reports and other investor information are available on Caverion’s website www.caverion.com/investors. The materials may also be ordered by sending an e-mail to email@example.com.
CAVERION CORPORATION
For further information, please contact:
Martti Ala-Härkönen, Chief Financial Officer, Caverion Corporation, tel. +358 40 737 6633, firstname.lastname@example.org
Milena Hæggström, Head of Investor Relations and External Communications, Caverion Corporation, tel. +358 40 5581 328, email@example.com
Distribution: Nasdaq Helsinki, principal media, www.caverion.com
GROUP FINANCIAL DEVELOPMENT
Key figures
**Order backlog (EUR million)**
- Q2/2018: 1,597
- Q2/2019: 1,705
- Q2/2020: 1,740
**Revenue (EUR million)**
- Q2/2018: 565
- Q2/2019: 512
- Q2/2020: 518
**Adjusted EBITA (EUR million)**
- Q2/2018: 11.2
- Q2/2019: -3.2
- Q2/2020: 4.8
**Operating cash flow before financial and tax items (EUR million)**
- Q2/2018: -15
- Q2/2019: 29
- Q2/2020: 48
**Net debt (EUR million)**
- Q2/2019: 159
- Q4/2019: 168
- Q2/2020: 139
**Working capital (EUR million)**
- Q2/2019: -81
- Q4/2019: -101
- Q2/2020: -161
**Revenue by business unit % of revenue 1-6/2020**
- Services business unit 63%
- Projects business unit 37%
**Revenue by division % of revenue 1-6/2020**
- Sweden 20%
- Finland 19%
- Germany 17%
- Norway 15%
- Industry 13%
- Austria 9%
- Denmark 4%
- Other countries 3%
**Personnel by division at the end of June 2020**
- Finland 19%
- Sweden 18%
- Industry 17%
- Norway 15%
- Germany 14%
- Other countries 8%
- Austria 5%
- Denmark 4%
- Group Services 1%
Comparative figures for 2018 have not been restated according to IFRS 16.
Operating environment in the second quarter and during the first half of 2020
The overall market and demand situation continued to weaken in April due to the coronavirus pandemic. More of the workforce was absent and there were more work site delays and closures, especially in April–May. Most of Caverion’s operating countries were also locked down in the early part of the second quarter, while government restrictions and their impacts on Caverion’s business started to ease clearly in June. On a positive note, Caverion did not experience any major constraints from the supply chain perspective.
In order to minimise the negative financial impacts of the pandemic on its operations, Caverion continued its cost-saving actions and adapted its resources. In most of the operating countries, the key flexibility measures were the use of temporary lay-offs and the reduction of subcontracting. Due to the increased uncertainty around the market outlook as a result of the coronavirus pandemic, the President and CEO and the top management of Caverion also decided to voluntarily lower their compensation. The President and CEO of Caverion lowered his monthly base salary by 20 percent for six months and postponed the payment of his bonus for the financial year 2019 by six months. The Board of Directors of Caverion also decided on 30 April 2020, upon management’s suggestion, to postpone the commencement of the PSP 2020–2022 incentive plan, until the beginning of 2021 at the latest.
Services
The impacts of the coronavirus pandemic were more visible between mid-March and the end of May, when there were site access restrictions and fewer ad-hoc work orders, negatively impacting revenue and profitability. Government restrictions and their impacts on Caverion’s business started to ease clearly in June. In division Industry, the corona situation also postponed several annual spring and summer shutdowns in Finland until autumn, which is estimated to be the next high season for industrial shutdowns. The pricing environment also tightened in Services during the second quarter.
There was still generally increasing interest in services supporting sustainability, such as energy management and advisory services.
Projects
The impacts of the coronavirus pandemic were more visible between mid-March and the end of May, while government restrictions and their impacts on Caverion’s business started to ease clearly in June. More of the workforce was absent and there were more work site delays and closures.
The demand for new construction projects was negatively impacted by the corona pandemic, although less so for renovation construction. The pricing environment generally tightened in Projects during the second quarter. Stimulus packages did not yet have an impact on general demand.
Order backlog
Order backlog at the end of June increased by 2.1 percent to EUR 1,739.7 million from the end of June in the previous year (EUR 1,704.7 million). At comparable exchange rates the order backlog increased by 3.2 percent. Order backlog increased in Services compared to the previous year.
Revenue
April–June
| EUR million | 4-6/2020 | 4-6/2019 | Change | Change in local currencies | Organic growth * | Currency impact | Acquisitions and divestments impact |
|-------------|----------|----------|--------|---------------------------|------------------|----------------|-----------------------------------|
| Services | 321.1 | 311.3 | 3.1% | 5.8% | -7.1% | -2.6% | 12.9% |
| Projects | 197.4 | 201.0 | -1.8% | -0.6% | -1.2% | -1.2% | 0.6% |
| Group total | 518.5 | 512.3 | 1.2% | 3.3% | -4.8% | -2.1% | 8.1% |
* Revenue change in local currencies, excluding acquisitions and divestments
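The columns of the table above form a simple bridge: the change in local currencies should equal organic growth plus the acquisitions and divestments impact, and the reported change should equal the local-currency change plus the currency impact. A short consistency check (an illustrative sketch of my own, not part of the report; residuals of up to 0.1 percentage points reflect rounding):

```python
# Bridge check for the April-June revenue table. Figures are in percent as
# reported; a tolerance of 0.1 percentage points absorbs rounding residuals.

rows = {
    # name: (change, change_local, organic, currency, acquisitions)
    "Services":    (3.1, 5.8, -7.1, -2.6, 12.9),
    "Projects":    (-1.8, -0.6, -1.2, -1.2, 0.6),
    "Group total": (1.2, 3.3, -4.8, -2.1, 8.1),
}

for name, (change, local, organic, fx, m_and_a) in rows.items():
    # local-currency change = organic growth + acquisitions/divestments impact
    assert abs(local - (organic + m_and_a)) <= 0.1, name
    # reported change = local-currency change + currency impact
    assert abs(change - (local + fx)) <= 0.1, name
```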
Revenue for April–June was EUR 518.5 (512.3) million, an increase of 1.2 percent compared to the previous year. Organic growth was -4.8 percent. Revenue was impacted by fluctuations in currency exchange rates and includes the Maintpartner and Huurre acquisitions as of December 2019. At the previous year’s exchange rates, revenue was EUR 529.2 million and increased by 3.3 percent compared to the previous year. Changes in Swedish krona and Norwegian krone had a negative effect amounting to EUR 0.3 million and EUR 9.9 million, respectively.
Revenue increased in Finland, Germany, Industry and Other countries, while it decreased in the remaining divisions.
The revenue of the Services business unit increased and was EUR 321.1 (311.3) million in April–June, an increase of 3.1 percent, or 5.8 percent in local currencies. The revenue of the Projects business unit was EUR 197.4 (201.0) million in April–June, a decrease of 1.8 percent, or 0.6 percent in local currencies.
The Services business unit accounted for 61.9 (60.8) percent of Group revenue, and the Projects business unit for 38.1 (39.2) percent of Group revenue in April–June.
| EUR million | 1-6/2020 | 1-6/2019 | Change | Change in local currencies | Organic growth * | Currency impact | Acquisitions and divestments impact |
|-------------|----------|----------|--------|---------------------------|------------------|----------------|-----------------------------------|
| Services | 664.0 | 615.7 | 7.8% | 10.3% | -1.8% | -2.4% | 12.0% |
| Projects | 396.1 | 411.0 | -3.6% | -2.5% | -3.0% | -1.2% | 0.5% |
| Group total | 1,060.1 | 1,026.7 | 3.2% | 5.2% | -2.3% | -1.9% | 7.4% |
* Revenue change in local currencies, excluding acquisitions and divestments
Revenue for January–June was EUR 1,060.1 (1,026.7) million, an increase of 3.2 percent compared to the previous year. Organic growth was -2.3 percent. Revenue was impacted by fluctuations in currency exchange rates and includes the Maintpartner and Huurre acquisitions as of December 2019. At the previous year’s exchange rates, revenue was EUR 1,079.8 million and increased by 5.2 percent compared to the previous year. Changes in Swedish krona and Norwegian krone had a negative effect amounting to EUR 2.9 million and EUR 16.3 million, respectively.
Revenue increased in Sweden, Finland, Germany and Industry, while it decreased in other divisions.
The revenue of the Services business unit increased and was EUR 664.0 (615.7) million in January–June, an increase of 7.8 percent, or 10.3 percent in local currencies. The revenue of the Projects business unit was EUR 396.1 (411.0) million in January–June, a decrease of 3.6 percent, or 2.5 percent in local currencies.
The Services business unit accounted for 62.6 (60.0) percent of Group revenue, and the Projects business unit for 37.4 (40.0) percent of Group revenue in January–June.
Distribution of revenue by Division and Business Unit
| Revenue, EUR million | 4–6/2020 | % | 4–6/2019 | % | Change | 1–6/2020 | % | 1–6/2019 | % | Change | 1–12/2019 | % |
|----------------------|----------|-----|----------|-----|--------|----------|-----|----------|-----|--------|------------|-----|
| Sweden | 104.3 | 20.1| 107.9 | 21.1| -3.3% | 215.3 | 20.3| 214.5 | 20.9| 0.4% | 435.4 | 20.5|
| Finland | 103.8 | 20.0| 92.6 | 18.1| 12.1% | 203.1 | 19.2| 181.3 | 17.7| 12.0% | 384.3 | 18.1|
| Norway | 70.9 | 13.7| 89.5 | 17.5| -20.8% | 156.8 | 14.8| 188.6 | 18.4| -16.8% | 359.6 | 16.9|
| Germany | 91.4 | 17.6| 83.8 | 16.3| 9.2% | 180.3 | 17.0| 166.0 | 16.2| 8.6% | 355.5 | 16.7|
| Austria | 42.6 | 8.2 | 50.5 | 9.9 | -15.5% | 90.6 | 8.5 | 91.7 | 8.9 | -1.1% | 200.1 | 9.4 |
| Industry | 66.5 | 12.8| 46.4 | 9.1 | 43.2% | 134.7 | 12.7| 98.0 | 9.5 | 37.4% | 205.3 | 9.7 |
| Denmark | 20.7 | 4.0 | 25.8 | 5.0 | -19.5% | 46.1 | 4.3 | 53.2 | 5.2 | -13.5% | 109.5 | 5.2 |
| Other countries* | 18.2 | 3.5 | 15.9 | 3.1 | 14.2% | 33.1 | 3.1 | 33.4 | 3.3 | -0.9% | 73.6 | 3.5 |
| Group, total | 518.5 | 100 | 512.3 | 100 | 1.2% | 1,060.1 | 100 | 1,026.7 | 100 | 3.2% | 2,123.2 | 100 |
| Services | 321.1 | 61.9| 311.3 | 60.8| 3.1% | 664.0 | 62.6| 615.7 | 60.0| 7.8% | 1,274.9 | 60.0|
| Projects | 197.4 | 38.1| 201.0 | 39.2| -1.8% | 396.1 | 37.4| 411.0 | 40.0| -3.6% | 848.3 | 40.0|
*Other countries include the Baltic countries, Poland (until 28 February 2019) and Russia.
Profitability
EBITA and operating profit
April–June
Adjusted EBITA for April–June amounted to EUR 4.8 (-3.2) million, or 0.9 (-0.6) percent of revenue and EBITA to EUR 8.4 (-4.1) million, or 1.6 (-0.8) percent of revenue.
In Services, ad-hoc orders were lower in April–May, followed by a recovery in June. In Projects, the corona pandemic impacted productivity. The profitability of the Projects business was also affected by the completion of the last few old projects, the ramp-down of the large projects business in Denmark and the inability to adjust personnel costs through temporary lay-offs in Central Europe.
The operating profit (EBIT) for April–June improved to EUR 5.0 (-7.7) million, or 1.0 (-1.5) percent of revenue.
In the adjusted EBITA calculation, the capital gains from divestments and the transaction costs related to divestments and acquisitions totalled EUR –7.2 million. The write-downs, expenses and/or income from separately identified major risk projects amounted to EUR 3.0 million. The Group’s restructuring costs amounted to EUR 0.8 million, the majority of which related to Sweden and Germany. Other items totalled EUR –0.2 million.
Costs related to materials and supplies increased to EUR 131.0 (127.8) million and external services decreased to EUR 96.5 (99.6) million in April–June. Personnel expenses increased by 2.9 percent from the previous year and amounted to a total of EUR 224.9 (218.6) million for April–June, explained by the recent acquisitions. Excluding the effect of these acquisitions, personnel expenses decreased from the previous year. Division Sweden received a government grant during the second quarter for short-term layoffs and sick-leave compensation amounting to about EUR 2.4 million. This has been presented in the income statement as a reduction of personnel costs. Other operating expenses decreased to EUR 52.5 (58.0) million. Other operating income was EUR 8.5 (0.8) million. The capital gain from the sale of a subsidiary in Russia is reported under other operating income for the period and amounted to EUR 7.3 million, mainly consisting of cumulative translation differences. The transaction had no cash flow impact. The figures for 2019 do not include the costs of the companies acquired in late 2019.
Depreciation, amortisation and impairment amounted to EUR 17.2 (16.9) million in April–June. Of this, EUR 13.7 (13.3) million related to depreciation of tangible assets and EUR 3.4 (3.6) million to amortisation of intangible assets. The majority of the depreciation, EUR 12.1 (12.0) million, related to right-of-use assets in accordance with IFRS 16. The amortisation related to intangibles allocated on acquisitions and to IT.
EBITA is defined as Operating profit + amortisation and impairment on intangible assets. Adjusted EBITA = EBITA before items affecting comparability (IAC). Items affecting comparability (IAC) in 2020 are material items or transactions, which are relevant for understanding the financial performance of Caverion when comparing the profit of the current period with
that of the previous periods. These items can include (1) capital gains and/or losses and transaction costs related to divestments and acquisitions; (2) write-downs, expenses and/or income from separately identified major risk projects; (3) restructuring expenses and (4) other items that according to Caverion management’s assessment are not related to normal business operations. In 2019 and 2020, major risk projects only include one risk project in Germany reported under category (2). In 2019, legal and other costs related to the German anti-trust fine and a compensation from the previous owners of a German subsidiary related to the cartel case were reported under category (4). In 2020, costs related to a subsidiary in Russia sold during the second quarter have been reported under category (4).
January–June
Adjusted EBITA for January–June amounted to EUR 17.0 (10.6) million, or 1.6 (1.0) percent of revenue and EBITA to EUR 18.4 (5.2) million, or 1.7 (0.5) percent of revenue.
The operating profit (EBIT) for January–June improved to EUR 11.5 (-2.4) million, or 1.1 (-0.2) percent of revenue.
In the adjusted EBITA calculation, the capital gains from divestments and the transaction costs related to divestments and acquisitions totalled EUR -6.9 million. The write-downs, expenses and/or income from separately identified major risk projects amounted to EUR 3.1 million. The Group’s restructuring costs amounted to EUR 2.0 million, the majority of which related to Sweden, Industry, Denmark and Germany. Other items totalled EUR 0.3 million.
Costs related to materials and supplies increased to EUR 259.3 (252.1) million and external services decreased to EUR 191.0 (194.2) million in January–June. Personnel expenses increased by 6.0 percent from the previous year and amounted to a total of EUR 465.5 (439.2) million for January–June, explained by the recent acquisitions. Excluding the effect of these acquisitions, personnel expenses decreased from the previous year. Division Sweden received a government grant during the second quarter for short-term layoffs and sick-leave compensation amounting to about EUR 2.4 million. This has been presented in the income statement as a reduction of personnel costs. Other operating expenses decreased to EUR 107.1 (110.8) million. Other operating income was EUR 9.0 (1.3) million. The capital gain from the sale of a subsidiary in Russia is reported under other operating income for the period and amounted to EUR 7.3 million, mainly consisting of cumulative translation differences. The transaction had no cash flow impact. The figures for 2019 do not include the costs of the companies acquired in late 2019.
Depreciation, amortisation and impairment amounted to EUR 34.8 (34.2) million in January–June. Of this, EUR 27.9 (26.5) million related to depreciation of tangible assets and EUR 6.9 (7.6) million to amortisation of intangible assets. The majority of the depreciation, EUR 24.5 (23.9) million, related to right-of-use assets in accordance with IFRS 16. The amortisation related to intangibles allocated on acquisitions and to IT.
Adjusted EBITA and items affecting comparability (IAC)
| EUR million | 4–6/2020 | 4–6/2019 | 1–6/2020 | 1–6/2019 | 1–12/2019 |
|-------------|----------|----------|----------|----------|-----------|
| EBITA | 8.4 | -4.1 | 18.4 | 5.2 | 49.8 |
| EBITA margin, % | 1.6 | -0.8 | 1.7 | 0.5 | 2.3 |
| **Items affecting comparability (IAC)** | | | | | |
| - Capital gains and/or losses and transaction costs related to divestments and acquisitions | -7.2 | 0.3 | -6.9 | 2.5 | 4.8 |
| - Write-downs, expenses and income from major risk projects* | 3.0 | | 3.1 | 1.6 | 17.1 |
| - Restructuring costs | 0.8 | 0.5 | 2.0 | 1.0 | 4.6 |
| - Other items** | -0.2 | 0.1 | 0.3 | 0.2 | -9.0 |
| Adjusted EBITA | 4.8 | -3.2 | 17.0 | 10.6 | 67.2 |
| Adjusted EBITA margin, % | 0.9 | -0.6 | 1.6 | 1.0 | 3.2 |
* Major risk projects include only one risk project in Germany in 2019 and 2020.
** Including the German anti-trust fine related legal and other costs, a compensation from the previous owners of a German subsidiary related to the cartel case and costs related to a subsidiary in Russia sold during the second quarter
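The reconciliation in the table can be reproduced directly: adjusted EBITA equals EBITA plus the signed IAC items. A minimal Python sketch using the 4–6/2020 column (other columns may deviate by EUR 0.1 million because the underlying euro amounts are rounded):

```python
# Adjusted EBITA = EBITA + items affecting comparability (IAC),
# using the 4-6/2020 column of the table above (EUR million).
ebita = 8.4
iac = [
    -7.2,  # capital gains/losses and M&A transaction costs
     3.0,  # write-downs, expenses and income from major risk projects
     0.8,  # restructuring costs
    -0.2,  # other items
]
adjusted_ebita = round(ebita + sum(iac), 1)
adjusted_margin = round(adjusted_ebita / 518.5 * 100, 1)  # vs Q2 revenue

print(adjusted_ebita)   # 4.8 (EUR million)
print(adjusted_margin)  # 0.9 (% of revenue)
```

Both values match the "Adjusted EBITA" and "Adjusted EBITA margin" rows for 4–6/2020.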
Result before taxes, result for the period and earnings per share
Result before taxes amounted to EUR 5.0 (-5.9) million, result for the period to EUR 3.7 (-4.1) million, and earnings per share to EUR 0.02 (-0.04) in January–June. Net financing expenses in January–June were EUR 6.5 (3.4) million. This includes an interest cost on lease liabilities amounting to EUR 2.3 (2.6) million and an exchange rate loss from an internal loan denominated in euros in Russia amounting to EUR 1.0 million.
In April–June, result before taxes improved to EUR 2.7 (-9.9) million, result for the period to EUR 2.1 (-7.1) million, and earnings per share to EUR 0.01 (-0.06).
The Group’s effective tax rate was 26.0 (30.8) percent in January–June.
Capital expenditure, acquisitions and disposals
Gross capital expenditure on non-current assets totalled EUR 12.2 (8.3) million in January–June, representing 1.2 (0.8) percent of revenue. Investments in information technology totalled EUR 5.5 (4.6) million. IT investments continued to be focused on building a harmonised IT infrastructure and common platforms as well as datacenter consolidation. IT systems and mobile tools were also developed to improve the Group’s internal processes and efficiency going forward. Other investments, including acquisitions, amounted to EUR 6.8 (3.7) million.
In March 2020, Caverion signed an agreement to acquire Gunderlund A/S, a Danish company specialising in power grid expansions and renovations. The revenue of the acquired company amounted to EUR 3.2 million in the twelve-month period ending September 2019. Gunderlund employs about 10 people. The transaction value was not disclosed. The purchase price was paid in cash.
In June, Caverion signed an agreement to sell certain Finnish operations of Caverion Industria Ltd to Elcoline Oy, based on the conditions imposed on the Maintpartner transaction by the Finnish Competition and Consumer Authority (the “FCCA”). The buyer is a Finnish, internationally operating provider of industrial maintenance with approximately 300 employees before the transaction. According to a stock exchange release published by Caverion on 22 November 2019, the FCCA approval of the Maintpartner transaction included certain conditions under which Caverion was to divest approximately 6.5 percent of the post-transaction revenue (approximately EUR 300 million in 2018) of the Industry division in Finland. The business transfer is expected to be completed during autumn 2020. The transaction value will not be disclosed.
Caverion sold a subsidiary in Russia during the second quarter. The capital gain from the sale is reported under other operating income for the period and it amounted to EUR 7.3 million, mainly consisting of cumulative translation differences. The transaction had no cash flow impact.
Cash flow, working capital and financing
The Group’s operating cash flow before financial and tax items improved to EUR 104.3 (59.2) million in January–June and cash conversion (LTM) was 160.7 (169.9) percent. The Group’s free cash flow improved to EUR 91.0 (52.1) million. Cash flow after investments was EUR 85.7 (47.4) million.
In April–June, the Group’s operating cash flow before financial and tax items improved to EUR 48.2 (29.1) million. Cash flow was positively impacted by postponing authority payments to the value of EUR 29.6 million, which will be paid out in July–November. The Group’s free cash flow improved to EUR 45.0 (25.2) million. Cash flow after investments was EUR 43.1 (23.9) million.
The Group’s working capital improved to EUR -161.3 (-80.8) million at the end of June. There were improvements in divisions Finland, Sweden, Industry, Germany and Austria compared to the previous year. The amount of trade and POC receivables decreased to EUR 488.4 (502.2) million and other current receivables to EUR 24.0 (25.2) million. On the liabilities side, advances received increased to EUR 236.5 (196.5) million and other current liabilities to EUR 267.0 (234.5) million, while trade and POC payables decreased to EUR 189.3 (194.5) million.
Caverion’s liquidity position was strong and Caverion had a high amount of undrawn credit facilities on 30 June 2020. Caverion’s cash and cash equivalents amounted to EUR 130.2 (103.6) million at the end of June. In addition, Caverion had undrawn revolving credit facilities amounting to EUR 100.0 million and undrawn overdraft facilities amounting to EUR 19.0 million.
The Group’s gross interest-bearing loans and borrowings excluding lease liabilities amounted to EUR 140.1 (128.3) million at the end of June, and the average interest rate was 2.7 (2.9) percent. Approximately 36 percent of the loans have been raised from banks and other financial institutions and approximately 64 percent from capital markets. Lease liabilities amounted to EUR 128.9 (134.3) million at the end of June 2020, resulting in total gross interest-bearing liabilities of EUR 269.0 (262.6) million.
The Group’s interest-bearing net debt excluding lease liabilities amounted to EUR 9.9 (24.7) million at the end of June and including lease liabilities to EUR 138.8 (158.9) million. At the end of June, the Group’s gearing was 72.5 (77.3) percent and the equity ratio 18.6 (20.8) percent. Excluding the effect of IFRS 16, the gearing would have amounted to 5.2 (12.0) percent and the equity ratio to 21.2 (24.0) percent.
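The net-debt figures quoted above follow directly from the balance-sheet components: gross interest-bearing liabilities less cash and cash equivalents. A minimal arithmetic check in Python:

```python
# Net interest-bearing debt at 30 June 2020 (EUR million),
# from the components quoted in the text above.
loans_excl_leases = 140.1   # gross loans and borrowings, excl. leases
lease_liabilities = 128.9   # IFRS 16 lease liabilities
cash = 130.2                # cash and cash equivalents

net_debt_excl_leases = round(loans_excl_leases - cash, 1)
net_debt_incl_leases = round(loans_excl_leases + lease_liabilities - cash, 1)

print(net_debt_excl_leases)  # 9.9
print(net_debt_incl_leases)  # 138.8
```

Both figures match the reported EUR 9.9 million and EUR 138.8 million.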
Caverion raised a 5-year TyEL pension loan of EUR 15 million on 29 April 2020.
On 15 May 2020 Caverion issued a EUR 35 million hybrid bond, an instrument subordinated to the company's other debt obligations and treated as equity in the IFRS financial statements. The hybrid bond does not confer on its holders the rights of a shareholder and does not dilute the holdings of the current shareholders. The coupon of the hybrid bond is 6.75 percent per annum until 15 May 2023. The hybrid bond has no maturity date, but the issuer is entitled to redeem it for the first time on 15 May 2023 and subsequently on each coupon interest payment date. If the hybrid bond is not redeemed on 15 May 2023, the coupon changes to 3-month EURIBOR plus the re-offer spread (706.8 bps) and a step-up of 500 bps.
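The post-call coupon mechanics can be illustrated with a short sketch. The EURIBOR fixings below are hypothetical (not from the report); the spread and step-up are those stated above, and any floor on the reference rate that the bond terms may contain is ignored:

```python
# Floating coupon of the hybrid bond after the first call date:
# 3M EURIBOR + re-offer spread (706.8 bps) + step-up (500 bps).
def stepped_up_coupon(euribor_3m_pct: float) -> float:
    """Coupon in percent for a given (hypothetical) 3M EURIBOR fixing."""
    re_offer_spread = 7.068  # 706.8 bps expressed in percent
    step_up = 5.00           # 500 bps expressed in percent
    return round(euribor_3m_pct + re_offer_spread + step_up, 3)

print(stepped_up_coupon(0.0))   # 12.068 (% p.a. with EURIBOR at zero)
print(stepped_up_coupon(-0.4))  # 11.668 (% p.a. with EURIBOR at -0.40%)
```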
The outstanding EUR 66.06 million 2017 Capital Securities were redeemed in full on 16 June 2020 in accordance with their terms and conditions.
In June, a one-year extension option was exercised, moving the maturity of the revolving credit facility (EUR 100 million) and the term loan (EUR 50 million) from 2022 to February 2023.
Caverion’s external loans are subject to a financial covenant based on the ratio of the Group’s net debt to EBITDA, which shall not exceed 3.5:1. At the end of June, the Group’s net debt to EBITDA was 0.1x according to the confirmed calculation principles. The confirmed calculation principles exclude the effects of the IFRS 16 standard and contain certain other adjustments.
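The covenant mechanics can be sketched as a simple ratio check. The covenant EBITDA under the confirmed calculation principles is not disclosed in this report, so the EBITDA figure below is purely illustrative, chosen only to reproduce the reported ~0.1x level:

```python
# Financial covenant check: Net debt / EBITDA must not exceed 3.5:1.
def covenant_ok(net_debt: float, ebitda: float, limit: float = 3.5) -> bool:
    """True if the leverage ratio is within the covenant limit."""
    return net_debt / ebitda <= limit

net_debt = 9.9              # EUR million, excl. lease liabilities (per text)
ebitda_illustrative = 99.0  # hypothetical covenant EBITDA, NOT disclosed

print(round(net_debt / ebitda_illustrative, 1))  # 0.1 (leverage ratio, x)
print(covenant_ok(net_debt, ebitda_illustrative))  # True
```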
Recent changes in financial reporting affecting comparability
Caverion made three important acquisitions in 2019. The Maintpartner and Huurre acquisitions were closed at the end of November 2019 and the acquisition of Pelisu Pelastussuunitelma Oy in October 2019, affecting the reporting as of December 2019 and November 2019, respectively. In December 2018, Caverion announced the sale of its small subsidiaries in Poland and the Czech Republic. These divestments were completed on 28 February 2019 and on 2 January 2019, respectively.
PERSONNEL
| Personnel by division, end of period | 6/2020 | 3/2020 | Change | 6/2020 | 6/2019 | Change | 12/2019 |
|-------------------------------------|--------|--------|--------|--------|--------|--------|---------|
| Sweden | 2,786 | 2,865 | -3% | 2,786 | 2,790 | 0% | 2,961 |
| Finland | 2,948 | 2,811 | 5% | 2,948 | 2,698 | 9% | 2,795 |
| Norway | 2,354 | 2,399 | -2% | 2,354 | 2,409 | -2% | 2,431 |
| Germany | 2,256 | 2,256 | 0% | 2,256 | 2,190 | 3% | 2,253 |
| Industry | 2,753 | 2,815 | -2% | 2,753 | 1,613 | 71% | 2,929 |
| Other countries | 1,197 | 1,238 | -3% | 1,197 | 1,235 | -3% | 1,223 |
| Austria | 847 | 834 | 2% | 847 | 829 | 2% | 828 |
| Denmark | 637 | 669 | -5% | 637 | 803 | -21% | 734 |
| Group Services | 124 | 123 | 1% | 124 | 114 | 9% | 119 |
| Group, total | 15,902 | 16,010 | -1% | 15,902 | 14,681 | 8% | 16,273 |
Caverion Group employed 16,021 (14,663) people on average in January–June 2020. At the end of June, the Group employed 15,902 (14,681) people. Personnel expenses for January–June amounted to EUR 465.5 (439.2) million.
Employee safety continued to be a high focus area in the first half of the year. Due to the corona situation, many extra actions have been taken to protect employees, organise the work so that it can be completed safely, and establish supportive trainings, tools and communication methods. The Group’s accident frequency rate at the end of June was 4.1 (5.8).
Changes in Caverion’s Group Management Board and organisation structure
Elina Engman, M.Sc. (Tech.) (born 1970), was appointed as Head of Division Industrial Solutions and a member of the Group Management Board of Caverion Corporation as of 1 January 2020. She has previously worked as Vice President at AF Consult responsible for AF’s renewables and energy business consulting, as President and CEO of Voimaosakeyhtiö SF, as Vice President, Energy at Kemira Corporation as well as in energy business related roles at Areva and Siemens.
SIGNIFICANT SHORT TERM RISKS AND UNCERTAINTIES
In the first half of 2020, the general risk level in the economy increased due to the outbreak of the coronavirus pandemic. In Caverion’s operating countries, the lockdown measures of the first wave of the corona pandemic impacted Caverion’s business mainly between mid-March and the end of May, after which they were gradually dismantled and their impact reduced. At the end of the second quarter, the corona pandemic was well contained in most Caverion countries, while at the global level the pandemic continued to spread.
A possible second wave of the coronavirus could lead to renewed lockdown measures in Caverion’s operating countries as well and again increase the negative business impacts.
Caverion’s business is exposed to various risks associated with the corona crisis such as suspension or cancellation of existing contracts by customers, lack of demand for new services, absenteeism of employees and subcontractor staff, closures of work sites and other work premises by customers or authorities, defaults in customer payments and lack or poor availability of financing.
Apart from its immediate effects, the corona pandemic has also led to a global economic downturn, which in many areas can negatively impact general demand for Caverion’s services as well. However, a material part of Caverion’s offering is of such a nature that customers will need these services also during a downturn and recession.
It is still unclear how deep and long the downturn will be and how fast the economic recovery will proceed. The business volume and the amount of new order intake are important determinants of Caverion’s performance in the second half of 2020 and beyond into 2021, but both remain difficult to predict at present.
More generally, Caverion is exposed to different types of strategic, operational, political, market, customer, financial and other risks. Caverion estimates that the trade, health and political risks are increasing globally.
Caverion’s typical operational risks relate to its Services and Projects business. These include risks related to tendering (e.g. calculation and pricing), contractual terms and conditions, partnering, subcontracting, procurement and price of materials, availability of qualified personnel and project management. To manage these risks, risk assessment and review processes for both the sales and execution phase are in place, and appropriate risk reservations are being made. The Group Projects Business Unit is dedicated to the overall improvement of project risk management, to steering the project portfolio and to improving project management capabilities. Despite all the actions taken, there is a risk that some project risks will materialise, which could have a negative impact on Caverion’s financial performance and position. Project risk assessment is part of the standard project management processes in the company, and it is possible that risks may be identified in projects which are currently running and in new projects.
Despite clearly defined project controls, it is possible that some risks may materialise, which could lead to project write-downs, provisions, disputes or litigation. Caverion has made a large number of project write-downs during the last few years. Systematic performance management continues to be part of the core project management processes in all divisions. In 2019 and 2020, Caverion reports only one old major risk project from Germany in adjusted EBITA, the completion of which has been delayed to approximately the end of 2020. It is possible that further risks may emerge in this old project or in other projects.
According to Group policy, write-offs or provisions are booked on receivables when it is probable that no payment can be expected. Caverion Group follows a policy in valuing trade receivables and the bookings include estimates and critical judgements. The estimates are based on experience with write-offs realised in previous years, empirical knowledge of debt collection, customer-specific collaterals and analyses as well as the general economic situation of the review period. Caverion carries out risk assessments related to POC and trade receivables in its project portfolio on an ongoing basis. There are certain individual larger receivables where the company continues its actions to negotiate and collect the receivables. There is remaining risk in the identified receivables, and it cannot be ruled out that there is also risk associated with other receivables. The corona crisis has increased the general risk level related to the financial standing of customers and the collection of receivables.
Given the nature of Caverion’s Projects business, Group companies are involved in disputes and legal proceedings in several projects. These disputes and legal proceedings typically concern claims made against Caverion for allegedly defective or delayed delivery. In some cases, the collection of receivables by Caverion may result in disputes and legal proceedings. There is a risk that the client presents counter claims in these proceedings. The outcome of claims, disputes and legal proceedings is difficult to predict. Write-downs and provisions are booked following the applicable accounting rules.
In June 2018, Caverion reached a settlement for its part with the German Federal Cartel Office (FCO) in a cartel case that had been investigated by the authority since 2014. The investigation concerned several companies providing technical building services in Germany. Caverion Deutschland GmbH (and its predecessors) was found to have participated in anti-competitive practices between 2005 and 2013. According to the FCO’s final decision issued on 3 July 2018, a fine of EUR 40.8 million was imposed on Caverion Deutschland GmbH. At the end of March 2020, the FCO issued its final decision in the cartel case against the other building technology companies involved in the matter. There is a risk that civil claims may be presented against the involved companies, including Caverion Deutschland GmbH. It is not possible to evaluate the magnitude of this risk for Caverion at this time. Caverion will disclose any relevant information on potential civil law claims as required under the applicable regulations.
As part of Caverion’s co-operation with the authorities in the cartel matter, the company identified activities between 2009 and 2011 that were likely to fulfil the criteria of corruption or other criminal conduct in some of its client projects executed at that time. Caverion brought its findings to the attention of the authorities and supported them in investigating the case. At the end of June 2020, the public prosecutor’s office in Munich informed Caverion that no further investigative measures are intended and that no formal fine proceedings against Caverion will be initiated in relation to those cases. There is a risk that civil claims may be presented against Caverion Deutschland GmbH. It is not possible to evaluate the magnitude of this risk for Caverion at this time. Caverion will disclose any relevant information on potential civil law claims as required under the applicable regulations.
Caverion has made significant efforts to promote compliance in order to avoid any infringements in the future. As part of the programme all employees must complete an e-learning module and further training is given across the organisation. All employees are required to comply with Caverion’s Code of Conduct, which has a policy of zero tolerance on anti-competitive practices, corruption, bribery or any unlawful action.
Goodwill recognised on Caverion’s balance sheet is not amortised, but it is tested annually for impairment. The amount by which the carrying amount of goodwill exceeds the recoverable amount is recognised as an impairment loss through profit and loss. If negative changes take place in Caverion’s result and growth development, this may lead to an impairment of goodwill, which may have an unfavourable effect on Caverion’s result of operations and shareholders’ equity.
Caverion’s external loans are subject to a financial covenant based on the ratio of the Group’s net debt to EBITDA. Breaching this covenant would give the lending parties the right to declare the loans to be immediately due and payable. It is possible that Caverion may need amendments to its financial covenant in the future. The level of the financial covenant ratio is continuously monitored and evaluated against actual and forecasted EBITDA and net debt figures. The outbreak of the coronavirus pandemic has increased the general risk level related to the availability of financing as well as foreign exchange related risks.
Caverion’s business typically involves granting guarantees to customers or other stakeholders, especially for large projects, e.g. for advance payments received, for performance of contractual obligations, and for defects during the warranty period. Such guarantees are typically granted by financial intermediaries on behalf of Caverion. There is no assurance that the company would have continuous access to sufficient guarantees from financial intermediaries at competitive terms or at all, and the absence of such guarantees could have an adverse effect on Caverion’s business and financial situation. To manage this risk, Caverion’s target is to maintain several guarantee facilities in the different countries where it operates. The outbreak of the coronavirus pandemic has increased the general risk level related to the availability of guarantee facilities.
There are risks related to the functionality, security and availability of the company’s IT systems. Caverion has made significant investments in IT and system development. There is a risk that the expected functionalities and payback will not fully materialise.
Financial risks have been described in more detail in the 2019 Financial Statements under Note 5.5 “Financial risk management”.
RESOLUTIONS PASSED AT THE ANNUAL GENERAL MEETING
Caverion Corporation’s Annual General Meeting, which was held under special arrangements in Vantaa on 25 May 2020, adopted the Financial Statements and the consolidated Financial Statements for the year 2019 and discharged the members of the Board of Directors and the President and CEO from liability. In addition, the Annual General Meeting resolved to authorise the Board of Directors to decide on the distribution of dividends, resolved to support the presented Remuneration Policy for Governing Bodies, resolved on the composition of the Board of Directors and the members’ remuneration and on the election of the auditor and its remuneration, and authorised the Board of Directors to decide on the repurchase and/or acceptance as pledge of the Company’s own shares as well as on share issues.
The Annual General Meeting elected a Chairman, a Vice Chairman and five (5) ordinary members to the Board of Directors. Mats Paulsson was elected as the Chairman of the Board of Directors, Markus Ehnrooth as the Vice Chairman and Jussi Aho, Joachim Hallengren, Thomas Hinnerckov, Kristina Jahn and Jasmin Soravia as members of the Board of Directors for a term of office expiring at the end of the Annual General Meeting 2021. The stock exchange release on the resolutions passed at the Annual General Meeting is available on Caverion’s website at http://www.caverion.com/about-us/media/releases.
The Board of Directors held its organisational meeting on 25 May 2020. At the meeting the Board decided on the composition of the Human Resources Committee and the Audit Committee. A description of the committees’ tasks and charters are available on Caverion’s website at www.caverion.com/investors – Corporate Governance.
**DIVIDENDS AND DIVIDEND POLICY**
Caverion Corporation’s Annual General Meeting, held on 25 May 2020, approved the Board of Directors’ proposal that no dividends be distributed by a resolution of the Annual General Meeting based on the balance sheet to be adopted for 2019, but that the Board of Directors be authorised to decide at its discretion on the distribution of dividends of a maximum of EUR 0.08 per share from the Company’s retained earnings. Based on the authorisation, the Board of Directors is entitled to decide on the amount of dividends within the above maximum, on the dividend record date and payment date, as well as on other measures required in the matter. The Company will publish any dividend distribution decision by the Board of Directors separately and will at the same time announce the applicable record and payment dates.
Caverion’s dividend policy is to distribute as dividends at least 50 percent of the result for the year after taxes, taking profitability and leverage level into account. Even though there are no plans to amend this dividend policy, there is no guarantee that a dividend or capital redemption will actually be paid in the future, nor is there any guarantee of the amount of the dividend or return of capital to be paid for any given year.
**SHARES AND SHAREHOLDERS**
Caverion Corporation is a public limited company organised under the laws of the Republic of Finland, incorporated on 30 June 2013. The company has a single series of shares, and each share entitles its holder to one vote at the General Meeting of the company and to an equal dividend. The company’s shares have no nominal value.
**Share capital and number of shares**
The number of shares was 138,920,092 and the share capital was EUR 1,000,000 on 1 January 2020. Caverion held 2,849,360 treasury shares on 1 January 2020. At the end of the reporting period, the total number of shares in Caverion was 138,920,092. Caverion held 2,807,991 treasury shares on 30 June 2020, representing 2.02 percent of the total number of shares and voting rights. The number of shares outstanding was 136,112,101 at the end of June 2020.
The Board of Directors of Caverion Corporation decided in June 2020 on a directed share issue without payment for Caverion’s Restricted Share Plan 2016–2018 reward payment. The decision on the directed share issue without payment is based on the authorisation granted to the Board of Directors by the Annual General Meeting of Shareholders held on 25 May 2020. In the directed share issue without payment, 6,673 Caverion Corporation shares held by the company were on 26 June 2020 conveyed to a key employee according to the terms and conditions of the plan. Prior to the directed share issue, Caverion held a total of 2,814,664 treasury shares, of which 2,807,991 treasury shares remained with the company after the conveyance.
Caverion’s Board of Directors approved in December 2019 the commencement of a new plan period 2020–2022 in the share-based long-term incentive scheme originally established in December 2018. The scheme is based on a performance share plan (PSP) structure targeted at Caverion’s management and selected key employees. The Board approved at the same time the commencement of a new plan period 2020–2022 in the Restricted Share Plan (RSP) structure, which is a complementary share-based incentive structure for specific situations. Any potential share rewards based on PSP 2020–2022 and RSP 2020–2022 will be delivered in the spring of 2023. More information on the plans has been published in a stock exchange release on 18 December 2019. On 30 April 2020, the Board of Directors of Caverion decided, upon management’s suggestion, to postpone the commencement of the PSP 2020–2022 incentive plan, at the latest until the beginning of 2021.
The Restricted Share Plan (RSP) is based on a rolling plan structure originally announced on 18 December 2015 and the commencement of each new plan within the structure is conditional on a separate Board approval. Share allocations within the Restricted Share Plan will be made for individually selected key employees in specific situations. Each RSP plan consists of a three-year vesting period after which the allocated share rewards will be delivered to the participants provided that their employment with Caverion continues at the time of the delivery of the share reward. The potential share rewards based on
the Restricted Share Plans for 2016–2018, 2017–2019, 2018–2020, 2019–2021 as well as 2020–2022 total a maximum of approximately 547,000 shares (gross before the deduction of applicable payroll tax). Of these plans, a maximum of 85,000 shares will be delivered in the spring of 2021, a maximum of 135,000 shares in the spring of 2022 and a maximum of 230,000 shares in the spring of 2023.
Caverion’s Board of Directors approved the previous long-term share-based incentive schemes for the Group’s senior management and key employees in December 2015 and in December 2018. The targets set for the Performance Share Plans 2016–2018 and 2017–2019 were not met, and no rewards were paid. The targets set for the Performance Share Plan 2018–2020 were partially met, and the respective share rewards will be delivered in February 2021. If all targets are met, the share rewards based on PSP 2019–2021 will comprise a maximum of approximately 1.3 million Caverion shares (gross before the deduction of applicable taxes).
More information on the incentive plans has been published in stock exchange releases on 18 December 2015, 21 December 2016, 21 December 2017, 18 December 2018 and 18 December 2019.
Caverion has not made any decision regarding the issue of option rights or other special rights entitling to shares.
Authorisations of the Board of Directors
Authorising Caverion’s Board of Directors to decide on the repurchase and/or on the acceptance as pledge of own shares of the company
The Annual General Meeting of Caverion Corporation, held on 25 May 2020, authorised the Board of Directors to decide on the repurchase and/or on the acceptance as pledge of the Company’s own shares in accordance with the proposal by the Board of Directors. The number of own shares to be repurchased and/or accepted as pledge shall not exceed 13,500,000 shares, which corresponds to approximately 9.7% of all the shares in the Company. The Company may use only unrestricted equity to repurchase own shares on the basis of the authorisation. Purchase of own shares may be made at a price formed in public trading on the date of the repurchase or otherwise at a price formed on the market. The Board of Directors resolves the manner in which own shares are repurchased and/or accepted as pledge. Repurchase of own shares may be made using, inter alia, derivatives. Repurchase and/or acceptance as pledge of own shares may be made otherwise than in proportion to the share ownership of the shareholders (directed repurchase or acceptance as pledge).
The authorisation cancels the authorisation given by the General Meeting on 25 March 2019 to decide on the repurchase and/or on the acceptance as pledge of the Company’s own shares. The authorisation is valid until 23 September 2021. The Board of Directors has not used the authorisation to decide on the repurchase of the Company’s own shares during the period.
As part of the implementation of the Matching Share Plan, the company has accepted as a pledge the shares acquired by those key employees who took a loan from the company. As a result, Caverion had 711,034 Caverion Corporation shares as a pledge at the end of the reporting period on 30 June 2020.
Authorising Caverion’s Board of Directors to decide on share issues
The Annual General Meeting of Caverion Corporation, held on 25 May 2020, authorised the Board of Directors to decide on share issues in accordance with the proposal by the Board of Directors. The number of shares to be issued may not exceed 13,500,000 shares, which corresponds to approximately 9.7% of all the shares in the Company. The Board of Directors decides on all the conditions of the issuance of shares. The authorisation concerns both the issuance of new shares as well as the transfer of treasury shares. The issuance of shares may be carried out in deviation from the shareholders’ pre-emptive rights (directed issue). The authorisation can be used e.g. in order to develop the Company’s capital structure, to broaden the Company’s ownership base, to be used as payment in corporate acquisitions or when the Company acquires assets relating to its business and as part of the Company’s incentive programs.
The authorisation is valid until the closing of the next annual general meeting, however no later than 24 May 2021.
The Board of Directors of Caverion Corporation decided in February 2020 on a directed share issue without payment for Caverion’s Restricted Share Plan 2017–2019 reward payment. The decision on the directed share issue without payment is based on the authorisation granted to the Board of Directors by the Annual General Meeting of Shareholders held on 25 March 2019. In the directed share issue without payment, 39,127 Caverion Corporation shares held by
the company were on 27 February 2020 conveyed to 16 key employees according to the terms and conditions of the plan. Prior to the directed share issue, Caverion held a total of 2,849,360 treasury shares, of which 2,810,233 treasury shares remained with the company after the conveyance.
The Board of Directors of Caverion Corporation decided in June 2020 on a directed share issue without payment for Caverion’s Restricted Share Plan 2016–2018 reward payment. The decision on the directed share issue without payment is based on the authorisation granted to the Board of Directors by the Annual General Meeting of Shareholders held on 25 May 2020. In the directed share issue without payment, 6,673 Caverion Corporation shares held by the company were on 26 June 2020 conveyed to a key employee according to the terms and conditions of the plan. Prior to the directed share issue, Caverion held a total of 2,814,664 treasury shares, of which 2,807,991 treasury shares remained with the company after the conveyance.
**Trading in shares**
The opening price of Caverion’s share was EUR 7.24 at the beginning of 2020, and the closing price on 30 June, the last trading day of the review period, was EUR 6.01. The share price thus decreased by 17 percent during January–June. The highest price of the share during the review period was EUR 8.25, the lowest EUR 3.79 and the average EUR 5.66. Share turnover on Nasdaq Helsinki in January–June amounted to 50.0 million shares, with a value of EUR 283.3 million (source: Nasdaq Helsinki). Caverion’s shares are also traded on other marketplaces, such as Aquis, Cboe, POSIT Auction and Turquoise.
**Number of shareholders and flagging notifications**
At the end of June 2020, the number of registered shareholders in Caverion was 27,075 (3/2020: 26,629). At the end of June 2020, a total of 29.8 percent of the shares were owned by nominee-registered and non-Finnish investors (3/2020: 28.6%).
Caverion Corporation received on 17 February 2020 a notification pursuant to Chapter 9, Section 5 of the Finnish Securities Markets Act from Solero Luxco S.à r.l. (“Solero Luxco”, a company based in Luxembourg ultimately owned by Triton Fund IV). According to the notification the holding in Caverion Corporation by Solero Luxco decreased below the 5 percent threshold on 17 February 2020. The holding of Solero Luxco in Caverion decreased to 0 shares, corresponding to 0.00 percent of Caverion’s shares and voting rights.
Updated lists of Caverion’s largest shareholders and ownership structure by sector as per 30 June 2020, are available on Caverion’s website at www.caverion.com/investors.
## Condensed consolidated income statement
| EUR million | 4-6/2020 | 4-6/2019 | 1-6/2020 | 1-6/2019 | 1-12/2019 |
|------------------------------|----------|----------|----------|----------|-----------|
| **Revenue** | 518.5 | 512.3 | 1,060.1 | 1,026.7 | 2,123.2 |
| **Other operating income** | 8.5 | 0.8 | 9.0 | 1.3 | 14.0 |
| **Materials and supplies** | -131.0 | -127.8 | -259.3 | -252.1 | -524.2 |
| **External services** | -96.5 | -99.6 | -191.0 | -194.2 | -411.3 |
| **Employee benefit expenses**| -224.9 | -218.6 | -465.5 | -439.2 | -868.9 |
| **Other operating expenses** | -52.5 | -58.0 | -107.1 | -110.8 | -229.8 |
| **Share of results of associated companies** | | | 0.0 | 0.0 | 0.0 |
| **Depreciation, amortisation and impairment** | -17.2 | -16.9 | -34.8 | -34.2 | -67.6 |
| **Operating result** | 5.0 | -7.7 | 11.5 | -2.4 | 35.3 |
| % of revenue | 1.0 | -1.5 | 1.1 | -0.2 | 1.7 |
| **Financial income and expense, net** | -2.2 | -2.2 | -6.5 | -3.4 | -8.4 |
| **Result before taxes** | 2.7 | -9.9 | 5.0 | -5.9 | 27.0 |
| % of revenue | 0.5 | -1.9 | 0.5 | -0.6 | 1.3 |
| **Income taxes** | -0.7 | 2.9 | -1.3 | 1.8 | -4.4 |
| **Result for the period** | 2.1 | -7.1 | 3.7 | -4.1 | 22.6 |
| % of revenue | 0.4 | -1.4 | 0.3 | -0.4 | 1.1 |
| **Attributable to** | | | | | |
| Equity holders of the parent company | 2.0 | -7.1 | 3.6 | -4.1 | 22.6 |
| Non-controlling interests | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| **Earnings per share attributable to the equity holders of the parent company** | | | | | |
| Earnings per share, basic, EUR | 0.01 | -0.06 | 0.02 | -0.04 | 0.14 |
| Diluted earnings per share, EUR | 0.01 | -0.06 | 0.02 | -0.04 | 0.14 |
## Consolidated statement of comprehensive income
| EUR million | 4-6/2020 | 4-6/2019 | 1-6/2020 | 1-6/2019 | 1-12/2019 |
|-------------|----------|----------|----------|----------|-----------|
| **Result for the review period** | 2.1 | -7.1 | 3.7 | -4.0 | 22.6 |
| **Other comprehensive income** | | | | | |
| Items that will not be reclassified to profit/loss | | | | | |
| - Change in fair value of defined benefit pension plans | -1.1 | 0.1 | 2.3 | -0.6 | -5.7 |
| -- Deferred tax | | | | | 1.6 |
| - Change in fair value of other investments | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| -- Deferred tax | | | | | |
| Items that may be reclassified subsequently to profit/loss | | | | | |
| - Cash flow hedges | | | 0.0 | 0.1 | 0.1 |
| - Translation differences | -5.4 | -0.2 | -10.9 | 1.6 | 0.7 |
| **Other comprehensive income, total** | -6.5 | -0.2 | -8.6 | 1.1 | -3.3 |
| **Total comprehensive result** | -4.5 | -7.2 | -4.9 | -2.9 | 19.3 |
| **Attributable to** | | | | | |
| Equity holders of the parent company | -4.5 | -7.2 | -4.9 | -3.0 | 19.3 |
| Non-controlling interests | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
## Condensed consolidated statement of financial position
| EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|-------------|--------------|--------------|--------------|
| **Assets** | | | |
| **Non-current assets** | | | |
| Property, plant and equipment | 22.9 | 14.6 | 19.3 |
| Right-of-use assets | 126.4 | 134.2 | 135.0 |
| Goodwill | 366.9 | 331.9 | 366.5 |
| Other intangible assets | 54.1 | 31.7 | 56.0 |
| Shares in associated companies and joint ventures | 1.7 | 0.1 | 1.7 |
| Other investments | 1.3 | 0.9 | 1.3 |
| Other receivables | 6.8 | 6.8 | 7.3 |
| Deferred tax assets | 21.2 | 14.9 | 19.3 |
| **Current assets** | | | |
| Inventories | 19.1 | 17.2 | 18.8 |
| Trade receivables | 264.0 | 269.9 | 329.6 |
| POC receivables | 224.4 | 232.3 | 197.6 |
| Other receivables | 24.3 | 25.6 | 33.7 |
| Income tax receivables | 2.0 | 2.6 | 1.7 |
| Cash and cash equivalents | 130.2 | 103.6 | 93.6 |
| **Total assets** | 1,265.3 | 1,186.6 | 1,281.4 |
| **Equity and liabilities** | | | |
| Equity attributable to equity holders of the parent company | | | |
| Share capital | 1.0 | 1.0 | 1.0 |
| Hybrid capital | 35.0 | 66.1 | 66.1 |
| Other equity | 155.2 | 138.1 | 161.5 |
| Non-controlling interest | 0.4 | 0.4 | 0.4 |
| **Equity** | 191.5 | 205.5 | 228.9 |
| **Non-current liabilities** | | | |
| Deferred tax liabilities | 31.9 | 30.6 | 32.6 |
| Pension liabilities | 48.3 | 43.8 | 49.1 |
| Provisions | 10.4 | 8.3 | 9.4 |
| Lease liabilities | 87.6 | 93.9 | 93.3 |
| Other interest-bearing debts | 137.1 | 125.0 | 125.0 |
| Other liabilities | 5.0 | 0.0 | 2.1 |
| **Current liabilities** | | | |
| Advances received | 236.5 | 196.5 | 216.2 |
| Trade payables | 169.3 | 173.7 | 173.7 |
| Other payables | 258.4 | 233.2 | 258.7 |
| Income tax liabilities | 15.2 | 9.1 | 15.6 |
| Provisions | 29.8 | 23.3 | 33.1 |
| Lease liabilities | 41.3 | 40.4 | 43.6 |
| Other interest-bearing debts | 3.0 | 3.3 | |
| **Total equity and liabilities** | 1,265.3 | 1,186.6 | 1,281.4 |
### Working capital
| EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|-------------|--------------|--------------|--------------|
| Inventories | 19.1 | 17.2 | 18.8 |
| Trade and POC receivables | 488.4 | 502.2 | 527.2 |
| Other current receivables | 24.0 | 25.2 | 32.6 |
| Trade and POC payables | -189.3 | -194.5 | -194.1 |
| Other current liabilities | -267.0 | -234.5 | -269.2 |
| Advances received | -236.5 | -196.5 | -216.2 |
| **Working capital** | **-161.3** | **-80.8** | **-100.9** |
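As an illustrative check (a sketch, not part of the report itself), the working-capital figure follows directly from the lines above it; the June 30, 2020 column can be recomputed in a few lines of Python:

```python
# Working capital on June 30, 2020, recomputed from the table above.
# All figures in EUR million; payables, other liabilities and advances
# are entered with the negative sign shown in the table.
inventories = 19.1
trade_and_poc_receivables = 488.4
other_current_receivables = 24.0
trade_and_poc_payables = -189.3
other_current_liabilities = -267.0
advances_received = -236.5

working_capital = (inventories + trade_and_poc_receivables
                   + other_current_receivables + trade_and_poc_payables
                   + other_current_liabilities + advances_received)

print(round(working_capital, 1))  # -161.3, matching the reported total
```

The same summation reproduces the other two columns (-80.8 and -100.9) from their respective balance-sheet figures.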
### Consolidated statement of changes in equity
| EUR million | Share capital | Retained earnings | Cumulative translation differences | Fair value reserve | Treasury shares | Unrestricted equity reserve | Hybrid capital | Total | Non-controlling interest | Total equity |
|-------------|---------------|-------------------|------------------------------------|--------------------|----------------|-----------------------------|----------------|-------|--------------------------|--------------|
| Equity on January 1, 2020 | 1.0 | 103.4 | -4.8 | -0.1 | -3.1 | 66.0 | 66.1 | 228.5 | 0.4 | 228.9 |
| Comprehensive income | | | | | | | | | | |
| Result for the period | | 3.6 | | | | | | 3.6 | 0.0 | 3.6 |
| Other comprehensive income: | | | | | | | | | | |
| Change in fair value of defined benefit pension plans | | 2.3 | | | | | | 2.3 | | 2.3 |
| -Deferred tax | | | | | | | | | | |
| Change in fair value of other investments | | | | 0.0 | | | | 0.0 | | 0.0 |
| -Deferred tax | | | | | | | | | | |
| Translation differences | | | -10.9 | | | | | -10.9 | | -10.9 |
| Comprehensive income, total | | 5.9 | -10.9 | 0.0 | | | | -4.9 | 0.0 | -4.9 |
| Dividend distribution | | | | | | | | 0.0 | | 0.0 |
| Share-based payments | | 1.1 | | | | | | 1.1 | | 1.1 |
| Transfer of own shares | | -0.3 | | | 0.3 | | | | | |
| Hybrid capital repayment | | | | | | | -66.1 | -66.1 | | -66.1 |
| Hybrid capital issue | | | | | | | 35.0 | 35.0 | | 35.0 |
| Hybrid capital interests and costs after taxes | | -2.4 | | | | | | -2.4 | | -2.4 |
| Other change | | 0.0 | | | | | | 0.0 | | 0.0 |
| Equity on June 30, 2020 | 1.0 | 107.7 | -15.7 | -0.1 | -2.8 | 66.0 | 35.0 | 191.2 | 0.4 | 191.5 |
| EUR million | Share capital | Retained earnings | Cumulative translation differences | Fair value reserve | Treasury shares | Unrestricted equity reserve | Hybrid capital | Total | Non-controlling interest | Total equity |
|-------------|---------------|-------------------|-----------------------------------|-------------------|-----------------|---------------------------|---------------|-------|-------------------------|--------------|
| Equity on December 31, 2018 | 1.0 | 95.5 | -5.5 | -0.2 | -3.2 | 66.0 | 100.0 | 253.6 | 0.4 | 254.0 |
| Change in accounting principle, IFRS 16 | | 0.1 | | | | | | 0.1 | | 0.1 |
| Equity on January 1, 2019 | 1.0 | 95.7 | -5.5 | -0.2 | -3.2 | 66.0 | 100.0 | 253.8 | 0.4 | 254.1 |
| Comprehensive income | | | | | | | | | | |
| Result for the period | | -4.1 | | | | | | -4.1 | 0.0 | -4.0 |
| Other comprehensive income: | | | | | | | | | | |
| Change in fair value of defined benefit pension plans | | -0.6 | | | | | | -0.6 | | -0.6 |
| -Deferred tax | | | | | | | | | | |
| Cash flow hedges | | | | 0.1 | | | | 0.1 | | 0.1 |
| Change in fair value of other investments | | | | 0.0 | | | | 0.0 | | 0.0 |
| -Deferred tax | | | | | | | | | | |
| Translation differences | | | 1.6 | | | | | 1.6 | | 1.6 |
| Comprehensive income, total | | -4.7 | 1.6 | 0.1 | | | | -3.0 | 0.0 | -2.9 |
| Dividend distribution | | -6.8 | | | | | | -6.8 | | -6.8 |
| Share-based payments | | -0.9 | | | | | | -0.9 | | -0.9 |
| Transfer of own shares | | -0.1 | | | 0.1 | | | | | |
| Hybrid capital repayment | | | | | | | -33.9 | -33.9 | | -33.9 |
| Hybrid capital interests and costs after taxes | | -3.8 | | | | | | -3.8 | | -3.8 |
| Disposal of subsidiaries | | -0.2 | | | | | | -0.2 | | -0.2 |
| Equity on June 30, 2019 | 1.0 | 79.1 | -3.8 | -0.1 | -3.1 | 66.0 | 66.1 | 205.1 | 0.4 | 205.5 |
| EUR million | Share capital | Retained earnings | Cumulative translation differences | Fair value reserve | Treasury shares | Unrestricted equity reserve | Hybrid capital | Total | Non-controlling interest | Total equity |
|-------------|---------------|-------------------|------------------------------------|-------------------|-----------------|---------------------------|---------------|-------|-------------------------|--------------|
| Equity on December 31, 2018 | 1.0 | 95.5 | -5.5 | -0.2 | -3.2 | 66.0 | 100.0 | 253.6 | 0.4 | 254.0 |
| Change in accounting principle, IFRS 16 | | 0.1 | | | | | | 0.1 | | 0.1 |
| Equity on January 1, 2019 | 1.0 | 95.7 | -5.5 | -0.2 | -3.2 | 66.0 | 100.0 | 253.8 | 0.4 | 254.1 |
| Comprehensive income | | | | | | | | | | |
| Result for the period | | 22.6 | | | | | | 22.6 | 0.0 | 22.6 |
| Other comprehensive income: | | | | | | | | | | |
| Change in fair value of defined benefit pension plans | | -5.7 | | | | | | -5.7 | | -5.7 |
| -Deferred tax | | 1.6 | | | | | | 1.6 | | 1.6 |
| Cash flow hedges | | | | 0.1 | | | | 0.1 | | 0.1 |
| Change in fair value of other investments | | | | 0.0 | | | | 0.0 | | 0.0 |
| -Deferred tax | | | | | | | | | | |
| Translation differences | | | 0.7 | | | | | 0.7 | | 0.7 |
| Comprehensive income, total | | 18.5 | 0.7 | 0.0 | | | | 19.3 | 0.0 | 19.3 |
| Dividend distribution | | -6.8 | | | | | | -6.8 | | -6.8 |
| Share-based payments | | 0.1 | | | | | | 0.1 | | 0.1 |
| Transfer of own shares | | -0.1 | | | 0.1 | | | | | |
| Hybrid capital repayment | | | | | | | -33.9 | -33.9 | | -33.9 |
| Hybrid capital interests and costs after taxes | | -3.8 | | | | | | -3.8 | | -3.8 |
| Disposal of subsidiaries | | -0.2 | | | | | | -0.2 | | -0.2 |
| Equity on December 31, 2019 | 1.0 | 103.4 | -4.8 | -0.1 | -3.1 | 66.0 | 66.1 | 228.5 | 0.4 | 228.9 |
## Condensed consolidated statement of cash flows
| EUR million | 4-6/2020 | 4-6/2019 | 1-6/2020 | 1-6/2019 | 1-12/2019 |
|-------------|----------|----------|----------|----------|-----------|
| **Cash flows from operating activities** | | | | | |
| Result for the period | 2.1 | -7.1 | 3.7 | -4.1 | 22.6 |
| Adjustments to result | 14.9 | 19.0 | 35.2 | 39.1 | 95.9 |
| Change in working capital | 31.3 | 17.1 | 65.5 | 24.2 | 25.2 |
| **Operating cash flow before financial and tax items** | 48.2 | 29.1 | 104.3 | 59.2 | 143.7 |
| Financial items, net | -1.9 | -1.3 | -5.2 | -4.7 | -9.6 |
| Taxes paid | 0.6 | -0.6 | -3.4 | -1.3 | -4.7 |
| **Net cash from operating activities** | 46.9 | 27.3 | 95.7 | 53.2 | 129.4 |
| **Cash flows from investing activities** | | | | | |
| Acquisitions of subsidiaries, net of cash | 0.0 | -0.6 | -2.1 | -1.2 | -48.6 |
| Disposal of subsidiaries, net of cash | 0.0 | 0.0 | 0.0 | 1.6 | 1.5 |
| Investments in joint ventures | | | | | -1.6 |
| Capital expenditure and other investments, net | -3.8 | -2.8 | -7.9 | -6.2 | -16.2 |
| **Net cash used in investing activities** | -3.8 | -3.4 | -9.9 | -5.8 | -65.0 |
| **Cash flow after investing activities** | 43.1 | 23.9 | 85.7 | 47.4 | 64.5 |
| **Cash flow from financing activities** | | | | | |
| Change in loan receivables, net | | | 0.2 | -0.3 | -0.3 |
| Proceeds from borrowings | 15.0 | | 15.0 | 125.0 | 125.0 |
| Repayments of borrowings | | | | -53.3 | -56.7 |
| Repayments of lease liabilities | -11.7 | -11.3 | -22.9 | -22.8 | -45.5 |
| Hybrid capital issue | 35.0 | | 35.0 | | |
| Hybrid capital repayment | -66.1 | | -66.1 | -33.9 | -33.9 |
| Hybrid capital costs and interests | -3.0 | -3.1 | -3.0 | -4.7 | -4.7 |
| Dividends paid and other distribution of assets | 0.0 | -6.8 | 0.0 | -6.8 | -6.8 |
| **Net cash used in financing activities** | -30.7 | -21.1 | -41.8 | 3.1 | -23.0 |
| **Change in cash and cash equivalents** | 12.4 | 2.8 | 44.0 | 50.4 | 41.5 |
| Cash and cash equivalents at the beginning of the period | 113.2 | 101.3 | 93.6 | 51.2 | 51.2 |
| Change in the foreign exchange rates | 4.6 | -0.4 | -7.3 | 2.1 | 0.9 |
| **Cash and cash equivalents at the end of the period** | 130.2 | 103.6 | 130.2 | 103.6 | 93.6 |
### Free cash flow
| EUR million | 4-6/2020 | 4-6/2019 | 1-6/2020 | 1-6/2019 | 1-12/2019 |
|-------------|----------|----------|----------|----------|-----------|
| Operating cash flow before financial and tax items | 48.2 | 29.1 | 104.3 | 59.2 | 143.7 |
| Taxes paid | 0.6 | -0.6 | -3.4 | -1.3 | -4.7 |
| Net cash used in investing activities | -3.8 | -3.4 | -9.9 | -5.8 | -65.0 |
| **Free cash flow** | 45.0 | 25.2 | 91.0 | 52.1 | 74.0 |
### 1 Accounting principles
Caverion Corporation’s Half-year Financial Report for 1 January – 30 June 2020 has been prepared in accordance with IAS 34, ‘Interim Financial Reporting’. Caverion has applied the same accounting principles in the preparation of the Half-year Financial Report as in its Financial Statements for 2019.
The information presented in this Half-year Financial Report has not been audited.
In the Half-year Financial Report the figures are presented in million euros subject to rounding, which may cause some rounding inaccuracies in column and total sums.
### 2 Key figures
| EUR million | 6/2020 | 6/2019 | 12/2019 |
|-------------|--------|--------|---------|
| Revenue, EUR million | 1,060.1 | 1,026.7 | 2,123.2 |
| EBITDA, EUR million | 46.2 | 31.7 | 103.0 |
| EBITDA margin, % | 4.4 | 3.1 | 4.8 |
| Adjusted EBITDA, EUR million | 44.7 | 37.1 | 120.4 |
| Adjusted EBITDA margin, % | 4.2 | 3.6 | 5.7 |
| EBITA | 18.4 | 5.2 | 49.8 |
| EBITA margin, % | 1.7 | 0.5 | 2.3 |
| Adjusted EBITA | 17.0 | 10.6 | 67.2 |
| Adjusted EBITA margin, % | 1.6 | 1.0 | 3.2 |
| Operating profit, EUR million | 11.5 | -2.4 | 35.3 |
| Operating profit margin, % | 1.1 | -0.2 | 1.7 |
| Result before taxes, EUR million | 5.0 | -5.9 | 27.0 |
| % of revenue | 0.5 | -0.6 | 1.3 |
| Result for the review period, EUR million | 3.7 | -4.1 | 22.6 |
| % of revenue | 0.3 | -0.4 | 1.1 |
| Earnings per share, basic, EUR | 0.02 | -0.04 | 0.14 |
| Earnings per share, diluted, EUR | 0.02 | -0.04 | 0.14 |
| Equity per share, EUR | 1.4 | 1.5 | 1.7 |
| Equity ratio, % | 18.6 | 20.8 | 21.5 |
| Interest-bearing net debt, EUR million | 138.8 | 158.9 | 168.4 |
| Gearing ratio, % | 72.5 | 77.3 | 73.6 |
| Total assets, EUR million | 1,265.3 | 1,186.6 | 1,281.4 |
| Operating cash flow before financial and tax items, EUR million | 104.3 | 59.2 | 143.7 |
| Cash conversion (LTM), % | 160.7 | 169.9 | 139.5 |
| Working capital, EUR million | -161.3 | -80.8 | -100.9 |
| Gross capital expenditures, EUR million | 12.2 | 8.3 | 73.4 |
| % of revenue | 1.2 | 0.8 | 3.5 |
| Order backlog, EUR million | 1,739.7 | 1,704.7 | 1,670.5 |
| Personnel, average for the period | 16,021 | 14,663 | 14,763 |
| Number of outstanding shares at the end of the period (thousands) | 136,112 | 135,973 | 136,071 |
| Average number of shares (thousands) | 136,097 | 135,750 | 135,866 |
### 3 Financial development by quarter
| EUR million | 4-6/2020 | 1-3/2020 | 10-12/2019 | 7-9/2019 | 4-6/2019 | 1-3/2019 |
|-------------|----------|----------|------------|----------|----------|----------|
| Revenue | 518.5 | 541.6 | 589.0 | 507.5 | 512.3 | 514.4 |
| EBITDA | 22.1 | 24.1 | 35.9 | 35.3 | 9.1 | 22.6 |
| EBITDA margin, % | 4.3 | 4.4 | 6.1 | 7.0 | 1.8 | 4.4 |
| Adjusted EBITDA | 18.5 | 26.3 | 47.0 | 36.2 | 10.0 | 27.1 |
| Adjusted EBITDA margin, % | 3.6 | 4.8 | 8.0 | 7.1 | 2.0 | 5.3 |
| EBITA | 8.4 | 10.0 | 22.5 | 22.1 | -4.1 | 9.3 |
| EBITA margin, % | 1.6 | 1.8 | 3.8 | 4.4 | -0.8 | 1.8 |
| Adjusted EBITA | 4.8 | 12.1 | 33.7 | 23.0 | -3.2 | 13.8 |
| Adjusted EBITA margin, % | 0.9 | 2.2 | 5.7 | 4.5 | -0.6 | 2.7 |
| Operating profit | 5.0 | 6.5 | 18.9 | 18.9 | -7.7 | 5.3 |
| Operating profit margin, % | 1.0 | 1.2 | 3.2 | 3.7 | -1.5 | 1.0 |
| EUR million | 4-6/2020 | 1-3/2020 | 10-12/2019 | 7-9/2019 | 4-6/2019 | 1-3/2019 |
|-------------|----------|----------|------------|----------|----------|----------|
| Earnings per share, basic, EUR | 0.01 | 0.01 | 0.11 | 0.08 | -0.06 | 0.01 |
| Earnings per share, diluted, EUR | 0.01 | 0.01 | 0.11 | 0.08 | -0.06 | 0.01 |
| Equity per share, EUR | 1.4 | 1.7 | 1.7 | 1.6 | 1.5 | 1.6 |
| Equity ratio, % | 18.6 | 22.0 | 21.5 | 22.6 | 20.8 | 21.3 |
| Interest-bearing net debt, EUR million | 138.8 | 142.8 | 168.4 | 172.9 | 158.9 | 162.7 |
| Gearing ratio, % | 72.5 | 62.3 | 73.6 | 79.5 | 77.3 | 75.1 |
| Total assets, EUR million | 1,265.3 | 1,261.1 | 1,281.4 | 1,170.5 | 1,188.6 | 1,205.5 |
| Operating cash flow before financial and tax items, EUR million | 48.2 | 56.1 | 80.6 | 3.8 | 29.1 | 30.1 |
| Cash conversion (LTM), % | 160.7 | 162.4 | 139.5 | 177.6 | 169.9 | n.a. |
| Working capital, EUR million | -161.3 | -127.3 | -100.9 | -46.8 | -80.8 | -60.4 |
| Gross capital expenditures, EUR million | 4.0 | 8.3 | 59.5 | 5.7 | 3.8 | 4.4 |
| % of revenue | 0.8 | 1.5 | 10.1 | 1.1 | 0.7 | 0.9 |
| Order backlog, EUR million | 1,739.7 | 1,768.3 | 1,670.5 | 1,676.9 | 1,704.7 | 1,579.7 |
| Personnel at the end of the period | 15,902 | 16,010 | 16,273 | 14,606 | 14,681 | 14,489 |
| Number of outstanding shares at end of period (thousands) | 136,112 | 136,110 | 136,071 | 135,973 | 135,973 | 135,679 |
| Average number of shares (thousands) | 136,109 | 136,085 | 135,988 | 135,973 | 135,834 | 135,664 |
### 4 Calculation of key figures
Key figures on financial performance
EBITDA = Operating profit (EBIT) + depreciation, amortisation and impairment
Adjusted EBITDA = EBITDA before items affecting comparability (IAC) *
EBITA = Operating profit (EBIT) + amortisation and impairment
Adjusted EBITA = EBITA before items affecting comparability (IAC) *
Equity ratio (%) = \[ \frac{(\text{Equity} + \text{non-controlling interest}) \times 100}{\text{Total assets} - \text{advances received}} \]
Gearing ratio (%) = \[ \frac{(\text{Interest-bearing liabilities} - \text{cash and cash equivalents}) \times 100}{\text{Shareholders' equity} + \text{non-controlling interest}} \]
Interest-bearing net debt = Interest-bearing liabilities - cash and cash equivalents
Working capital = Inventories + trade and POC receivables + other current receivables - trade and POC payables - other current payables - advances received - current provisions
Free cash flow = Operating cash flow before financial and tax items – taxes paid – net cash used in investing activities
Cash conversion (%) = \[ \frac{\text{Operating cash flow before financial and tax items (LTM)} \times 100}{\text{EBITDA (LTM)}} \]
Organic growth = Defined as the change in revenue in local currencies excluding the impacts of (i) currencies; and (ii) acquisitions and divestments. The currency impact shows the impact of changes in exchange rates of subsidiaries with a currency other than the euro (Group’s reporting currency). The acquisitions and divestments impact shows how acquisitions and divestments completed during the current or previous year affect the revenue reported.
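The definitions above can be cross-checked against the figures reported in this release. The following Python sketch (illustrative only; all inputs are the June 30, 2020 balances and the 1-6/2020, 1-6/2019 and 1-12/2019 flow figures from the statements above) recomputes interest-bearing net debt, the gearing and equity ratios, free cash flow and LTM cash conversion:

```python
# All figures in EUR million, as reported for H1/2020.
cash = 130.2
lease_liabilities = 87.6 + 41.3   # non-current + current lease liabilities
other_ib_debt = 137.1 + 3.0       # other interest-bearing debts, non-current + current
equity = 191.5                    # total equity incl. non-controlling interest
total_assets = 1265.3
advances_received = 236.5

# Interest-bearing net debt = interest-bearing liabilities - cash and cash equivalents
net_debt = lease_liabilities + other_ib_debt - cash

# Gearing (%) = net debt x 100 / (equity + non-controlling interest)
gearing = net_debt * 100 / equity

# Equity ratio (%) = equity x 100 / (total assets - advances received)
equity_ratio = equity * 100 / (total_assets - advances_received)

# Free cash flow = operating cash flow before financial and tax items
#                  - taxes paid - net cash used in investing activities
free_cash_flow = 104.3 - 3.4 - 9.9

# Cash conversion (%) on a last-twelve-months (LTM) basis:
# LTM figure = 1-6/2020 + 1-12/2019 - 1-6/2019
ocf_ltm = 104.3 + 143.7 - 59.2
ebitda_ltm = 46.2 + 103.0 - 31.7
cash_conversion = ocf_ltm * 100 / ebitda_ltm

print(round(net_debt, 1))         # 138.8
print(round(gearing, 1))          # 72.5
print(round(equity_ratio, 1))     # 18.6
print(round(free_cash_flow, 1))   # 91.0
print(round(cash_conversion, 1))  # 160.7
```

Each result agrees, to the report's one-decimal rounding, with the corresponding line in the key figures table.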
Share related key figures
Earnings / share, basic = \[ \frac{\text{Result for the period (attributable for equity holders)} - \text{hybrid capital expenses and accrued unrecognised interests after tax}}{\text{Weighted average number of shares outstanding during the period}} \]
Earnings /share, diluted = \[ \frac{\text{Result for the period (attributable for equity holders)} - \text{hybrid capital expenses and accrued unrecognised interests after tax}}{\text{Weighted average dilution adjusted number of shares outstanding during the period}} \]
Equity / share = \[ \frac{\text{Shareholders' equity}}{\text{Number of outstanding shares at the end of the period}} \]
*Items affecting comparability (IAC) in 2020 are material items or transactions, which are relevant for understanding the financial performance of Caverion when comparing the profit of the current period with that of the previous periods. These items can include (1) capital gains and/or losses and transaction costs related to divestments and acquisitions; (2) write-downs, expenses and/or income from separately identified major risk projects; (3) restructuring expenses and (4) other items that according to Caverion management’s assessment are not related to normal business operations. In 2019 and 2020, major risk projects only include one risk project in Germany reported under category (2). In 2019, mainly legal and other costs related to the German anti-trust fine and a compensation from the previous owners of a German subsidiary related to the cartel case were reported under category (4). In 2020, costs related to a subsidiary in Russia sold during the second quarter have been reported under category (4).
ESMA (European Securities and Markets Authority) has issued guidelines regarding Alternative Performance Measures (“APM”). Caverion presents APMs to improve the analysis of business and financial performance and to enhance the comparability between reporting periods. APMs presented in this report should not be considered as a substitute for measures of performance in accordance with the IFRS.
5 Related party transactions
Caverion announced on 7 February 2018 in a stock exchange release the establishment of a new share-based incentive plan directed to the key employees of the Group ("Matching Share Plan 2018–2022"). The company provided the participants a possibility to finance the acquisition of the company's shares through an interest-bearing loan from the company, which some of the participants utilised. At the end of June 2020, the total outstanding amount of these loans was approximately EUR 4.4 million. The loans will be repaid in full on 31 December 2023, at the latest. Company shares have been pledged as security for the loans.
Purchases from members of the Board
Caverion has a 10-month fixed term contract with a member of the Board concerning consulting services. The value of the contract is not material.
6 Financial risk management
Caverion’s main financial risks are the liquidity risk, credit risk as well as market risks including the foreign exchange and interest rate risk. The objectives and principles of financial risk management are defined in the Treasury Policy approved by the Board of Directors. Financial risk management is carried out by Group Treasury in co-operation with the Group’s subsidiaries.
The outbreak of the coronavirus pandemic and the recent market turmoil have increased the general risk level related to the availability of financing, the availability of guarantee facilities as well as foreign exchange related risks.
The objective of capital management in Caverion Group is to maintain an optimal capital structure, maximise the return on the respective capital employed and to minimise the cost of capital within the limits and principles stated in the Treasury Policy. The capital structure is modified primarily by directing investments and working capital employed.
No significant changes have been made to the Group’s financial risk management principles in the reporting period. Further information is presented in Group’s 2019 financial statement in note 5.5 Financial risk management.
Caverion’s liquidity position is strong. The outbreak of the coronavirus pandemic has led to even sharpened focus on optimising cash flow and working capital management. Ensuring adequate financing has also been prioritised.
Caverion’s external loans are subject to a financial covenant based on the ratio of the Group’s net debt to EBITDA. The covenant ratio is continuously monitored and evaluated against actual and forecasted EBITDA and net debt figures.
The table below presents the maturity structure of interest-bearing liabilities. Interest-bearing borrowings are based on contractual maturities of liabilities excluding interest payments. Lease liabilities are presented based on discounted present value of remaining lease payments. Cash flows of foreign-denominated liabilities are translated into the euro at the reporting date.
| EUR million | 2020 | 2021 | 2022 | 2023 | 2024 | 2025-> | Total |
|-------------|------|------|------|------|------|--------|-------|
| Interest-bearing borrowings | 1.5 | 3.0 | 3.0 | 128.0| 3.0 | 2.0 | 140.5 |
| Lease liabilities | 21.6 | 36.5 | 25.5 | 17.2 | 10.5 | 17.7 | 128.9 |
| **Total** | **23.1** | **39.5** | **28.5** | **145.2** | **13.5** | **19.7** | **269.4** |
7 Financial liabilities and interest-bearing net debt
| EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|-------------|--------------|--------------|--------------|
| | Carrying amount | Carrying amount | Carrying amount |
| Non-current liabilities | | | |
| Senior bonds | 74.7 | 74.6 | 74.6 |
| Loans from financial institutions | 49.9 | 49.8 | 49.9 |
| Other financial loans | 0.5 | 0.5 | 0.5 |
| Pension loans | 12.0 | | |
| Lease liabilities | 87.6 | 93.9 | 93.3 |
| **Total non-current interest-bearing liabilities** | **224.7** | **218.9** | **218.3** |
| Current liabilities | | | |
| Loans from financial institutions | | | |
| Pension loans | 3.0 | 3.3 | |
| Other financial loans | | 0.0 | |
| Lease liabilities | 41.3 | 40.4 | 43.6 |
| **Total current interest-bearing liabilities** | **44.3** | **43.7** | **43.6** |
| **Total interest-bearing liabilities** | **269.0** | **262.6** | **261.9** |
| **Total interest-bearing liabilities (excluding IFRS 16 lease liabilities)** | **140.1** | **128.4** | **125.0** |
| Cash and cash equivalents | 130.2 | 103.6 | 93.6 |
| **Interest-bearing net debt** | **138.8** | **158.9** | **168.4** |
| **Interest-bearing net debt excluding IFRS 16 lease liabilities** | **9.9** | **24.7** | **31.5** |
The carrying amounts of all financial assets and liabilities are reasonably close to their fair values.
Derivative instruments
| Nominal amounts, EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|------------------------------|--------------|--------------|--------------|
| Foreign exchange forwards | 65.6 | 68.5 | 66.7 |

| Fair values, EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|--------------------------|--------------|--------------|--------------|
| Foreign exchange forwards | | | |
| positive fair value | 0.1 | 0.1 | 0.9 |
| negative fair value | -0.4 | -0.3 | -0.2 |
The fair values of the derivative instruments have been defined as follows: The fair values of foreign exchange forward agreements have been defined by using market prices on the closing day. The fair values of interest rate swaps are based on discounted cash flows.
8 Commitments and contingent liabilities
| EUR million | Jun 30, 2020 | Jun 30, 2019 | Dec 31, 2019 |
|-------------|--------------|--------------|--------------|
| Guarantees given on behalf of associated companies | | 0.0 | 0.0 |
| Parent company’s guarantees on behalf of its subsidiaries | 485.3 | 415.3 | 456.0 |
| Other commitments | | | |
| - Other contingent liabilities | 0.2 | 0.2 | 0.2 |
| Accrued unrecognised interest on hybrid bond | 0.3 | 0.1 | 1.7 |
Entities participating in a demerger are jointly and severally responsible for the liabilities of the demerging entity generated before the registration of the demerger. As a consequence, Caverion Corporation, which was incorporated through the partial demerger of YIT Corporation, carries a secondary liability, up to the allocated net asset value, for those liabilities that were generated before the registration of the demerger and remain with YIT Corporation after it. In particular, Caverion Corporation has a secondary liability relating to the Group guarantees which remain with YIT Corporation after the demerger. These Group guarantees amounted to EUR 19.3 million at the end of June 2020.
The short-term risks and uncertainties relating to the operations have been described above under “Short-term risks and uncertainties”. It is possible that especially the infringements in compliance may cause considerable damage to Caverion in terms of fines, civil claims as well as legal expenses. However, the magnitude of the potential damage cannot be assessed at the moment.
Caverion’s Financial Information for 2020
Interim report for January–September 2020 on 5 November 2020
Financial Statements Release for 2020 on 11 February 2021 |
NOTE: This MechWar 5 set of rules is a MechWar 2 variant that adapts the rules from SPI’s October War, which subsequently evolved into MechWar ’78.\(^1\) The MechWar ’78 rules were used as the basis for this variant. These rules are designed to let players play a simpler MechWar 2 game while incorporating whatever additional advanced MechWar 2 rules they wish from SPI’s original MechWar 2 game.
This MechWar 5 variant attempts to provide a simpler, more playable game while still allowing certain MechWar 2 advanced features to be represented for effect, but at a more reasonable resolution. The rules have been adapted to allow the play of scenarios from Avalon Hill’s *The Arab Israeli Wars* and the scenarios from *MechWar 2 Suez to Golan*.
\(^1\) The MechWar ’78 rules are based on a suggested retrofit posted by Ian Raine in the ConsimWorld forums, as well as player discussions. The original rules formatting was provided by Jamie Shanks. Fred Schwarz’s notes were also helpful during development. Rules sourced or developed from October War are in blue text. Untested rules are in orange text.
1.0 INTRODUCTION
2.0 GENERAL COURSE OF PLAY
3.0 GAME EQUIPMENT
3.1 The Game Map
3.2 The Playing Pieces
3.3 Game Charts and Tables
3.4 Definition of Terms
3.5 Game Equipment Inventory
4.0 SEQUENCE OF PLAY
4.1 Sequence Outline
4.2 Determining the “First” Player
5.0 SPOTTING
5.1 Blocking Hexsides and Hexes
5.2 Observation Range
5.3 Effect of Units in Spotting
5.4 Spotting for Indirect Fire
6.0 COMBAT
6.1 Restrictions on Fire Combat
6.2 Effect on Other Units
6.3 Multiple Fire Attacks
6.4 Direct Fire
6.5 Opportunity Fire
6.6 Terrain Effects on Combat
6.7 Partial Strength Units vs. Soft Targets
6.8 Special Weapons Classes
6.88 Ammunition Depletion
6.9 Additional Combat Actions
6.10 R Class Fire vs. Armored Vehicles
7.0 INDIRECT FIRE
7.1 Availabilities and Capabilities
7.2 Types of H Fire
7.3 H Fire Against Hard and Protected Targets
7.4 Duration of Suppression
7.5 Effects of Suppression on Hard and Protected Targets
7.6 Effects of H Fire on Soft Targets
7.7 Close Air Support (CAS)
7.8 Counter Battery Fire
8.0 MOVEMENT
8.1 Roads and Trails (Column Formation)
8.16 Infantry Double Time
8.2 Restrictions on Movement
8.3 Zones of Control
8.4 Effect of Movement on Soft Target Defense
8.5 Stacking
9.0 TRANSPORTING
9.1 Procedures and Restrictions
9.2 Infantry on Tanks
9.3 Status of Units Engaging in Mounting or Dismounting
9.4 Combat While Mounted
9.5 Specific Unit Capabilities
9.6 Transport Vehicles
10.0 TERRAIN
10.1 Effect on Movement
10.2 Covering Terrain
10.3 Slopes and Crest Hexsides
10.4 Elevation
10.5 Hull Down (Defilade)
11.0 IMPROVED POSITIONS (ENTRENCHMENTS)
11.1 Who Can Use Improved Positions
11.2 Benefits of Improved Positions
11.3 Deployment of Improved Positions
12.0 OVERRUNS
12.1 Effect of Terrain on Overruns
12.2 Effect of Prior Fire on Overrun
12.3 Effect of Opportunity Fire on Overrun
12.4 Effects of Panic
13.0 PANIC (TROOP QUALITY)
13.1 Panic Move/Fire
14.0 OFF-BOARD ARTILLERY
14.1 Use of Off-Board Artillery
14.2 Off-Board Artillery Restrictions
14.3 U.S. Off-Board Artillery
15.0 CLOSE AIR SUPPORT
15.1 Close Air Support Scatter
15.2 Application of Close Air Support
15.3 Advanced (MechWar 2) Close Air Support
15.4 Bombing Strike
15.5 Strafing Strike
15.6 Air-Surface-Missiles (ASMS)
15.7 Aircraft Target Acquisition
15.8 Air Defense Systems
16.0 MINES
16.1 Mine Attacks
16.2 Mineplows
17.0 HELICOPTERS
17.1 Movement
17.2 Combat
17.3 Flak Units and Anti-Helicopter Fire
18.0 SMOKE
18.1 Line of Sight Effects
18.2 Persistence of Smoke
19.0 OTHER OPTIONAL / EXPERIMENTAL RULES
19.1 Command
19.2 Counterbattery Measures
19.3 Short Halt Fire
19.4 FASCAM Rounds
19.5 Wrecks
19.6 Special Units
[19.61] Engineers
[19.612] Bridges
[19.613] Bridge Demolition
[19.614] Ferries
[19.615] Abatis
[19.616] Blocks
[19.66] Motorcycles
19.7 Night
19.8 Morale
19.9 Electronic Warfare
19.10 Chemical Warfare
19.11 Tactical Nuclear Warfare
20.0 HOW TO SET-UP AND PLAY THE GAME
20.1 Scenarios
20.2 Setting Up
20.3 Available Forces
20.4 Deployment
20.5 Victory Conditions
20.6 Reinforcements
21.0 SCENARIOS—NATO and Warsaw Pact
22.0 SCENARIOS—Red Star/White Star
H Fire Procedures
Infantry vs Hard Targets
[1.0] INTRODUCTION
MechWar 4 is SPI’s tactical simulation of “modern” ground combat (from the October War of 1973 to possible future encounters).
These Suez/Golan rules are designed to be used to play Avalon Hill’s *The Arab Israeli Wars* (AIW) with MechWar 4 rules while still allowing the play of scenarios from SPI’s *MechWar 2 Red Star/White Star Suez To Golan*.
Each hex represents 200 meters from side to side. Each Game Turn represents one to six minutes of elapsed time.
[2.0] GENERAL COURSE OF PLAY
This simulation is a two-Player game. It is played in a series of turns called Game Turns. During a Game Turn, both Players’ playing pieces (called units) move and engage in combat in an attempt to achieve certain objectives. This activity takes place according to a Sequence of Play. The game is played in Scenarios. Each Scenario lists the opposing forces and conditions under which they engage. Each Player attempts to win the Scenario according to the Victory Conditions set out for him in each Scenario.
[3.0] GAME EQUIPMENT
[3.1] THE GAME MAP
The game consists of several geomorphic desert maps, which can be assembled in various combinations to allow play of The Arab-Israeli War scenarios. A few additional maps have been included to approximately represent setups for the scenarios in SPI’s *MechWar 2 Red Star / White Star, Suez To Golan*. A hexagonal grid is superimposed on the map to regulate movement, position and firing range of the units. The hexes are numbered for identification.
[3.2] THE PLAYING PIECES
The pieces are revised MechWar 4 units that include all the equivalent units necessary to play the AIW scenarios. Also included are units necessary to play the *MechWar 2 Red Star/White Star Suez To Golan* scenarios. Many pieces are informational counters; the rest are organizational counters representing vehicle and infantry platoons, air units, headquarters, and other units.
[3.21] Sample Units
For *MechWar 2* scenarios, players may need the original *MechWar 2* rules for the setups and references to certain advanced and optional rules.
[3.4] DEFINITION OF TERMS
**Movement** is a basic game activity involving the physical displacement of a unit hex by hex across the mapboard.
**Combat** is a basic game process whereby one or more units pin, disrupt or destroy units belonging to the Opposing Player.
**Weapons Class:** The units portrayed in this game are small, platoon-sized organizations with 20 to 50 men and 3 to 10 vehicles. Each is organized around a particular main weapon system unique to its type of unit. Thus a mortar unit depends for its combat effectiveness on the high explosive shells lobbed by its mortars and any small arms carried by its men have no significant impact on its performance. Each unit then is classified according to the characteristics of its predominant weapons system.
**R Class:** Units organized around rifle and machine gun fire: typically an infantry platoon.
**A Class:** Units armed with the typical MMG class weapons found in October War APC units.
**M Class:** Units whose guns fire a mix of armor piercing and high explosive shell and are effective against both armored and unarmored targets; typically a tank or assault gun platoon.
**H Class:** Units whose main weapons fire high explosive shell; typically on-map field gun or mortar units. This also includes Off-Board Artillery (14.0) and Close Air Support (15.0). In specific cases, infantry units may make an H Class attack against vehicles by virtue of inherent AT weaponry such as LAWs, RPGs or recoilless rifles (see 6.45).
**Note:** Units designated in the rules as one-step units are eliminated with a loss of one step, though their firepower still uses the Full Strength CRT values unless otherwise indicated.
[3.22] Summary of Unit Types (UFT Charts)
Also, see [19.6], Special Units.
[3.23] Dice
Most charts use a 1d6 die. Other charts, such as the Panic Table, use a 1d10 die. Players should check each chart to see which dice are used.
If for any reason players need to consult any original *MechWar 2* tables, they generally use 2d6 dice, though not always.
[3.3] GAME CHARTS AND TABLES
The game makes use of various charts and tables as a part of its play system and also to organize data into an easily retrievable form. The use of these graphic aids is explained in the appropriate rules sections. Players should examine the charts and tables as they appear or are referred to in the rules. Please note the separate chart sheets.
**MechWar 2 Warning:** There are some table related data in the *MechWar 2* rules that only appear in the rules themselves rather than in a separate chart. For those *MechWar 2* sections converted to these variant rules, the relevant numbers have been extracted and placed in *MechWar 4* tables.
**G Class:** Units which depend for their defensive fire power on Anti-tank guided missiles.
**R* Class/Mx-n:** Flak or Anti-aircraft units, which are effective against ground targets as well as helicopters (see [17.31]).
**Note:** *MechWar 4* anti-aircraft units are indicated as such by their missile system notation at the bottom of the counter.
**Dual-Class:** Units which have not only a complement of conventional weapons, but G Class weapons as well. They have two counters each; see 6.87.
**AA Class:** Air Defense Systems used to attack Close Air Support aircraft [15.8].
**Target Type (Armor Class):** Just as a unit is classified according to its weapons, so is it classified according to the vulnerability of its elements to fire, i.e., what kind of target does it present? Units are defined as either Soft (unarmored - Green), Protected (lightly armored - Orange) or Hard (armored - Black) targets. *(Protected generally corresponds to the MW2 Light (L) and Protected (P) classes, while Hard corresponds to the Hard (H) Armor Class. The MW2 Profile factor is generally only used in Defilade situations using MW2 rules.)* Soft targets rely for their protection on their ability to conceal themselves from fire and, for some, an ability to disperse their fighting elements. As an individual, the infantryman is an extremely vulnerable soft target, but the infantry platoon, while still a soft target, has a relatively high Defensive Strength, because it can take a lot of individual casualties before it ceases to be effective. Protected targets also depend on their ability to conceal themselves behind various terrain (they usually have low hull silhouettes) and on their light armor when engaged from a distance by small arms and high trajectory shells. Hard targets rely for their protection on armor. A tank, of course, is the archetypical hard target, and a tank platoon is virtually invulnerable except against weapons specifically designed to defeat armor. Helicopters are a special target type, and special procedures are used when attacking them.
**Hard target type units** are identified by black box defense factors or brackets around a numerical Defense Strength of 5 or more. Protected **target type units** are indicated with orange box defense factors or parentheses around their numerical Defense Strength, or with a numerical Defense Strength of 4 or less, and include weakly armored AFVs (see revised CRTs), APCs, Armored Cars and IFV vehicles, and mortar, AAA and missile carriers based on APCs and IFVs. **Soft target type units** have no brackets around their numerical Defense Strength or have green defense factor boxes. **Note that jeeps carrying ATGMs or RR antitank guns are Soft targets.**
**Attack Strength** is a numerical rating of the firepower that a unit possesses. It is expressed in Attack Strength Points. The ability of a unit to attack depends on both its Attack Strength and its Weapons Class.
**Defense Strength** is a numerical rating of the ability of a unit to preserve itself when attacked. It is expressed in Defensive Strength Points.
**Range** is the maximum range or distance which a unit may fire at a target. It is expressed in hexagons and is measured by counting the shortest path in hexagons from the Firing unit (exclusive) to the Target hex (inclusive).
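Range counting is done on the board by eye; for anyone building a digital aid (e.g. a Vassal-style helper), the same count can be computed from axial hex coordinates. This is standard hex-grid math, not something specified in these rules, and the coordinate scheme is an assumption.

```python
# Hex distance on an axial-coordinate grid: equivalent to counting the
# shortest hex path, exclusive of the firing hex and inclusive of the
# target hex. (Axial coordinates are an assumption for digital play.)
def hex_range(q1: int, r1: int, q2: int, r2: int) -> int:
    dq, dr = q1 - q2, r1 - r2
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

print(hex_range(0, 0, 2, -1))  # 2: two hexes away
```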
**Panic Level** (a MW78 term) affects the ability of units to move or fire in response to enemy actions. For the *MechWar 2* scenarios, Panic Level corresponds to the **Troop Quality** specified in the *MechWar 2* scenario **Available Forces** chart, e.g., Seasoned troops are Panic Level 0, First Line troops are Panic Level 10%, etc. (See Panic, Section 13.0.)
*(The “Panic” term is retained in these rules though the actual functioning of this mechanism still corresponds to MW2 Troop Quality.)*
### [4.0] SEQUENCE OF PLAY
**GENERAL RULE:**
Each game or Scenario is composed of Game Turns during which both Players’ units move and engage in combat according to a rigid Sequence of Play.
#### [4.1] SEQUENCE OUTLINE
**A/B. DIRECT FIRE / MOVEMENT PHASE:**
- Randomly determine the first Player for this Phase (1) (See 4.2)
- **First Player**—One or more eligible units* may be activated to conduct either a Direct Fire attack on any one enemy unit or move.
- Place FIRE markers on firing units if unit fired (see NOTE below)
- Enemy units with no MOVE/FIRE markers may immediately use Overwatch Fire to attack friendly units that fire. Place FIRE markers on firing units.
- Place a MOVE marker on the moving unit if unit moved (see NOTE below)
- Enemy units with no MOVE/FIRE markers may use Opportunity Fire to attack the moving unit at any point during its move. Second Player places FIRE markers on firing units.
- **Second Player**—Repeat the process performed by the first player.
- Players continue to alternate fire or movement until both Players have fired or moved all their units or have passed. A player may pass and still get an alternating turn, but if the other player passes, the Phase is over.
**NOTE:** A unit may either Fire once or Move once in a single Game Turn (EXC: Split Fire, Overrun). (Players may rotate units that have either moved or fired.)
* If using the simplified company formation doctrine -- e.g. up to three units, each within three hexes of another unit in the group -- all three units can be activated during a player’s activation to either move or fire. Stacked units may move together, but still fire sequentially.
Individual units in a formation can still be interrupted by either Opportunity Fire or Overwatch Fire from enemy units. Enemy Overwatch Fire occurs during the friendly unit’s activation, effectively as a free action. The enemy unit is marked as Fired, though the enemy player still gets its activation normally.
If not using a formation doctrine, only a single unit is activated in a player’s Fire/Move phase.
**Note:** Any reference to either the Move Phase or the Direct Fire Phase refers to the combined A/B Direct Fire/Movement Phase.
(1) The First Player indicated in a scenario automatically becomes the First Player in the first phase of the first Game Turn.
C. PANIC AND SUPPRESSION REMOVAL PHASE:
- Both Players remove all Suppression Markers that have been placed on units as a result of fire.
- Players then attempt to remove all Panic Markers incurred during the current Game Turn or during a previous Game Turn (see Panic, Section 13.0).
*(Leave Moved/Fired markers on until the end of the Indirect Fire Phase, because indirect fire units that have moved may not fire.)*
- If units have not moved or fired, they may reduce their Fatigue Level by one.
D. INDIRECT FIRE PHASE: Smoke Markers that impacted during the previous Game Turn are removed. Players conduct Indirect Fire (Section [7.0]). Unlike Direct Fire and Movement, Players do not alternate Indirect Fire attacks.
If using markers to indicate units that have moved or fired, these markers can be removed, or counters rotated, whichever is applicable. Remember that on-board indirect fire units that have moved may not fire.
E. REGROUPING: At the conclusion of the Indirect Fire Phase, any partial strength units that are not marked with either a FIRE or MOVE marker and have begun the game turn stacked in the same hex may combine into a full strength unit. The combined units’ strength cannot be greater than a full strength unit. Only platoons of the same company may Regroup (see MechWar 2 Unit Designations, Exclusive Rules, pg. 32).
**NOTE:** If using MechWar 2 counters, the Unit Designations are on the counters. If using MechWar 4 counters, other methods of identification are required. The Vassal module allows units to be marked with Unit Designations.
SIMPLIFIED MECHWAR 4 FORMATION RULES: For unmarked MechWar 4 units, a simplified formation doctrine allows three units of the same type that are within three hexes of each other (two intervening hexes for NATO and Israeli units, or one intervening hex for Soviet and Arab units – i.e., doctrine range) to be designated as a single company. This grouping can be used for both activations and Regrouping. If desired, players may use an Hq marker on a single unit to group component company units within range into a single company. Note that the Vassal module optionally allows formation information to be included with each counter.
If using units marked with their formation designations – such as in the Vassal module – any higher level attached units may move or fire if within their own doctrine range.
The effect of breaking formation is that the company Panic Level is increased by one; it is restored once the units return to formation doctrine. If units are out of formation due to unit loss, the company has one Game Turn to return to formation doctrine before incurring any penalty.
F. END OF GAME TURN: At the conclusion of the Indirect Fire Phase, the Game Turn is completed. Note the passage of the Game Turn on the Game Turn Record Track and begin a new Game Turn.
**NOTE:** A unit may move or fire in a single Game Turn, but it may not do both (EXCEPTIONS: see Overrun, Section [12.0]; Mounted Combat, [9.4]; optional Pull Back rule, [6.92]; and the experimental Short Halt rule for vehicles with effective gun stabilization, [19.3]).
[4.2] DETERMINING THE “FIRST” PLAYER
There are certain advantages in being the first Player to move or fire. To grant each Player a chance to be first, the Fire and Movement Phases of every Game Turn require that a first Player be determined at random for each of these two Phases. Each player rolls 1d6 and adds one tenth of his percentage Panic Level (i.e., 1–5, and possibly 0) to the roll; the lower adjusted roll wins. On a tied roll, the side with the lower Panic Level wins.
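The roll-off in 4.2 can be expressed as a small helper. The roll inputs are passed in explicitly so the comparison is easy to check; the rules do not say what happens on a full tie (equal adjusted rolls and equal Panic Levels), so the `"reroll"` result there is an assumption.

```python
# Sketch of 4.2: each side's 1d6 roll is adjusted by one tenth of its
# percentage Panic Level; the lower adjusted roll is the First Player,
# and a tie goes to the side with the lower Panic Level. The "reroll"
# result on a full tie is an assumption -- the rules are silent there.
def decide_first(roll_a: int, panic_a: int, roll_b: int, panic_b: int) -> str:
    adj_a = roll_a + panic_a // 10
    adj_b = roll_b + panic_b // 10
    if adj_a != adj_b:
        return "A" if adj_a < adj_b else "B"
    if panic_a != panic_b:
        return "A" if panic_a < panic_b else "B"
    return "reroll"

print(decide_first(3, 0, 3, 10))  # 'A': adjusted 3 beats adjusted 4
```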
[5.0] SPOTTING [OBSERVATION]
GENERAL RULE:
Spotting refers to the ability of one unit to see another unit. Whether or not a given unit spots another unit depends on whether or not the Line of Sight between the two units is blocked (obstructed), the type and location of unit being observed, and its movement/firing status. The Line of Sight is determined by drawing an imaginary straight line between the center of the sighting unit’s hex and the center of the sighted unit’s hex. The terms Line of Sight (LOS) and Line of Fire are synonymous. When a unit has a clear Line of Sight to a potential target unit within Observation Range (see Observation Range Table), it can Spot the target unit and thus use the Line of Sight as a Line of Fire.
PROCEDURE:
JUDGING THE LINE OF SIGHT
Lay a straight edge from the center of the sighting (firing) hex to the center of the target hex. The line so described is the Line of Sight (Line of Fire). If the LOS passes through a blocking hex or hexside which is not common to either the Firing unit’s hex or the Target hex, then the LOS is blocked. Otherwise it is unblocked.
CASES:
[5.1] BLOCKING HEXSIDES AND HEXES
[5.11] Hexes: Any hexside which is covered in whole or in part by blocking terrain is considered a blocking hexside. Any hex which is wholly or partially filled by blocking terrain is a blocking hex.
Any light woods, heavy woods, smoke, or town hex is blocking terrain and causes a blind hex directly behind the feature. However, elevation differences can cause the blocking hex to completely block LOS.
If the spotting unit is at the same height as the blocking terrain, the feature completely blocks LOS. If the spotting unit is at a higher level than the blocking terrain, then generally a blind hex is created directly behind the feature. However, if the blocking feature is more than half the distance from the higher unit to the lower unit, then the LOS is totally blocked.
In the example, the town creates a blind hex directly behind itself from the perspective of the red unit on higher terrain. Since the blocking town hex is four hexes from the higher unit, it is more than half the distance to the green unit six hexes away, and thus it completely blocks LOS for any unit behind the blocking town hex.
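The elevation rule above reduces to a comparison of hex distances. This sketch uses illustrative parameter names (`observer_higher`, `blocker_dist`, `target_dist` are not rules terminology) and encodes the worked town example: a blocker four hexes out on a six-hex LOS is beyond the halfway point, so the LOS is totally blocked.

```python
# Sketch of the 5.11 elevation rule. Distances are in hexes along the LOS;
# parameter names are illustrative, not rules terminology.
def can_see_past_blocker(observer_higher: bool, blocker_dist: int, target_dist: int) -> bool:
    if not observer_higher:
        return False   # same height: the feature completely blocks LOS
    if blocker_dist * 2 > target_dist:
        return False   # blocker beyond the halfway point: LOS totally blocked
    return target_dist > blocker_dist + 1   # only the hex directly behind is blind

print(can_see_past_blocker(True, 4, 6))  # False: the worked town example
```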
Hexsides: Heavy hex side symbols represent dunes (yellow bars) and crests (dark brown bars). Units directly behind these hexsides can see and be seen by an LOS that crosses these hexsides. Units in these situations are assumed to be able to be in Hull Down situations (see [6.6]). Hull Down applies even if the units are adjacent.
ELEVATION
Units at a higher level are presumed to be able to see across continued downward sloped hexes. If any hexes along the LOS to units at equal or lower terrain are higher than the spotting unit, the LOS is blocked.
In the top example, the higher red unit’s LOS to the green unit follows a continuous downward sloping series of hexes and thus constitutes an unblocked LOS. In the lower example, the LOS crosses hexes of higher terrain and thus the LOS is blocked.
[5.2] OBSERVATION RANGE
All units are initially deployed face-down and considered hidden (i.e., unobserved) so that only the owning Player knows what they are. They are turned face-up only when observed by enemy units.
[5.21] If a face-down unit fires at an Enemy unit from any range, it is automatically observed and is turned face-up. If a face-down vehicle moves through or into the LOS of an Enemy unit at any range, it is automatically observed and turned face-up. If a face-down unit neither fires nor moves, it remains face-down and unobserved until an Enemy unit is within Observation Range (see Observation Table).
[5.22] Once observed (face-up), a unit remains observed for the length of time that it remains in an Enemy unit’s LOS, regardless of the distance between the observing unit and the target unit and the effects of terrain on Observation Range. If an observed unit can move out of the LOS of all Enemy units, it may be turned face-down (unobserved) until such time as it is again observed by an Enemy unit.
[5.23] For purposes of determining Observation Range, whenever a Friendly unit moves through two or more different types of terrain that are within the LOS of an Enemy unit -- including the hexes in which the unit begins and ends its movement -- determine the Enemy unit’s ability to observe that unit based on the terrain type passed through that best affords a chance for observation. The Observation Range for various situations is shown on the Day Clear Weather Observation Range Table.
[5.24] Note that a Friendly unit attempting to move out of an Enemy LOS is still subject to Opportunity Fire from observing units in each hex en route.
[5.25] CAMOUFLAGE: In some scenarios, units may begin the game in camouflage. Whenever a unit would normally be observed, a die roll is made on the Camouflage Observation Chart. If the number is less than or equal to the value indicated for a unit in the appropriate terrain, then the unit is not observed. The unit remains unobserved for the remainder of the phase. No more die rolls need be made. If a camouflaged unit moves, fires or becomes involved in a close assault, it is no longer camouflaged and cannot again become camouflaged.
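The camouflage check in [5.25] is a single threshold roll, sketched below in Python. The per-terrain values are placeholders, since the actual Camouflage Observation Chart is not reproduced here.

```python
# Sketch of the 5.25 camouflage check; the per-terrain values are
# placeholders standing in for the Camouflage Observation Chart.
CAMO_VALUES = {"woods": 4, "town": 3, "open": 1}   # hypothetical chart entries

def stays_hidden(terrain, roll):
    """True if the would-be observation is negated by camouflage this phase."""
    return roll <= CAMO_VALUES.get(terrain, 0)

print(stays_hidden("woods", 3))   # remains unobserved this phase
print(stays_hidden("open", 5))    # observed; turn the unit face-up
```

Once a unit fails this check (or moves, fires, or enters close assault), no further rolls are made for it.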
[5.3] EFFECT OF UNITS IN SPOTTING
Playing pieces never obstruct the Line of Sight. A unit may see through any number of intervening units (both Friendly and Enemy) to some distant target hex, and a unit may fire through both Friendly and Enemy units to some distant target hex without affecting the units fired through.
[5.4] SPOTTING FOR INDIRECT FIRE
Any Friendly unit, except trucks and their passengers, may spot a target hex for another Friendly unit which is capable of Indirect Fire. Unless assisted by a spotting unit, no unit may fire Indirect Fire (except when specifically allowed by the Scenario Instructions). (Units may request indirect fire from Off-Board artillery on any hex on the map that is within range of the firing unit. Off-map artillery has unlimited range on the map unless restricted by scenario.)
[6.0] COMBAT
COMMENTARY:
A Player uses his units to fire at (attack) Enemy targets. A Player may attack during the Direct Fire Phase, or he may attack during the Movement Phase. If firing at a moving enemy unit, the attack is called Opportunity Fire; otherwise the two are identical in execution. Certain units may fire during the Indirect Fire Phase (see [7.0]).
Combat is resolved on several tables. The Anti-Personnel Combat Results Table is used by all units when firing at Soft Targets. The Anti-Armor Combat Results Table (one version per firer strength state; see [6.0], Step 3) is used by all units when firing at Hard or Protected Targets. The Range Attenuation Table reduces the combat value based on increasing range.
Every combat unit has a maximum range printed on it. This is the greatest number of hexes it can fire at a target. All other things being equal, the ability of an individual unit to use its firepower varies with the range it fires over. The Attack Strengths of the various units were calculated on the basis of the units engaging targets at an average of 400 to 600 meters (2 to 3 hexes in game terms). This effect is called Range Attenuation and is numerically summarized on the Range Attenuation Table. H Class units are insensitive to Range Attenuation, which is reflected in the H Class Combat Procedures.
GENERAL RULE:
In order to fire at an Enemy target, a unit must be able to observe the target and must be within firing range. In the Direct Fire Phase, a Player may attack any Enemy unit. During the Movement Phase, a Player may attack only the unit that the Enemy Player is moving at that moment. When an attack is executed, the result is determined by the Fire Routine, which considers the characteristics of the firing unit, the panic status of the firing unit, the characteristics of the target unit, the range, and the effects of terrain.
PROCEDURE:
To make an attack, a Player identifies which of his units are firing and which Enemy unit is the target. (NOTE: One attack may be made with several units firing at the same target.) Each individual unit that is attempting to fire consults the Panic Table (see sheet). A 1d10 is rolled, and if the outcome of the roll falls within the limits of the numbers specified, the unit panics and may not fire or move. Place a Panic Marker on that unit to indicate this condition. If the unit does not panic, it proceeds to the fire routine.
FIRE ROUTINE:
Step 1: The attacking Player determines the range in hexes between the firing unit and the target unit. When counting hexes to determine range, count the target unit’s hex but not the firing unit’s hex. If the computed range exceeds the range of the firing unit, the unit may not fire. (NOTE: Range should be calculated before the Player announces his attack, because once the attack is announced, the unit must fire, even if its fire will be ineffective.)
Step 2: Once the target unit is determined to be within range, the attacking Player determines the type of target he is attacking: Hard (units with bracketed Defensive Strength), Protected (units with parenthesized Defensive Strength), or Soft (units with no brackets or parentheses around Defensive Strength).
Step 3: The attacking Player modifies his attack strength for range attenuation by consulting the Range Attenuation Table:
a) Determine the target type (i.e., Hard Target, Protected Target, or Soft Target) and locate the appropriate section of the table.
b) Determine the Weapon Class of the firing unit and find the appropriate column within the section of the table located in step (a).
c) Determine the range (in hexes) from the firing unit to the target unit.
d) Cross index the range with the Weapon Class column and read the modification indicated on the table.
Follow this procedure for each of the units that are firing at the same target unit. If several units are involved the Player may wish to write down the modified strength of his firing units as he calculates them via the table.
The target unit subtracts its defense strength from the modified attack strength which yields the attack superiority number.
The attacking Player then selects the appropriate Combat Results Table for the defending unit:
- For Hard/Protected targets determine the state of the firing unit (i.e., Full Strength, D-1 Strength, or D-2 Strength) and use the Anti-Armor Table that corresponds to this state.
- For Soft targets, determine the potential modifier based on the state of the firing unit (-1 for a D1 firer, -2 for a D2 firer) and use the Anti-Personnel Table.
Step 4: The attacking Player now rolls 1d6, modifies the result for any terrain defense bonuses or firer strength status, and cross-indexes the modified die roll with the appropriate attack superiority column. A result is achieved which is immediately applied to the target unit (e.g., D1 meaning 1/3 of the unit is destroyed; D2 meaning 2/3 of the unit is destroyed). Sometimes a parenthesized number results which requires another die roll on the part of the defender (see Combat Results Table). Also, if the attacking unit is a G Class, it may deplete its ammunition (see 6.84).
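The four-step Fire Routine above can be sketched as a small procedure. The attenuation and CRT column values below are illustrative placeholders, not the game's actual tables; only the flow (range check, range attenuation, attack superiority, modified die roll) follows the rules as written.

```python
import random

# Illustrative sketch of the Fire Routine (Steps 1-4). The attenuation and
# CRT column values below are placeholders, not the game's actual tables.
ATTENUATION = {1: +2, 2: 0, 3: 0, 4: -1, 5: -2}   # hypothetical per-range modifiers
CRT_COLUMNS = [-2, 0, 2, 4, 6]                    # hypothetical superiority columns

def resolve_fire(attack_strength, defense_strength, range_hexes, max_range,
                 die_mod=0, roll=None):
    """Return (CRT column index, modified die roll), or None if out of range."""
    if range_hexes > max_range:
        return None                               # wasted fire (see 6.14)
    modified_attack = attack_strength + ATTENUATION.get(range_hexes, -3)
    superiority = modified_attack - defense_strength   # attack superiority number
    # Use the rightmost CRT column the superiority qualifies for.
    column = max((i for i, c in enumerate(CRT_COLUMNS) if superiority >= c),
                 default=0)
    die = roll if roll is not None else random.randint(1, 6)
    return column, die + die_mod                  # cross-index on the CRT

# A 10-strength firer at 2 hexes vs. a 4-defense soft target in a town (-1):
print(resolve_fire(10, 4, 2, max_range=6, die_mod=-1, roll=4))
```

The out-of-range case returning `None` mirrors 6.14: the attack is still considered executed, just ineffective.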
CASES:
[6.1] RESTRICTIONS ON FIRE COMBAT
[6.11] A “Panicked” unit may not fire.
[6.12] A unit may suffer a Combat Result which prohibits it from firing or which reduces its effectiveness (see the explanation of Combat Results).
[6.13] A unit may not fire more than once during the Direct Fire/Movement Phase. Note that when a Player attacks, he may fire with more than one attacking unit (see COMBAT, PROCEDURE). When he announces an attack, a Player identifies which units are firing. He may not add to this list after he has stated it, nor may he fire at the same target in a later attack during the same Phase.
[6.14] Once a Player states an attack, he must execute that attack. He is responsible for calculating the chance of success before he states his intentions. If he states an attack which is subsequently found ineffectual (most commonly because he fired on a target out of range), the attack is still considered to have been executed. In effect, the firing units have wasted their fire.
[6.2] EFFECT OF OTHER UNITS
Units never block the Line of Sight. A Player may fire through Friendly and Enemy units. Whenever a target unit is stacked with other units in a hex and it receives a combat result, the other units in the hex are unaffected. Personnel being transported by a vehicle are a special case (see Section 9.0).
[6.3] MULTIPLE FIRE ATTACKS
When a Player declares several units to fire at the same target, they are considered to all be firing simultaneously.
He resolves each unit’s fire separately in any order he wishes. He must, however, resolve each fire.
[6.31] If a Player assigns several units to fire on a single target unit and the target is eliminated before all the units have had a chance to fire, the remaining units are considered to have fired for that Game-Turn.
[6.32] A multiple-fire attack is considered a single attack for purposes of the Sequence of Play. If a Player states that three of his units are making one attack, the fire of each Friendly unit involved in that attack is resolved before the Enemy Player may do anything, e.g. Overwatch Fire.
[6.4] DIRECT FIRE
[6.41] Direct Fire occurs during the Direct Fire Phase and is executed against any enemy units the Player can observe. In order to execute a Direct Fire attack against an Enemy unit, the attacking unit must be able to spot the unit with a clear Line of Sight, according to the rules of Spotting (see [5.0]).
[6.42] Any unit with an Attack Strength may use Direct Fire, except “Panicked” units.
[6.43] On-map H Class units may use Direct Fire, i.e. the target is spotted by the firing unit; if such a unit Direct Fires, resolve the attack on the D2 CRT. It is treated as Tight Pattern, affects all units in the target hex, and the Range Attenuation Table is not used. The target Defense Strength is not deducted from the Attack Strength, but the printed Attack Strength may be reduced by damage (see 7.16).
On-map artillery units have minimum and maximum ranges. If H class units do not have a minimum range on their counter, they are assumed to have a minimum range of “2”.
[6.44] All tanks and assault guns/tank destroyers are also considered to be armed with coaxial/bow and/or pintle-mounted MMG. These units (along with all APCs, including halftracks) can make a 2R3 (strength/type/range) attack on soft targets instead of an M class attack.
US and USSR tanks have an AAA HMG mounted atop the roof which can only be fired if the tank is not suppressed (buttoned up); this would be a 3R5 attack.
[6.45] Infantry units may attack vehicles at a range of 0 or 1 hex with a Direct Fire H Class attack. This attack uses a base strength of 6H, reduced for range attenuation by -3 at 1 hex range, plus any damage and/or Suppression status of the firing unit. The attack takes place on the D2 table, the target cannot button up, and rules for direct fire H Class attacks apply (7.32). Terrain is ignored.
Example: the H Class strength of a rifle platoon at D1 and S1 attacking at a range of 1 hex would be -1: base of 6H, -3 for 1-hex range, -1 for D1 status of on-map unit using an H Class attack (7.16) and -3 for S1 status (7.6).
Note: This has a chance of Disruption at close range (0-1 hexes).
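The strength arithmetic in the 6.45 example can be checked mechanically. The sketch below encodes only the modifiers stated in the rule: base 6H, -3 at 1-hex range, -1 per damage level (7.16), and -3 per suppression level (7.6).

```python
# Sketch of the infantry-vs-vehicle H Class strength arithmetic from 6.45.
def infantry_h_strength(range_hexes, damage_level, suppression_level):
    assert range_hexes in (0, 1), "attack allowed only at 0 or 1 hex"
    strength = 6                                  # base 6H attack
    strength -= 3 if range_hexes == 1 else 0      # -3 at 1-hex range
    strength -= damage_level                      # -1 per D level (7.16)
    strength -= 3 * suppression_level             # -3 per S level (7.6)
    return strength

# Rifle platoon at D1 and S1 firing at 1 hex: 6 - 3 - 1 - 3 = -1
print(infantry_h_strength(1, 1, 1))   # → -1
```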
[6.5] OPPORTUNITY FIRE
[6.51] Opportunity Fire occurs during the Movement Phase and is executed only against enemy units that are moving and can be observed. (See also Overwatch Fire, 6.9.)
[6.52] A Player must pause each time his unit moves into a hex to allow the Enemy Player an opportunity to fire at the moving unit. This pause permits the Enemy Player to calculate ranges, etc. before he announces the attack. Only the unit actually being moved may be fired at. The attack is resolved exactly as detailed in Section 6.0. The fire is resolved in the hex that the moving unit has entered.
[6.53] Fire against a moving enemy vehicle unit incurs a -1 modifier on the CRT.
[6.54] If a moving unit survives Opportunity Fire, it may continue moving. However, it may be fired at again when it enters a new hex, although the Enemy Player would have to use a different unit, since no unit may fire more than once per Game Turn (Exception: see Overruns, 12.0).
[6.6] TERRAIN EFFECTS ON COMBAT
When a target unit lies in a town or woods hex or is behind covering terrain, it receives a defense bonus in the form of a die modification (see Terrain Effects Chart; Covering Terrain, 10.2).
HULL DOWN shielding: Hull Down units that are attacked through dune or crest hexsides receive a -2 to the die roll. Russian tanks (T34/85, T-10, T-55, Tiran/TI-67 and T-62) only receive a -1 to the die roll. Hull Down does not apply to indirect fire or Overrun attacks. Opportunity Fire does not trigger Hull Down from attacking units performing Overrun.
If the firing unit is on a higher terrain than the target, e.g. a brown slope hex, then the defender does not get the benefit of Hull Down.
[6.7] PARTIAL STRENGTH UNITS vs. SOFT TARGETS
Full Strength units do not modify the die roll when firing at Soft Targets. D1 strength units subtract one from the die roll when firing at Soft targets. D2 strength units subtract two from the die roll when firing at Soft targets.
[6.8] SPECIAL WEAPONS CLASSES
[6.81] An H Class unit, in addition to conducting Direct Fire [6.43], may conduct Indirect Fire (7.0) as either a Tight Pattern or a Loose Pattern attack. The difference is that a Tight Pattern attack affects defenders in the impact hex only. A Loose Pattern attack affects defenders in the impact hex and the surrounding six adjacent hexes (Impact Zone). All H Class Fire must be designated as either Tight (T) or Loose (L) Pattern as part of the Fire Plot. In the absence of such a designation, the fire is considered to be Tight Pattern.
[6.82] H Class Fire affects all units located in the impact hexes (Zone). Thus an H Class unit attack may affect more than one unit per Game Turn though it may fire at only one impact hex per Turn. When there are multiple units in the Impact Zone (hex) simply attack each one separately as though it were the only unit present. The Attack Strength of an H Class unit is not divided between multiple defenders. It attacks each one with its full strength (subject to the Resolution procedure).
[6.83] Naturally, a Loose Pattern attack represents a less dense bombardment of a given area than a Tight Pattern attack. For this reason, the procedures for resolving a Loose Pattern attack differ from a Tight Pattern; see 7.21.
[6.84] Each G Class unit is assigned an ammunition depletion rating, which is printed on its counter face. Whenever a G
Class unit fires, the Owning Player will roll one die immediately after the resolution of the attack. If the die roll is equal to or less than the printed ammunition depletion rating for the firing unit, the unit is considered to have expended all of its missiles and is considered, henceforth, to have a zero G Class Attack Strength. If a unit does not have a dual-Class identity (e.g., US M150 units) it is then removed from the map. G Class units that panic in the act of firing do not check ammunition depletion.
[6.85] A G Class unit may only fire at vehicles, both Hard and Protected target types. Soft Targets being transported are equally affected by the fire of a G Class unit.
[6.86] The following G Class units have a minimum range of two hexes (i.e., they may not fire at adjacent targets): Soviet BRDM; British Swgf; West German Cobra.
[6.87] Each Dual-Class unit has two counters: One represents its G Class Strength, the other its conventional R- or M Class Strength. All Dual-Class units (American infantry and Soviet BMPs) are portrayed on the map with their G Class counter. Whenever a Dual-Class unit loses its G Class capability on account of ammunition depletion after a G Class Attack (see 6.84), the G Class counter is permanently removed from play and replaced with its conventional counterpart.
[6.88] A dual-Class unit may fire with its conventional Strength even though it is portrayed on the map with its G Class counter. It is neither necessary nor desirable that a switch be made between the G Class counter and the conventional counter for the unit to fire its conventional Weapons Class.
AIW scenarios will list anti-tank missiles (G-type units) as separate units to be assigned to carrier units. These are ATG units in Mech War 4 and can be assigned the same way. When out of ammo, the units are removed from the board. Note that for original MechWar 2/4 scenarios, ATG units are the same as their base carrier units and when out of ammo, the counter is flipped to its basic carrier counter.
OPTIONAL AMMUNITION DEPLETION:
MW2: Most direct fire units are assigned an ammunition depletion number, either on the unit’s data sheet or on the MW4 counter. Units are assumed to initially have an Ammunition Level of 3. Ammunition Levels of 1 and 2 indicate reduced amounts. (Specific ammunition levels may be assigned by scenario.) Whenever a unit fires, the Owning Player will roll one die immediately after the resolution of the attack. If the die roll is equal to or less than the printed ammunition depletion rating for the firing unit, the unit’s Ammunition Level is reduced by 1. (Players may roll two dice for combat with the first (colored) die being the ammo depletion number.) On any other result, the Ammunition Level is not affected. While positive, the Ammunition Level has no effect on combat. When the Ammunition Level reaches 0, the unit is out of ammunition for that weapon system and may not fire. (But see [6.44].)
Units with an attack superiority number of +4 or greater on the Full Strength CRT or +7 or greater on the D1 CRT add +1 to their ammunition depletion number.
(This represents the fact that more effective gun systems need less ammo due to the ability to defeat armor with fewer shots.)
Weapon systems without ammo depletion numbers do not suffer ammunition depletion. (But see below.)
Note: For AIW scenarios, many combat units do not have ammo depletion numbers. Israeli armor units are assumed to have an ammo depletion number of “2”. Arab armor units are assumed to have an ammo depletion number of “3”.
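The optional depletion bookkeeping above can be sketched as follows. The depletion rating of 2 in the usage line is the assumed Israeli armor value from the AIW note; the starting Level of 3 is from the rule text.

```python
import random

# Sketch of the optional MW2 ammunition-depletion bookkeeping: start at
# Ammunition Level 3, drop one level on a post-attack roll <= the rating.
class AmmoTracker:
    def __init__(self, depletion_rating, level=3):
        self.rating = depletion_rating
        self.level = level

    def can_fire(self):
        return self.level > 0            # at Level 0 the weapon may not fire

    def check_after_firing(self, roll=None):
        """Roll after each attack; reduce the Ammunition Level on a low roll."""
        roll = roll if roll is not None else random.randint(1, 6)
        if roll <= self.rating:
            self.level -= 1
        return self.level

israeli_tank = AmmoTracker(depletion_rating=2)   # assumed AIW Israeli rating
israeli_tank.check_after_firing(roll=2)          # 2 <= 2: level drops to 2
print(israeli_tank.level, israeli_tank.can_fire())
```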
[6.9] ADDITIONAL COMBAT ACTIONS
The following rules, taken from the October War errata, are implemented here.
[6.91] OVERWATCH FIRE
GENERAL RULE:
The use of Overwatch fire allows a player to fire one of his units during his own Fire/Movement Phase at an enemy unit which has just fired at a moving or firing unit using Opportunity Fire. The Overwatch fire must be executed immediately.
Example: A T62 platoon moves into hex A and receives fire from an M60 unit in hex B. Another T62 unit in hex C could then Overwatch fire at hex B immediately. If not conducted immediately, Overwatch fire could not be directed against the above hex for the rest of that Movement Phase.
[6.911] In order for a unit to be eligible to fire Overwatch Fire it must fulfill the following conditions: it may not have moved that Game Turn nor may it have fired in that Game Turn.
[6.912] A unit may only conduct Overwatch Fire once per Game Turn.
[6.913] A unit conducting Overwatch Fire may immediately be itself attacked by enemy Overwatch Fire. This sequence may alternate back and forth one unit at a time for each player until all Overwatch Fires have been executed, i.e. a firefight.
[6.914] AMBUSH
[6.9141] Before the start of the game, companies may secretly designate a single hex within their LOS as an ambush hex. (These units must be given a special Ambush Command.) Any enemy unit entering the ambush hex or adjacent hexes may be fired upon using Overwatch Fire from the designated units. However, all units that designate the same ambush hex may fire before any enemy Overwatch Fire can interrupt the ambushing units.
[6.9142] If, at the end of the combined Fire/Move Phase, no units of the company have become spotted, they may continue their Ambush status. (If the units ambushed a hex, it is assumed the target would have been eliminated; otherwise the target would be able to spot the ambushing units.) Once the Ambush status is lost by a company, no further ambushes may be declared. Units may cancel their Ambush status at any time, but may not thereafter resume it.
[6.92] PULL BACK
COMMENTARY:
Vehicles in a hull down position behind certain types of covering terrain will usually pull back behind the covering terrain after firing to avoid return fire.
GENERAL RULE:
Any unit (including helicopters, 17.0) that fires while in defilade (in a hilltop hex or behind a slope, crest or railroad embankment; see 10.2) may, after firing and potentially receiving return or
Overwatch fire, revert at the end of that phase to a hidden (unobserved/inverted) state if no enemy units are within normal Observation Range (see 5.2).
*Example*: further to the situation described in 6.91, if no Soviet units were within one hex of the firing M60 unit behind the slope hexside in 3804 after it potentially received Overwatch fire, the M60 unit could, at the end of the phase, resume an unobserved/inverted state and would not be eligible as a target during the next Direct Fire phase unless it fired again.
[6.93] SPLIT FIRE
All vehicular platoons consist of multiple vehicles of the respective type shown on the counter. Given the breakdown of a platoon into abstracted D1/D2 components, a full strength platoon could fire as a Full strength platoon, one D1 and one D2, or three D2s. In essence, each individual platoon element is seeking its own target. Panic in this circumstance is evaluated for each individual fire, except if one element in a platoon panics, they all panic.
[6.10] R CLASS FIRE VS ARMORED VEHICLES
R Class units may fire at Hard and Protected Target types to a maximum range of 3 hexes using suppressive fire. To perform this type of combat, the attacking Player subtracts the defense strength of the target from the attack strength of the firing unit, yielding an attack superiority number. The attack is then resolved on the Anti-Personnel Combat Results Table, modifying the die roll for terrain and by -1 per attacker disruption level.
If any result other than no effect is achieved, the defending Hard Target type is placed in an S1 state.
Note: This has a chance of suppression at up to three hexes.
[7.0] INDIRECT FIRE
GENERAL RULE:
In most scenarios, both Players are given an Off-Board Artillery capability (14.0), which simulates the availability of artillery, rocket, or mortar batteries located elsewhere than in the area depicted on the map, to fire at the Enemy targets located on the map. In addition, players may have on-map H Class combat units assigned as organic support which function similar to Off-Board Artillery except that they move on the map and are therefore susceptible to enemy fire.
PROCEDURE:
*MechWar 4* artillery is available in terms of concentrations of H attack strength points per turn. *MechWar 2* scenarios will generally list each side’s available artillery in the Artillery Special Information sections of the *MechWar 2* scenario descriptions. The following *MechWar 2* artillery assignments generally correspond to the *MechWar 4* “H” concentrations:
Light howitzer (6H)
Medium howitzer (7H)
Heavy howitzer (8H)
Each Scenario's Order of Battle states that a Player has, for example, 3 batteries of 105mm Light Howitzers. For *MechWar 4*, this would translate to 3 concentrations of 6 H points each abbreviated to read: OFBDA 3(6H).
For those artillery concentrations listed as "sections," use the same "H" value but the artillery cannot be used as a Loose Pattern ([7.2]). Artillery concentrations listed as "batteries" (more typical) may employ a nominal tight triangle shaped 3-hex pattern with two hexes placed away from the direction of fire. All three hexes can be at full strength. A battery using a loose pattern is treated as described in [7.2]. Concentrations listed as "battalion" may employ loose patterns as defined in [7.2], but all 6 hexes can be at full strength.
For scenarios that list specific artillery caliber, the following mapping can be used to obtain the equivalent *MechWar 4* strength:
| Caliber | Strength |
|------------------|----------|
| 25 lbr, 105mm, 122mm | 6H |
| 130mm, 152mm | 7H |
| 155mm, 180mm, 203mm mortar, 240mm | 8H |
Note that on-map H-class units will have their own H factor on their counters.
CASES:
[7.1] AVAILABILITY AND CAPABILITIES
[7.11] Rather than having to plot artillery fire, *MechWar 4* uses the "October War Alternate Artillery Rules Variant" by Robert Cairo. (See separate charts.)
*MechWar 4* PROCEDURE:
In the Indirect Fire Phase, instead of plotting artillery fire, each player will roll to see if there is a successful "Fire for Effect" for each artillery concentration allotted them in the scenario. If the artillery concentration achieves an FFE, then roll to check for scatter.
*Optional*: Off-Board artillery should incur a one turn delay before rolling for effect. (Recommended)
[7.12] Once an impact hex has been designated, continuous fire into the same impact hex has a beneficial scatter modifier. If the impact hex is changed or not fired on, the indirect fire procedure must be repeated.
[7.13] A unit firing Indirect Fire may fire at one and only one target hex and only one concentration may be fired at a single hex. Hexes which overlap due to scatter are only affected by the highest concentration, i.e. not additive.
A player may request fire on any hex on the map within range of the firing unit. Off-map artillery generally has unlimited range on the maps, but certain scenarios may limit the range of off-board artillery. The range of on-board artillery is shown on the counters or the unit’s status sheet.
Certain units have minimum ranges, below which fire cannot be targeted. (If not on the counter, a minimum range of 2 is assumed.)
The scatter table provides different modifiers depending on whether the target hex is spotted, i.e. within LOS of a friendly unit. Artillery units themselves may observe for their own fire.
Note: Unless otherwise specified, artillery in AIW scenarios requires at least one friendly combat unit to have LOS to the target hex, i.e. be spotted.
[7.14] If Indirect Fire is to be directed at the same target hex on the next turn, it impacts and attacks any units present in the impact zone in the next Indirect Fire Phase, at which point scatter may change. The impact hex(es) remain on the map until the next Indirect Fire Phase. This is not true for rocket artillery.
[7.15] If an on-map H Class unit panics in a turn when it is plotted to fire, the fire is cancelled for that turn. In addition, if the on-map H Class unit is subjected to Direct or Opportunity fire in a turn when it is plotted to fire, the fire mission is cancelled whether the unit takes losses or not. The firing unit must have an Overwatch command to fire (if using the Command rules).
[7.16] If an on-map H Class unit takes losses, lower the H concentration 1 H factor for each loss to a minimum of 1, i.e., if a 4H mortar section takes one loss, it would thereafter fire a 3H concentration. If the attack is on a soft target, then the damage states cause a negative DRM instead, e.g. D1 = -1 DRM, etc.
[7.17] Indirect Fire is always subject to the probability of Scatter. The October War Artillery Variant table lists the die rolls necessary for Scatter. If the result is less than or equal to the listed value, the actual impact hex will be different than the intended target hex for any Indirect Fire. Roll a die for each H concentration that is Indirect Firing. If the Indirect Fire scatters, roll the die again and consult the Scatter Diagram. The Indirect Fire from that unit impacts one hex away from the target hex in the indicated direction.
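The two-roll scatter procedure in [7.17] can be sketched as below. The threshold value in the usage lines is a placeholder for the Artillery Variant table entry, and the axial-coordinate hex offsets are an assumed encoding of the Scatter Diagram's six directions.

```python
# Sketch of the 7.17 scatter check. Hex directions are encoded as
# axial-coordinate offsets (an assumed stand-in for the Scatter Diagram).
DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def resolve_impact(target, scatter_threshold, scatter_roll, direction_roll):
    """Return the actual impact hex for one H concentration."""
    if scatter_roll <= scatter_threshold:         # fire scatters
        dq, dr = DIRECTIONS[direction_roll - 1]   # second roll picks a direction
        return (target[0] + dq, target[1] + dr)
    return target                                 # impact on the intended hex

print(resolve_impact((10, 10), 2, scatter_roll=5, direction_roll=3))  # no scatter
print(resolve_impact((10, 10), 2, scatter_roll=1, direction_roll=3))  # drifts one hex
```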
[7.18] Units receiving Indirect Fire receive a terrain benefit if they are in woods or town hexes; the benefit is a -2 from the die roll.
Improved Positions: Special improved positions for tanks may be specified by scenario (emplacements). These may not be constructed but only exist via initial placement. They are indicated by Hull Down markers and provide a -2 to the CRT die roll for all tanks, including Russian tanks, that are subject to Indirect Fire. Only one unit is allowed per marker, though up to three markers may exist in a hex. A wreck placement from a unit in one of these improved positions eliminates the improved position.
[7.19] ENVIRONMENTAL EFFECTS (Optional):
An artillery die roll result of “1” on a town hex will produce a MechWar 2 Town Devastation in that hex. A result of “1” in a woods hex will produce an Abatis. A Devastation hex results in an additional -1 from the die roll for infantry units. It has no effect on vehicle units.
[7.2] TYPES OF H FIRE
H fire comes in three varieties: Tight pattern, Loose pattern, and smoke.
[7.21] Tight pattern H fire affects only the impact hex. Loose pattern H fire affects each hex with one-half of the original H concentration (round fractions up). All Missile/Rocket fire uses Loose pattern.
[7.22] Smoke must be fired in a Tight pattern and therefore affects only the impact hex. Place an inverted or smoke marker on that hex (see Smoke, 18.0).
[7.3] H FIRE AGAINST HARD AND PROTECTED TARGETS
Hard targets (units with a bracketed [ ] Combat Strength) and Protected targets (units with a parenthesized ( ) Combat Strength) have the option to receive H fire either buttoned or unbuttoned.
[7.31] To button up, a Hard or Protected unit voluntarily assumes a state of Suppression 1 (S1) immediately prior to the resolution of the attack. For resolution, apply the results of using the H concentration on the “H Indirect” line of the D2 CRT. The S1 applies equally to any infantry mounted on the unit.
[7.32] To receive H fire unbuttoned, a Hard or Protected target (together with its mounted infantry) is liable to Suppression 2 (S2) only if the H fire is in a Tight pattern. To resolve the Suppression on an unbuttoned target, roll the die; if the number rolled is equal to or lower than the H concentration of the attacking unit(s), the target is double suppressed (S2). Otherwise, it is Suppressed (S1). Example: a Hard target un-buttoned on a 4H concentration in Tight pattern rolls a 5; the unit is only S1. If the roll had been a 4 or less, the unit would have been S2. Also, use the H concentration results on the “H Indirect” line of the D2 CRT.
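The unbuttoned-suppression check in [7.32] reduces to a single comparison, sketched here with the rule's own 4H example.

```python
# Sketch of the 7.32 unbuttoned-suppression check: a roll equal to or lower
# than the attacking H concentration means double suppression (S2), else S1.
def unbuttoned_suppression(h_concentration, roll):
    return "S2" if roll <= h_concentration else "S1"

# The example in the rule: an unbuttoned target under a 4H Tight pattern.
print(unbuttoned_suppression(4, 5))   # → S1
print(unbuttoned_suppression(4, 4))   # → S2
```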
[7.4] DURATION OF SUPPRESSION
Suppression of any level is automatically removed during the next Panic and Suppression Removal Phase.
[7.5] EFFECTS OF SUPPRESSION ON HARD AND PROTECTED TARGETS
[7.51] Suppression affects a Hard Target by reducing its Attack Strength by 2 points for each suppression. Example: A unit with an Attack Strength of 15 which is double suppressed has an attack strength (while in this state) of 11. Suppressed Hard and Protected targets may not spot for Indirect Fire. Suppressed Hard and Protected targets may not mount or dismount infantry.
[7.52] The effects of suppression on Protected Targets are identical to its effects on Hard Targets except for the following units: Soviet LRRPs are open-topped and thus cannot be buttoned up. These units automatically accept fire on the D2 CRT and all passengers are affected as is their carrying unit. The effect of suppression on M113s, BMPs, BRDMs and BTR-60s is a reduction of 3 attack strength points for each suppression state. All Protected targets button up to protect their passengers just like Hard targets.
[7.53] Suppressed Hard / Protected targets may not fire their external weapons systems, e.g. roof-mounted AA HMGs.
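The suppression penalties in [7.51] and [7.52] are simple per-state subtractions. The sketch below uses the worked 15-to-11 example from 7.51 and the -3 penalty for the listed APC types.

```python
# Sketch of the 7.5 suppression arithmetic: -2 Attack Strength per suppression
# state for Hard targets, -3 for the listed APC types (and Soft targets, 7.6).
def suppressed_strength(base_strength, suppression_states, per_state_penalty=2):
    return base_strength - per_state_penalty * suppression_states

# Worked example from 7.51: strength 15, double suppressed (S2) -> 11.
print(suppressed_strength(15, 2))                        # → 11
print(suppressed_strength(12, 1, per_state_penalty=3))   # e.g. an M113 at S1 → 9
```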
[7.6] EFFECTS OF H FIRE AND SUPPRESSION ON SOFT TARGETS
Soft Targets that are not in APCs (protected vehicle) are affected by H fire in Tight or Loose pattern identically. The Indirect fire attack is conducted on the Artillery line of the Anti-personnel CRT using the H fire concentration as the Attack Superiority column. (See Anti-Personnel CRT for explanation of results.)
The effect of suppression on Soft Targets is a reduction of 3 Attack Strength points for each suppression state, and the unit may not move until the suppression is removed. Soft targets incurring a Suppression result may immediately remove the Suppression at the cost of one Disruption Level. Suppressed soft targets may not spot for Indirect Fire. **Suppression states in excess of S2 are only possible against Soft Target types.**
[7.7] CLOSE AIR SUPPORT [CAS]
Close Air Support is in all ways identical to Indirect Fire except that it is always Tight pattern, may never drop smoke, and need be plotted only one turn in advance. See Rule 15.0. Armored vehicles may not button up when being attacked by Close Air Support; they are attacked on the +8 column of the D2 CRT. The only terrain benefits a target unit receives when being attacked by CAS are those given in 7.18.
[7.8] COUNTERBATTERY FIRE
[7.81] A Player may assign any or all of his artillery and mortar units (only) to a Counterbattery task (CB). If and when an Enemy artillery or mortar unit (only) executes Indirect Fire, the die is rolled. If the result is a one, the Enemy unit is Spotted and the Friendly Player may automatically fire at this unit with one or more of the Counterbattery units within range; the Counterbattery Fire is, in effect, triggered. Otherwise the Firing unit remains concealed and there is no Counterbattery Fire.
[7.82] Counterbattery Fire is, by definition, a variant of Opportunity Fire and it is executed just as though it were Indirect Fire.
[7.83] Counterbattery Fire is executed in the same Game Turn that it is triggered.
[7.84] Several Enemy units may be triggering Counterbattery Fire on the same turn from several Friendly units. The Friendly Player can allocate his Counterbattery Fire among the several targets as he sees fit, so long as no unit attacks more than once per turn and all attacks are executed as separate events. If a Player is executing Counterbattery fire on more than one Enemy unit in a turn, he must allocate all of his fires before executing any given Counterbattery.
[7.85] By its nature, the act of Counterbattery can reveal the Counterbattery unit to the Enemy’s Counterbattery units, who proceed to execute Counter-Counterbattery, so to speak. A Player may deliberately withhold a given unit from executing Counterbattery in hopes that some Counter-Counterbattery Enemy unit will reveal itself. All of this Indirect Fire, Counterbattery, Counter-Counterbattery, etc. takes place in the Game Turn in which it is triggered.
[7.86] If an Indirect Firing unit draws Counterbattery Fire, other units in its immediate vicinity may be affected, depending on the type of Counterbattery pattern fired, and the extent and direction which the Counterbattery Fire scatters. The effect of Counterbattery Fire is no different from that of Indirect Fire.
[7.87] Once a unit has been Spotted (see 7.81) by Counterbattery it remains Spotted for all following Game Turns (and can be fired on by normal Indirect Fire) so long as it remains in the same hex it was spotted in. This is true even if it was not immediately fired on by Counterbattery.
[7.88] ALTERNATE PROCEDURE: Every Game Turn a unit fires from the same hex the chance of it revealing itself goes up by one-sixth (e.g., the second time a unit fires from a given position, a die roll of one or two Spots it). This is a very realistic optional rule, but it does involve considerable record-keeping.
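The spotting check of 7.81 and the escalating optional check of 7.88 can be sketched together as follows (a Python illustration of my own; the function name and parameters are assumptions, and a six-sided die is assumed):

```python
def is_spotted(die_roll, times_fired_from_hex=1, alternate_rule=False):
    """Counterbattery spotting check (7.81, optional 7.88).

    Base rule (7.81): a firing unit is Spotted only on a roll of 1.
    Alternate rule (7.88): the spotting threshold rises by one for
    each time the unit has fired from the same hex, so the second
    shot from a given position is Spotted on a 1 or 2, and so on."""
    threshold = times_fired_from_hex if alternate_rule else 1
    return die_roll <= threshold

# Base rule: only a 1 triggers Counterbattery.
# Alternate rule, second shot from the same hex: a 1 or 2 spots.
```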
[8.0] MOVEMENT
GENERAL RULE:
During the Combined Direct Fire/Movement Phase, the Players alternate moving their units one by one, by stack, or by formation if using the Formation rules. A Player may move any unit which has not fired during the current Game Turn and which is not suffering a combat result which prohibits it from moving (see Combat Results Explanation; Panic: Section 13.0). Within these restrictions, a Player may move one, some, none, or all of his units. A unit or stack moves hex by hex. The distance a unit or stack may travel in a Movement Phase is dependent on the lowest Movement Allowance in the stack or unit and the cost of the terrain it crosses and enters. Whenever a unit enters a hex, it may be liable to fire from Enemy units using Opportunity Fire. When a Player passes (i.e., declines to move another of his units), play passes to the other Player. If the other Player also passes and the first Player again declines to move, the first Player may move no more units during that Movement Phase; the opposing Player may continue to move his own units until he, too, passes or has moved all his units.
PROCEDURE: The first Player announces that he will move a particular unit. He consults the Panic Table, cross-references the current strength of the unit with the movement column, and rolls a d10. If the number rolled is one of those specified on the Panic Table, the unit panics. **Full strength units do not check for panic.** If the unit is eligible to move normally, the Player moves the unit from hex to hex up to the limits of its Movement Allowance. Basically, a unit expends one movement point for each hex it enters. Some hexes and hexsides cost more than one Movement Point for a unit to move through or across them (see Terrain Effects Chart).
[8.1] ROADS AND TRAILS (COLUMN FORMATION)
When a vehicle moves so that its path coincides with the path of a road or trail in column formation, it pays only the cost for moving along the road or trail, ignoring any other terrain. It costs a vehicle in column 1 movement point to move through a hexside containing a road. It costs a vehicle in column 2 movement points to move through a trail hexside regardless of other terrain in the hex. Personnel units (unmounted) may move 1 hex regardless of the presence of roads, trails, or terrain in any Movement Phase.
[8.11] Vehicular units enter column formation by remaining stationary in their Movement Phase. They may leave column at the beginning of their movement for no cost, i.e. a free action. Units leaving column may move or fire, but may not Opportunity or Overwatch fire. Units leaving column are subject to Opportunity Fire before they leave column. Infantry may enter column at no cost and leave column like vehicle units.
[8.12] Column formation has no effect on stacking limits.
[8.13] Units in column may not enter defilade.
[8.14] Units in column are treated as 1-step units and fire on the D2 CRT. They suffer a +2 modifier on the CRT when fired upon.
[8.15] Units automatically leave column when attacked in close assault, though this is not done until the unit leaves the hex or the assault ends.
[8.16] INFANTRY DOUBLE TIME AND FATIGUE
Infantry units may double their movement allowance at the cost of fatigue. They may continue to double their movement at the cost of increasing their fatigue. Units with no Fatigue Level may instead triple their movement by charging, at an increased fatigue penalty.
[8.161] Units initially doubling their movement are marked as Fatigue Level 1. They may not charge. Otherwise, they are treated normally.
[8.162] Units at Fatigue Level 1 may double their movement, but are marked as Fatigue Level 2. Fatigue Level 2 units may not move or expend movement points for any purpose. Otherwise, they are treated as normal infantry.
[8.163] Units with no Fatigue Level may triple their movement via charge. Units that charge are marked with a Fatigue Level 3 marker. Fatigue Level 3 units may not move or expend movement points.
[8.164] Units may rest and reduce their Fatigue by one Level by spending one Movement Phase without expending any movement points or firing, unless they are Overrun. They may rest in APCs as long as the infantry expends no movement points.
[8.165] The effects of Fatigue and resting occur at the end of the units’ movement.
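The fatigue ladder of 8.161 through 8.164 can be sketched as a set of small functions (a Python illustration; the function names and the (multiplier, new fatigue level) encoding are my assumptions, not game terminology):

```python
def double_time(fatigue_level):
    """Double-time movement (8.161/8.162).

    Returns (movement_multiplier, new_fatigue_level); raises if the
    unit is too fatigued to expend movement points (8.162/8.163)."""
    if fatigue_level == 0:
        return 2, 1   # 8.161: first double -> marked Fatigue Level 1
    if fatigue_level == 1:
        return 2, 2   # 8.162: double again -> marked Fatigue Level 2
    raise ValueError("Fatigue Level 2/3 units may not expend movement points")

def charge(fatigue_level):
    """Charge (8.163): only units with no Fatigue Level may triple
    their movement; they are then marked Fatigue Level 3."""
    if fatigue_level != 0:
        raise ValueError("only units with no Fatigue Level may charge")
    return 3, 3

def rest(fatigue_level):
    """Rest (8.164): one Movement Phase spent without expending
    movement points or firing removes one Fatigue Level."""
    return max(0, fatigue_level - 1)
```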
[8.17] RIVER CROSSING
The Mech War 2 River Crossing rules ([24.0]) can be used for river crossings in Mech War 2 scenarios.
[8.2] RESTRICTIONS ON MOVEMENT
[8.21] A Player may move his units in any order he desires, but once he has moved a unit he may not move it again in that Game Turn.
[8.22] A Player may not move a unit which has fired during the current Game Turn, nor may he move a unit which has suffered a Combat Result that prohibits it from moving.
[8.23] A Player may not move any units once he has passed in that Movement Phase.
[8.24] A unit may not expend more movement points than its total Movement Allowance.
[8.25] Friendly units may never enter a hex containing an Enemy unit (exception: Overrun, 12.0), nor may they enter a hex or cross a hexside which is impassable (see Terrain Effects Chart).
[8.26] Friendly units may freely enter and pass through hexes containing other Friendly units so long as they do not terminate their Movement in violation of the Stacking limits (see 8.5). Friendly units may enter hexes containing enemy units by conducting an Overrun (12.0).
[8.27] Units may never exit from the map, unless the Scenario Instructions so indicate. Units which do exit from the map may never return to play.
[8.3] ZONES OF CONTROL
There are no Zones of Control in this game.
[8.4] EFFECT OF MOVEMENT ON SOFT TARGET DEFENSE
When a Soft Target moves, it is a body of men walking or running upright. As such, it is much more exposed than a similar body of men hugging the ground in place, taking advantage of every fold in the earth, trees, boulders, etc. Therefore, when a Soft Target receives Opportunity or Overwatch fire while moving, it loses all benefits from Terrain.
[8.5] STACKING
A Player may place up to three Friendly units in the same hex. This is called stacking. He simply places one unit on top of the other. There is no movement cost to stack units or unstack them except when such action represents mounting or dismounting (see 9.0). Stacking limitations apply only at the end of the Movement Phase. During the Movement Phase, a Player may have any number of units in the same hex, as long as he meets the limit by the completion of the Movement Phase.
[8.51] When transporting a personnel (non-vehicle) unit(s) a vehicle is placed on top of the passenger unit(s) (see 9.0). For purposes of the stacking limit, a vehicle with passengers is treated as one unit. Thus, a Player may have up to three vehicle units, each with passenger, stacked in the same hex.
[8.52] Stacking has no effect on a unit's ability to attack. Units in the same stack may fire at different targets, the same target, or no target.
[8.53] Stacking has no effect on a unit’s vulnerability to Enemy fire. Enemy units may fire at a single unit in a stack and ignore any other units in the stack (Exception: see Case 7.0).
[8.54] Units stacked together are each vulnerable, in turn, to any Indirect Fire which impacts on the hex they occupy.
[9.0] TRANSPORTING
GENERAL RULE:
Transport is a specialized form of movement which allows a vehicle unit to carry one or more personnel units. It is the only time that a Player is allowed to move more than one unit at a time. Transport requires two separate operations: Mounting and Dismounting. Mounting represents a personnel unit, such as an Infantry Platoon, boarding a vehicle, such as an APC. Dismounting represents a personnel unit's disembarkation from a vehicle. While aboard a vehicle, the personnel unit is called a mounted unit.
CASES:
[9.1] PROCEDURES AND RESTRICTIONS
[9.11] To mount, a Player places a vehicle unit on top of a personnel unit. To dismount, he places the vehicle beneath the personnel unit. While transporting, the Player moves the vehicle unit with its passengers beneath as one unit.
[9.12] To mount or dismount, the personnel unit(s) and the vehicle must be in the same hex at the instant of mounting. Mounting costs 3 movement points; dismounting costs 2 movement points. These movement points are expended by the vehicle. The infantry expends 1 movement point to mount or dismount. The vehicle may move in the same turn in which a unit mounts or dismounts, as long as the vehicle does not exceed its Movement Allowance.
[9.13] When a transporting vehicle is hit by fire and takes losses (a 1 or 2 result), the passenger units take the same result. Thus, if a passenger unit dismounts from a vehicle that has taken a 2 result, the infantry also has a 2 result assessed against it.
[9.14] An infantry unit cannot mount a vehicle unit that has taken more damage than the mounting unit.
[9.15] An infantry unit or anti-tank gun stacked in an Improved Position cannot mount. The unit would have to move out of the Improved Position and mount in the following Game Turn.
[9.2] INFANTRY ON TANKS
[9.21] Infantry may ride on tanks, but if the tank is fired on, then the infantry receive fire as Soft Targets. Whatever combat roll is made is also applied on the Anti-Personnel CRT using the same differential as was achieved on the CRT against the tanks (or nearest column). Infantry are limited to riding on medium or heavy tanks and assault guns, not on light tanks, armored cars, or any vehicle with explosive reactive armor (ERA; none appear in the games as published). Infantry cannot fire while mounted when riding on a tank.
NOTE: infantry riding on a tank will be affected by R Class fire. Apply the die roll achieved against the tank on the Anti-Personnel Table to find the result, but do not use the differential column; calculate the differential in the usual way for an R Class attack on a Soft Target.
[9.3] STATUS OF UNITS ENGAGING IN MOUNTING OR DISMOUNTING
[9.31] Units which in a given Game Turn are about to engage in mounting or dismounting, or which have just engaged in mounting or dismounting, are considered to be moving for purposes of combat resolution.
[9.32] When a truck or APC unit is moved into or out of an adjacent hex, in the act of mounting or dismounting, it can trigger Opportunity Fire. If any combat effects are assessed on the unit, it is placed in the initial hex, and the mounting or dismounting operation does not take place.
[9.33] Whether or not a unit is mounted or dismounted simply depends on whether or not the truck or APC unit is on top of the non-vehicle unit. Thus, if you have two unmounted units, truck and infantry, and they are fired on during the Direct Fire Phase which is prior to the Movement Phase, they would be unmounted; then, assuming they mount during the Movement Phase and then receive fire during the Indirect Fire Phase, they would be mounted for the Indirect Fire Phase.
[9.4] COMBAT WHILE MOUNTED
[9.41] Personnel (infantry platoons) may fire while mounted in APCs. The normal range and effectiveness of mounted infantry fire is reduced. (NOTE: Due to the sequence of play, some units are fired on before they have an opportunity to fire. The defensive strength of a mounted infantry unit is dependent on whether it fires or not. When the current defending Player is asked whether or not a particular mounted unit will fire in that Direct Fire Phase, his answer is binding, i.e., if he says that that unit will not fire, it cannot then fire.)
[9.5] SPECIFIC UNIT CAPABILITIES
[9.51] One infantry unit may fire while mounted on a BMP or Marder IFV. Its maximum range is 1 hex. Execution of this mounted fire does not preclude the BMP/Marder itself from firing normally. However, the BMP/Marder may not move if the infantry unit has executed mounted fire during that Game Turn. The infantry unit is considered to be inside the BMP/Marder when executing mounted fire.
[9.52] One infantry unit may fire from a BTR-60 with unaffected range and attack strength. This mounted fire does not preclude the BTR-60 from firing normally. However, the BTR-60 may not move if the infantry has executed mounted fire during that Game Turn. Infantry mounted on a BTR-60 fires from hatches and is therefore considered dismounted for defense considerations in the turn in which they fire only.
[9.53] One infantry platoon may fire while mounted on an M113 or FV432 (and all halftrack APCs) with its range and attack strength unaffected. This does not prevent the APC from also firing while the infantry execute mounted fire.
[9.54] Infantry conduct fire standing up (halftracks) or from hatches (M113/FV432); thus for defensive considerations the infantry are considered dismounted in the turn in which they fire only.
[9.55] In addition, infantry may fire before the APC has moved. If they fire prior to the APC moving, their Attack Strength is halved; if the APC is stationary, the Attack Strength is unaffected. Whether the APC moves or not, they are considered dismounted targets (soft) in the turn in which they fire only.
[9.6] TRANSPORT VEHICLES
Only truck, UH units (17.0) and APC units may be used to transport non-vehicle units (EXC: see 9.2). APCs are M113, FV432, Marder, BMP, and BTR-60 units.
[10.0] TERRAIN
GENERAL RULE:
The terrain features printed on the map represent towns, roads and bridges, natural obstacles like rivers and streams, and wooded areas, and the very contour of the ground itself. All of this terrain affects the ability of a unit to move and fight to some degree. The exact effect of a given terrain feature on Movement and Combat is summarized on the Terrain Effects Chart.
Additionally, Terrain affects the ability of one unit to see another unit, which is treated in section 5.0 (Spotting).
[10.1] EFFECT ON MOVEMENT
[10.11] When a unit moves from hex to hex, it expends Movement Points from its Movement Allowance based on the Terrain costs of each hexside it crosses and each hex that it enters. These Terrain costs are summarized on the Terrain Effects Chart (10.6). These costs are cumulative and no unit may enter a hex if it lacks the Movement Points to pay both the cost of crossing the entry hexside and the hex itself.
[10.12] Most of the hexes and hexsides on the map are Clear Terrain, i.e., devoid of any terrain symbols, and cost one Movement Point to enter (two Points for trucks). A Clear Terrain hexside has no effect on Movement since its crossing cost is zero. Hexsides which are covered by woods, hilltop or town symbols have no additional effect on Movement since the Movement cost has been built into the woods or hilltop hex itself. The only hexsides which affect Movement are stream, river, crest and slope hexsides.
[10.13] For Movement purposes all units are divided into three classes: trucks, other vehicles (including APCs) and footmobile units (those with a Movement Allowance of one), with terrain affecting the Movement of each class separately, according to the Terrain Effects Chart.
[10.14] Roads and Towns provide a unique exception to rule 10.11. When a unit moves in a path which coincides with the path of a road (or from town hex to town hex) we assume that the unit is benefitting from the road. Thus, when a unit enters a hex by traversing a hexside which is crossed by a road, the unit expends only the Terrain cost for crossing a road hexside (0.5 Movement Point), ignoring any other terrain on the hexside being crossed or in the hex being entered.
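The cost accumulation of 10.11 and the road exception of 10.14 can be sketched as follows (a Python illustration under my own simplification: each step of the path is a tuple of hexside cost, hex cost, and whether the step follows a road):

```python
def path_cost(steps):
    """Total Movement Point cost of a path (10.11 / 10.14 sketch).

    `steps` is a list of (hexside_cost, hex_cost, on_road) tuples,
    one per hex entered.  Normally hexside and hex costs are both
    paid and accumulate (10.11); when the move follows a road, only
    the 0.5 MP road-hexside cost is paid and all other terrain on
    the hexside and in the hex is ignored (10.14)."""
    total = 0.0
    for hexside_cost, hex_cost, on_road in steps:
        total += 0.5 if on_road else hexside_cost + hex_cost
    return total

# Two clear hexes off-road cost 2 MP; the same two hexes along a
# road cost only 1 MP (0.5 per road hexside crossed).
```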
[10.2] COVERING TERRAIN
[10.21] Terrain that affects Combat by reducing the Combat Results die roll number by the amount shown on the Terrain Effects Chart is covering terrain.
In general, woods or towns provide a -2 DRM. Being in a hilltop hex or behind a slope, crest, or railroad embankment is treated as being in defilade and provides a -2 DRM (EXC: when fired on from a higher elevation, 10.45). Vehicles in defilade are considered to be in a hull down position and may potentially use the Pull Back rule (6.92).
[10.22] Terrain has no effect on H Class/Tight pattern Fire against Hard Targets. Crest hexsides never affect any type of H Class fire.
[10.23] Some Terrain features have no effect on Combat. Those which do are divided into affecting hexes (woods, town, hilltop) and affecting hexsides (slope and crest hexsides).
Affecting hexes benefit defending units because the terrain in them gives a solid increase in protection or shelter to the defending unit. Affecting hexsides, on the other hand, provide a partial defilade to defending units. Thus, we can say that affecting hexes provide a constant benefit to units defending in them, regardless of the direction of incoming fire, while affecting hexsides are directional and provide benefits only if the incoming fire intersects them.
[10.24] Crest hexsides benefit a defending unit which is on either side of the hexside; slope hexsides are unidirectional and only benefit a unit which is on the slope-splashed hex (see 10.3).
[10.25] Terrain benefits are not cumulative. If a defending unit is in a hex in which it could benefit from two or more terrain features, it simply benefits from whichever terrain feature has a greater effect on Combat.
Example: a (Soft Target) unit is in a woods hex and is fired on through a common crest hexside by an M Class unit. The defending unit would benefit from either the crest hexside or the presence of the woods. However, if it were fired on by an H Class unit, it would benefit from the woods hex since the crest hexside doesn't help against H Fire.
[10.3] SLOPES AND CREST HEXSIDES
[10.31] If the line of fire against a friendly unit goes through an adjacent slope hex, running from higher terrain down to the defending unit’s lower terrain, the defending unit does not get a defensive advantage. Infantry units also receive no defensive advantage in this case.
[10.32] Protected vehicles and infantry receive a -2 terrain modifier for fire through a defending slope hex. Russian tanks only receive a -1 defensive advantage.
[10.33] For Indirect Fire, there is no defensive advantage by being behind a slope hex.
[10.36] SUEZ CANAL TERRAIN
1. No combat unit can enter an empty Suez Canal hex.
2. A trench counter is the only neutral counter that can be placed in an empty Suez Canal hex.
[10.361] CANAL HEX SIDES
1. The hex sides that run along both sides of the Suez Canal hexes are "moraine hex sides." These hex sides are identified by dark brown color bars (these are the only dark brown bars on board "A").
a. Every hex side that lies between a Suez Canal hex and a non-Suez-Canal hex is a moraine hex side. No other hex side is a moraine hex side.
b. The entire hex side is considered to be a moraine hex side even if the color bar does not extend to the ends of the hex side.
2. No unit may move across a moraine hexside.
3. LOS/LOF cannot be traced through a moraine hex side if both the attacking unit (or spotting unit) and target unit are at ground level.
a. Moraine hex sides block LOS/LOF even between units that are adjacent.
b. Moraine hex sides do not block LOS/LOF if either the attacker or the target is on a slope hex.
4. Fortification counters and improved position counters on east bank hexes have special effects on the moraine hexsides in their hex.
a. ALL units on a fortification counter in an east bank hex trace LOS/LOF as if the moraine hex sides in that hex were sand dune hex sides instead. Thus, LOS/LOF may be traced across the moraine hex sides that are part of that hex, but only to and from the units in that hex. The moraines remain impassable and the fortification has its normal effects on combat.
b. All non-infantry-class units on an improved position in an East Bank hex trace LOS/LOF as if the moraine hex sides in that hex were sand dune hex sides. Thus, LOS/LOF may be traced across the moraine hex sides that are part of that hex, but only to and from the non-infantry-class units in that hex. The moraines remain impassable and the improved position has its normal effects on combat.
5. An improved position counter on a WEST Bank hex transforms that hex into a slope hex for purposes of tracing LOS/LOF. Units in the hex can sight and be sighted across all moraine and sand dune hex sides.
a. All moraine hex sides remain impassable.
b. Only two infantry-class and one non-infantry-class units may occupy an improved position on the west bank. Wrecks do not count against this limit.
c. The improved position has its normal effects on combat.
[10.362] TRENCHES
Passageways are cut through the moraines to allow access to the water's edge of the canal. These passageways or "cuts" are made by bulldozing (Israeli practice) or pumping streams of water (Egyptian practice) through the moraines. In practice these cuts are made in pairs across the canal from each other, allowing access to the canal from both sides.
1. A trench counter in a Suez Canal hex indicates that at that point "cuts" have been made in the moraine on both sides of the canal.
a. In that hex, the two moraine hex sides that are directly opposite each other are transformed into sand dune hex sides for all purposes in the game. The two opposite hex sides are where the "cuts" are located; in that hex they are the two moraine hex sides that are parallel to the Suez Canal printed on the board.
b. Note that movement is possible across the transformed moraine hexside.
2. A trench counter in a Suez Canal hex allows a bridge counter to be placed in that hex. No other unit may be placed in a Suez hex containing only a trench.
3. Trench counters are placed during initial placement, as directed by the Situation being played. If the Situation indicates that the trenches are to be placed on the Suez Canal, then the trench counters must be placed on Suez Canal hexes; if the Situation does not indicate that the trenches belong on the Suez Canal, then the trenches cannot be placed on any Suez Canal hex.
a. Only one trench can be placed in a hex.
b. Trench counters must be placed during initial placement. They may never be placed during play.
c. Once placed, trench counters may not be moved nor removed.
[10.4] ELEVATION
In establishing LOS, Players should determine the height of the sighting unit, the height of the target unit, and whether or not the height of the terrain between the two units is sufficient to block the LOS. To establish this, imagine a line between the center of the attacking unit's hex and the center of the target unit's hex. If this line passes through a hex containing terrain that would block the LOS, then the unit does not see the target and the Player may not use that unit to fire at that particular target. NOTE: Usually it is obvious to the eye when the LOS is blocked. Sometimes, however, it may be necessary to lay a straight edge directly on the map to determine exactly what hexes the LOS passes through.
[10.41] Note that slopes, crests and railway embankments block LOS and should be considered level 1 obstructions. A sighting unit at level 2 should ignore slopes, crests and railway embankments in judging LOS (although they still affect Observation Range).
[10.42] Add 20 meters to the height of Town and Woods hexes for the purpose of determining Blocking Height but not Sighting Position.
Example: A unit in a town hex is at level 0; however, if an LOS is calculated through that hex, the hex is considered to be at level 1.
[10.43] In any given sighting situation, one unit will be at either a greater height or at the same height as the other. Any terrain between the two units that is higher than the terrain occupied by both units automatically blocks the LOS.
[10.44] When terrain between two units is higher than the lower unit, but the same height or lower than the higher unit, that terrain blocks the LOS only if it is closer (in hexes) to the lower unit than to the higher unit.
[10.45] When a unit fires at a target in defilade from a higher elevation than the target unit, the covering terrain is ignored.
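The blocking logic of 10.43 and 10.44 can be sketched as a single function (a Python illustration under my own encoding: heights are in levels, and the +1 level for town/woods hexes per 10.42 is assumed to have been applied by the caller):

```python
def los_blocked(h_sighter, h_target, terrain):
    """LOS check per the elevation rules of 10.4 (a simplification).

    `terrain` is a list of (height, dist_to_lower, dist_to_higher)
    tuples, one per intervening hex, where the distances in hexes
    are measured to the lower and higher of the two units.

    10.43: terrain higher than BOTH units always blocks the LOS.
    10.44: terrain higher than the lower unit but not higher than
    the higher unit blocks only if it is closer to the lower unit."""
    lo, hi = min(h_sighter, h_target), max(h_sighter, h_target)
    for height, d_lower, d_higher in terrain:
        if height > hi:
            return True                        # 10.43
        if height > lo and d_lower < d_higher:
            return True                        # 10.44
    return False
```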
[10.5] HULL DOWN (DEFILADE)
GENERAL RULE:
In addition to the natural terrain features that offer defilade protection, units can position themselves in defilade in any terrain by simply paying the terrain cost to enter defilade. Defilade entered in this manner is indicated by placing a defilade marker faced toward a specific hexside. The defilade benefit is gained only against fire through the defilade marker's facing hexside and the two adjacent hexsides.
[10.51] Each unit must independently enter defilade and each unit must be marked individually, even in the same hex. Vehicle units may always enter defilade by spending their entire Movement Phase in hex, even if they would normally not have enough movement points to do so. Units may not enter defilade in enemy occupied hexes.
[10.52] Infantry units in defilade receive defilade benefits in all directions. Thus the direction of the defilade marker on an infantry unit is not significant.
[10.53] Units in defilade do not receive defilade benefits from attacks by air units.
[10.54] Russian tanks (T34/85, T-10, T-55, Tiran/TI-67 and T-62) only receive a -1 to the die roll.
[10.55] **Note:** The original inherent ability to enter defilade was primarily based on European terrain. For Middle East AIW scenarios, dune hexsides indicate potential defilade hexes. More rugged terrain offers more opportunities for defilade. Thus, the ability to self-position into defilade is generally only allowed by specific scenario description, as local terrain was widely variable.
[10.6] TERRAIN EFFECTS CHART
(See separate sheet.)
[11.0] IMPROVED POSITIONS [ENTRENCHMENTS]
GENERAL RULE:
If in the initial deployment, a player is told to place his personnel units in improved position, all personnel units are said to be in Improved Positions. Such units benefit from the improved position so long as they remain in that hex. If a unit moves from its initial deployment hex it is no longer in improved position and may no longer assume that state. Players must keep track of which personnel units have moved (i.e., left their improved positions).
CASES:
[11.1] WHO CAN USE IMPROVED POSITIONS
Only dismounted personnel and guns may benefit from improved positions. The presence of vehicles has no effect on Improved Positions, nor do vehicles benefit from Improved Positions.
[11.2] BENEFITS OF IMPROVED POSITIONS
A unit in an Improved Position that neither moves nor fires may be observed only by an adjacent Enemy unit. If fired upon during the Direct Fire Phase, a unit in an Improved Position benefits as though it were in defilade (see 10.21). If the unit is already in defilade, it gains no further defense benefits from the Improved Position. If fired on during the Indirect Fire Phase, a unit in an Improved Position is treated as a Hard Target Type (and can button up per 7.3).
[11.3] DEPLOYMENT OF IMPROVED POSITIONS
Improved Positions may be deployed only at the start of a Scenario as per the scenario instructions. They may not be constructed during play. Note that there is a difference between personnel initially placed in improved positions, which are removed once the units leave the improved position, and improved positions designated by scenario that remain in the hex.
[12.0] OVERRUNS
GENERAL RULE:
During the Movement Phase, a Player may move Friendly units into a hex containing Enemy units at no additional movement cost. When he does so, all other non-overrun action is suspended and an Overrun Firefight is conducted according to the Overrun Procedure.
PROCEDURE:
To Overrun, a Player moves his units into a hex containing an Enemy unit or units. The overrunning units may have begun their movement in the same or different hexes. The assaulting units perform the normal Panic-Move checks at the beginning of their movement, but there is no determination of Panic-Fire during Overrun.
Each unit being overrun gets one Opportunity approach fire, if eligible, at any one of the assaulting units before beginning the assault sequence.
The range for the assault sequence is 0 hexes. The Players roll for Fire Initiative: each player rolls 1D6 and adds 1/10th of his percentage Panic level (i.e., 1-5, and possibly 0) to the roll; the lower adjusted roll wins. On a tied roll the side with the lower panic level wins; if panic levels are also tied, roll again. Fire is then alternated until all units in the Overrun hex have fired once. Alternating fire is by individual unit, not by formation. Note that tanks may choose to fire their MMGs instead of an M class attack (see [6.44]). Infantry against armor may use [6.45] or [6.10].
Example: Israeli units with a panic level of 10% (+1 DRM) are engaged in an Overrun firefight with a Syrian unit with a panic level of 30% (+3 DRM). The Israelis roll a 4, the Syrians roll a 2. After the DRMs the adjusted rolls are both 5. Israelis have lower panic level, so they fire first.
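For players who want to automate the bookkeeping, the initiative roll above can be sketched in Python. This is an illustrative sketch only, not part of the rules; the function name and return values are invented for clarity.

```python
def overrun_initiative(panic_a_pct, panic_b_pct, roll_a, roll_b):
    """Return 'A', 'B', or 'reroll' for who fires first in an Overrun.

    Each side adds one-tenth of its Panic Level (a percentage) to its
    1d6 roll; the LOWER adjusted roll wins initiative. Ties go to the
    side with the lower Panic Level; if those are also equal, reroll.
    """
    adj_a = roll_a + panic_a_pct // 10
    adj_b = roll_b + panic_b_pct // 10
    if adj_a != adj_b:
        return 'A' if adj_a < adj_b else 'B'
    if panic_a_pct != panic_b_pct:
        return 'A' if panic_a_pct < panic_b_pct else 'B'
    return 'reroll'
```

Applied to the example above, the Israelis (10% Panic, rolled 4) and Syrians (30% Panic, rolled 2) both adjust to 5, and the Israelis win on the lower Panic Level.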
Once an Overrun situation is established by a unit or stack, other units declared to be in the overrun (e.g. multiple units, see [6.3]) may immediately attempt to move into the overrun hex. The stacking limit for each side is observed during the Overrun. If attacking units are eliminated, additional units declared in the overrun may enter the hex on the next round. The fire initiative won in the first round carries forward into subsequent rounds. Due to stacking, it is possible that not all declared overrunning units will get to participate. Overrunning units in different company formations receive a -1 modifier on the Panic-Move chart.
Units moving into an overrun hex may be fired upon by Overwatch/Opportunity fire from other eligible enemy units, but not from those in the overrun hex itself, since the defenders get their own single approach fire. Overwatch/Opportunity fire cannot be made against the hexes adjacent to the overrun hex. Before overrun combat, each unit in the overrun hex may take one Opportunity fire at an overrunning unit at one-hex range if allowed (e.g. non-Panicked, non-fired, etc.). Note the reduced attack strengths for suppressed units. Defending units get only this one Opportunity Fire as their normal action for the turn; the subsequent close assault rounds are separate. Attacking units suffering a Suppression or Panic result are returned to their adjacent hex. Defending units that are Panicked during Opportunity fire retain their Panicked status during the ensuing assault.
Note: Players may place their assaulting stacks adjacent to the defenders, but a single stack must begin the assault. If stacking limits in the defending hex allow, additional adjacent units may participate in additional rounds. However, at any point, there may never be more than three assaulting units participating in an assault round. If the defenders are eliminated, assaulting units that could not participate remain in their adjacent hexes.
If an Overrunning unit and at least one Enemy unit survive the fight, the Overrunning Player may either (1) retreat his Overrunning unit back out of the hex at no additional movement cost, or (2) leave his unit in the hex. If the attacker chooses to leave his unit in the hex, (3) the defender may choose to withdraw. (4) If the defender chooses to remain in the hex, another Overrun Firefight must take place immediately, repeating the same procedure.
If the units being Overrun choose to withdraw, they may withdraw into any legal terrain hex not occupied by enemy units and not in violation of friendly stacking limits. If no such hex exists, they may not withdraw. Panicked units may not withdraw. Overrun units withdrawing undergo one withdrawal fire from the Overrunning units without being able to return fire. If the Overrunning units have sufficient movement left they may continue Overrun into an adjacent hex.
Inevitably, the Overrun hex will be vacated: by the departure of the Overrunning units, by their elimination, by the withdrawal of the Overrun units, or by their destruction. Any time assaulting units receive a Panic result, they are removed to their initial adjacent hex.
A unit eligible to move in the current Game Turn that is attacked in Overrun, and survives, may still move, unless it has withdrawn.
[12.1] EFFECT OF TERRAIN ON OVERRUNS
In an Overrun Firefight, terrain is completely ignored. (Exception: A unit in an Improved Position still benefits as though it were in defilade and infantry in woods/town hexes benefit from the -2 DRM; otherwise, the prior positions of the engaged units, the presence of woods, towns, smoke, movement, etc., are ignored).
[12.2] EFFECT OF PRIOR FIRE ON OVERRUNS
An Overrun is a special event. The units engaged in an Overrun situation are not affected by whether or not they have fired previously during the Game Turn. A unit could conceivably fire during the Direct Fire Phase or Movement Phase and still defend with fire during an Overrun.
[12.3] EFFECTS OF OPPORTUNITY FIRE ON OVERRUN
A unit must be moving to conduct an Overrun. It may therefore trigger Opportunity Fire on itself from Enemy units. Such Opportunity Fire is resolved before implementing the Overrun procedure.
[12.4] EFFECTS OF PANIC
There is no Panic Fire during an Overrun. A unit that Panic-Moves may not Overrun. Panicked units that are Overrun may fire.
[12.5] At the end of a Game Turn, no units from opposing sides should be in the same hex.
[13.0] PANIC
(Troop Quality)
The MechWar ’78 Panic mechanism serves as the MechWar 2 Troop Quality concept. (It has been retained for now to avoid rewriting a good portion of these rules and associated tables.) MechWar 2 applies fire modifiers based on Troop Quality; here the same effect appears in a unit’s Panic-related fire performance: a poor-quality unit will sometimes be unable to fire, while units with better Panic Levels perform reliably. The Panic Level can also be combined with the MechWar 2 Morale concept ([18.0]; see the Morale section, [19.7]).
GENERAL RULE:
On every Game Turn the units in a Player’s force are exposed to Panic. That is, the Player may lose the ability to control a percentage of his units on every Game Turn. The effects of Panic are meant to simulate the real effects on the battlefield of communications problems, misunderstood orders, human error and, sometimes, just plain physical fear, which result in units not doing what the command (the Player) has ordered. Panic is assessed each time a unit attempts to move or fire.
IMPLEMENTATION:
If using the Advanced Morale rules ([19.7]), changes in individual companies’ Panic Levels affect the higher level headquarters. In this case, a marker – or notation on the players’ unit status sheet – can be used to indicate the company’s current Panic Level. (See [19.1], PROCEDURE.) The implementation of the Morale rules would require an additional counter/notation of the change in a company’s Panic Level.
PROCEDURE:
Each Player is given a Panic Level in each Scenario corresponding to Troop Quality in the MW2 Available Forces Chart. The following Panic Levels are assigned to each MW2 Troop Quality designation.
| Troop Quality | Panic Level |
|---------------|-------------|
| Seasoned | 0% |
| First Line | 10% |
| Second Line | 20% |
| Reserve | 30% |
| Green | 40% |
The following Panic Levels are assigned to each AIW Morale Level.
| Morale Level | Panic Level |
|--------------|-------------|
| A | 20% |
| B | 30% |
| C | 50% |
| D | 70% |
Immediately before firing or moving any unit, the Player must check for panic for that unit by rolling 1d10 and cross-indexing the current strength of the unit on the appropriate column of the Panic Table. If the number rolled is not one of the specified numbers, the unit may function normally. If the die roll is a specified result, the unit panics. Place a Panic Marker on a unit when it panics in any Phase.
Panic Markers are removed during the Panic and Suppression Removal Phase of each Game Turn by rolling 1d10 on the Panic Table for each unit affected and referring to the appropriate column. HQs - TBD
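The per-unit check can be sketched as follows. Note the assumption loudly: the actual Panic Table (which cross-indexes unit strength) is not reproduced here, so this sketch simply treats a unit as panicking when its 1d10 roll falls at or below one-tenth of its Panic Level (so a 20% Panic Level panics on a 1 or 2). This is an illustration, not the published table.

```python
import random

def panic_check(panic_level_pct, rng=random):
    """Roll 1d10 for a unit about to move or fire.

    Simplifying assumption: the unit panics when the 1d10 roll is at or
    below one-tenth of its Panic Level; the real Panic Table also
    cross-indexes the unit's current strength.
    """
    roll = rng.randint(1, 10)
    return roll <= panic_level_pct // 10  # True = place a Panic Marker
```

A unit marked this way neither moves nor fires until the marker is removed in the Panic and Suppression Removal Phase.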
CASES:
[13.1] PANIC MOVE/FIRE
A unit that panics while attempting to move or fire may neither move nor fire in that Game Turn; it is in a state of panic.
[13.11] A unit that panics while attempting to move or fire and fails to remove the panic during the Panic and Suppression Removal Phase of that Game Turn must remain panicked during all succeeding Game Turns until the Panic Marker is eliminated (see 13.2).
[13.12] A unit that panic-moves while attempting an Overrun may not overrun. Panic is determined for the attacking or defending units prior to resolving the Overrun (see 12.4).
[13.13] Panic is determined for each individual unit separately each time it attempts to move or fire (Exception: see Overrun, 12.0).
[14.0] OFF-BOARD ARTILLERY
GENERAL RULE:
In most Scenarios, both Players are given an Off-Board Artillery capability, which simulates the availability of artillery, rocket, or mortar batteries located somewhere off the map, firing at Enemy targets located on the map.
PROCEDURE:
Off-Board Artillery is available in terms of batteries of light, medium, or heavy howitzers. These correspond to MechWar 4 concentrations of H Attack Strength Points [abbreviated to read: OFBDA 3(7H)]. (See [7.0] INDIRECT FIRE for the correspondence of MechWar 2 artillery designations with MechWar 4 concentrations.)
CASES:
[14.1] USE OF OFF-BOARD ARTILLERY
[14.11] Off-Board Artillery Fire is always Indirect Fire. It is H Class Fire, and the Firing Player should indicate a Tight or Loose pattern. As H Class Fire, it is subject to Scatter (see 7.17).
[14.12] Having a concentration of 7H Points is exactly the same as having a unit with an Attack Strength of 7H Points, except that the artillery is Off-Board. Each concentration may be used once each Game Turn.
[14.13] Off-Board Artillery can reach any target on the map (unlimited range) unless limited by scenario, but it may not be fired at an Unspotted target hex unless specifically permitted in the Scenario Instructions.
[14.14] Off-Board Artillery Fire is executed during the final step of the Indirect Fire Phase of the Game Turn, and impacts (after target-hex Scatter) during that Phase. The execution of Off-Board Artillery follows the PROCEDURE section of [7.1] AVAILABILITY AND CAPABILITIES.
[14.2] OFF-BOARD ARTILLERY RESTRICTIONS
[14.21] The Points in a concentration may not be apportioned against several targets, just as the fire of an individual unit may not be apportioned against several targets in a single Game Turn.
[14.22] Assuming a Player has more than one concentration available, he may fire several concentrations at the same target hex or at different target hexes on the same Game Turn, but each concentration is treated as a separate attack.
[14.23] Off-Board Artillery may not be used for counterbattery fire. (But see [19.2].)
[14.24] Off-Board Artillery may not be fired at.
[14.3] U.S. OFF-BOARD ARTILLERY
(Optional) These rules are from Mech War '77 and may be used optionally in MechWar 4.
[14.31] The U.S. Player (in his role as a battalion or task force commander) could theoretically look for the support of up to eighteen batteries if his need was great enough and the target lucrative enough. Assigning an H Attack Strength of 7 to either a 6-gun 155mm battery or a 4-gun 8-inch battery, this would give him as many as 18(7H) concentrations when necessary.
[14.32] Each Scenario will state the minimum number of 7 H Attack Strength Point concentrations the American Player will have on each Game Turn. It will also state the number of turns on which he is allowed to apply additional multiples of this minimum. The exact multiples will be determined randomly through use of the die.
[14.33] Before plotting his Indirect Fire ([7.11]), the American Player may attempt to multiply his Off-Board Artillery support. He informs the Soviet Player of this desire and proceeds to roll the die, concealing the result from the Soviet Player. Whatever number he has rolled represents the multiple of Off-Board Artillery fires he is allowed to plot for arrival in the next turn’s Indirect Fire phase.
Example: In a scenario, the American Player is given a minimum of 3x7H concentrations (medium howitzers) per turn. He is permitted to apply multiples on any three Game Turns of his choice. Assume that on Game Turn 2 he decides to attempt a multiple. He rolls the die, with a result of three. He can now plot (3x3) 9(7H) concentrations for impact in the Game Turn 3 Indirect Fire Phase. The die roll is not revealed until the beginning of the Game Turn 3 Indirect Fire Phase.
[14.34] The scenario would normally state on how many Game Turns the American is allowed to multiply his Off-Board Artillery. He rolls the die once each turn that he decides to plot a multiple fire. A result of 1 means that he has failed to multiply his artillery, but still counts as the use of one of his multiples.
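The multiple attempt in [14.33]-[14.34] is a single die roll with one special case, sketched below. This is an illustrative sketch only; the function name and tuple return are invented.

```python
import random

def us_artillery_multiple(base_concentrations, rng=random):
    """Roll for the American Off-Board Artillery multiple.

    The die result is the multiple of the per-turn minimum that may be
    plotted for next turn's Indirect Fire Phase. A roll of 1 means the
    attempt failed (only the minimum arrives) but still expends one of
    the scenario's allowed multiple attempts.
    Returns (concentrations_plotted, attempt_failed).
    """
    roll = rng.randint(1, 6)
    return roll * base_concentrations, roll == 1
```

With a base of 3 concentrations and a roll of 3, this yields the 9(7H) of the example above.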
[15.0] CLOSE AIR SUPPORT
GENERAL RULE:
Either Player is sometimes given Close Air Support. This is given in terms of strikes. Each strike is a certain weight of H attack Strength Points corresponding to the H factor of the MechWar 4 aircraft. Each strike is applied just as though it were an Off-Board Artillery concentration, except for a different Scatter pattern.
PROCEDURE:
During the Indirect Fire Phase, the appropriate Player allocates a Close Air Support Strike to a target hex. Scatter is then implemented, the actual impact hex of each strike determined, and the strike executed just as though it were an Indirect Fire Attack.
CASES:
[15.1] CLOSE AIR SUPPORT SCATTER
A Close Air Support Strike scatters in the following fashion: Roll the die. If the roll is a one or two, the strike impacts on the target hex. If the roll is a three, four, five or six, the strike scatters, in which case the die is rolled once more to determine the direction of Scatter (see 7.17). The strike, if it scatters, scatters one hex.
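The two-roll scatter procedure can be sketched as follows (an illustrative sketch, not part of the rules; the function name is invented):

```python
import random

def cas_scatter(rng=random):
    """Resolve Close Air Support Scatter ([15.1]).

    Returns None if the strike lands on the target hex (roll of 1-2);
    otherwise returns a direction 1-6, meaning the strike scatters one
    hex in that direction (per the Scatter diagram in 7.17).
    """
    if rng.randint(1, 6) <= 2:
        return None            # on target
    return rng.randint(1, 6)   # scatters one hex in this direction
```

Note the difference from artillery scatter: a CAS strike that scatters always moves exactly one hex.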
[15.2] APPLICATION OF CLOSE AIR SUPPORT
[15.21] A Close Air Support Strike is always a Tight pattern H Attack affecting only the hex it impacts on.
[15.22] Each strike must be used as single unitary value. A single Strike may not be apportioned against several target hexes.
[15.23] The weight of each Close Air Support Strike will be given in the Order of Battle. For example, CAS 3(5H) means Close Air Support available is three strikes each of 5 H Attack Strength Points.
[15.24] Armored vehicles may not button up when being attacked by Close Air Support; they are attacked on the +8 column of the D2 CRT.
[15.3] ADVANCED (MechWar 2) CLOSE AIR SUPPORT
[15.31] Rather than treating air support as indirect fire attacks, the MechWar 4 advanced rules have support aircraft actually flying across the map similar to that for Helicopters ([17.0]). Aircraft movement takes place during the Friendly Indirect Fire Phase. Aircraft may appear on any mapedge, but if appearing at a mapedge for which Off-Map enemy air defenses are listed in the scenario, it must first be attacked by those defenses. Aircraft enter the map one unit at a time, perform their strikes, and fly off the map.
[15.32] Aircraft must be at either high or low altitude, indicated by flipping the aircraft counters to either their high or low side. Altitude affects the offensive and defensive capabilities as well as the types of strike that can be performed.
[15.33] Aircraft must fly a certain number of hexes straight forward before turning one hexside to the right or left. This is the aircraft’s “turn mode” noted on the front of the aircraft symbol.
[15.34] After performing their strikes, aircraft must exit the map. They may exit any map edge, but may be attacked by off-map air defense systems when doing so.
[15.35] Aircraft may perform multiple strikes, but may only fire a single weapon system at a hex in a single attack. They must circle around if they wish to attack the hex again. They may continue to attack until their ammunition is expended.
PROCEDURE:
[15.36] Airstrikes must be plotted three turns in advance. If a bombing strike is to be performed, the target hex must be designated. The scenarios will list the number of available strikes and the types of aircraft available.
[15.4] BOMBING STRIKE
[15.41] To perform a bombing strike, the aircraft must fly directly over the target hex. Bombing strikes do not scatter. Flying at high or low altitude determines the effectiveness of the strike. Aircraft may perform only one bombing strike per game turn. While moving to or from the target hex, the aircraft may perform strafing attacks. The aircraft uses its H factor to attack the hex as an indirect fire attack.
[15.42] Smart Bombs: Aircraft equipped with Smart Bombs may attack from 10 hexes away from the target hex as long as the target is within the aircraft’s forward arc. The aircraft must attack from high altitude using their H factor.
[15.43] If attempting to destroy a bridge or ferry, a separate 1d10 die roll is made, and on a result of “1” the bridge or ferry is destroyed.
[15.44] Radar Assisted Bombing Forward Air Control (RABFAC): In scenarios with U.S. Marines, an LVTP7 unit may be designated to direct an F4 air unit’s bombing or strafing attack on a moving vehicle. The Marine LVTP7 must have an unblocked line of sight to the target unit and must be predesignated at the start of the scenario.
[15.45] The attack may not be made in blizzard conditions but may be made in rain, fog, or falling snow.
[15.5] STRAFING STRIKE
[15.51] A strafing strike is treated as a direct fire using the aircraft’s “M” factor. Follow the direct fire procedure based on whether the target is a hard or soft target.
[15.52] A strafing attack may only be made at low altitude. The target must be in the aircraft’s forward arc and be exactly three hexes from the aircraft.
[15.6] AIR-SURFACE-MISSILES (ASMs)
[15.61] Aircraft may attack vehicle units with air-to-surface missiles (ASMs). Infantry may not be targeted with ASMs.
[15.62] ASMs have unlimited range and may be fired at enemy vehicles for which the aircraft has an LOS within observation range. ASMs attack with a 15M attack factor with no range attenuation. Aircraft may fire from either high or low altitude as long as the LOS restrictions are observed.
[15.63] The aircraft weapons load is either specified in the scenario, or is left up to the player. The aircraft weapons load can be specified as follows:
(A) Conventional bombs
(B) Smart bombs
(C) Conventional air-to-surface missiles *
(D) Maverick air-to-surface missiles (NATO Player only) *
* If conventional ASMs are carried, the Player should use the ASM ammo depletion number to the left of the slash on the aircraft’s data sheet; if carrying Maverick ASMs, use the ASM ammo depletion number to the right of the slash.
[15.7] AIRCRAFT TARGET ACQUISITION
[15.71] To perform strafing or missile attacks on enemy units, the aircraft must first acquire the individual targets. This is accomplished by keeping the target continuously in its forward arc while the aircraft moves a minimum of five hexes. The acquisition attempt may not be made for targets more than 15 hexes away. Aircraft must be at high altitude in order to attempt to acquire a target.
[15.72] After successfully performing an acquisition maneuver, the player rolls 1d6; if the value is greater than the (positive) Loss Modification value of the terrain occupied by the target, the target has been successfully acquired. The terrain Loss Modification table has separate columns for vehicle and infantry targets. Aircraft may continually attempt to acquire a target. Ground units which fire any air defense systems are automatically acquired by any aircraft within 15 hexes at high altitude.
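The acquisition roll in [15.72] reduces to a comparison against the terrain value, sketched below. This is illustrative only; the function name and parameter names are invented, and the automatic-acquisition rule for firing air defense units is folded in as a flag.

```python
import random

def acquire_target(loss_modification, fired_air_defense=False, rng=random):
    """Attempt aircraft target acquisition.

    Succeeds when 1d6 exceeds the (positive) terrain Loss Modification
    value for the target type. Units that fired any air defense system
    are acquired automatically (within 15 hexes, at high altitude).
    """
    if fired_air_defense:
        return True
    return rng.randint(1, 6) > loss_modification
```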
[15.73] Aircraft must always separately acquire their targets even if the targets are in LOS of friendly units. Once acquired, they remain acquired for the remainder of the Phase. Inverted units are not flipped over, but are revealed as either infantry or vehicles. The inverted acquired targets can be marked with a counter or noted separately as having been acquired.
[15.8] AIR DEFENSE SYSTEMS
[15.81] Air defense systems may fire at enemy aircraft that have been tracked for the requisite number of Tracking Hexes for the specific air defense unit, as long as the aircraft has remained within the air defense system’s Tracking Range, as indicated on the Air Defense Combat Results Table. A missile system must have tracked an aircraft for 15 hexes before it can fire, and can fire again for every further 15 hexes the aircraft is tracked within its Tracking Range. All gun air defense systems can fire for every 5 hexes the aircraft is within Tracking Range. Air defense systems are subject to ammo depletion.
[15.82] Vehicles that have continuously tracked aircraft within their 5 hex tracking range may fire at enemy aircraft at low altitude that fly over or adjacent to the firing unit with an attack strength of 2. This attack strength is used in the same manner as the missile attack strengths.
[15.83] Ground units must have a Line of Sight to the defending unit in order to attack. Units in heavy woods hexes may not fire their air defense systems.
PROCEDURE:
Once the aircraft has been tracked for the required time and distance, the air defense system may fire, using the Air Defense Combat Results Table. Cross-index the range of the target aircraft with the attacking missile system to determine the attack strength. Roll 1d6 and modify the result by the aircraft’s defense die roll modifier (the left-hand number before the slash). If the number is less than or equal to the attack strength on the Air Defense CRT, the aircraft has been hit. Roll a second time and compare the result with the aircraft’s Loss Modification number (the number to the right of the slash). If the die roll is less than or equal to the aircraft’s Loss Modification number, the aircraft has survived and there is no further effect. If the aircraft fails the die roll, it is eliminated and removed from the game.
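The two-roll resolution above can be sketched as follows. Illustrative only: the function name is invented, and the defense die roll modifier is assumed here to be added to the hit roll (the rules do not state the sign explicitly).

```python
import random

def air_defense_fire(attack_strength, defense_drm, loss_mod, rng=random):
    """Resolve air defense fire per the PROCEDURE above.

    Hit roll: 1d6 plus the aircraft's defense die roll modifier
    (assumed additive); a modified result <= attack_strength is a hit.
    Survival roll: a second 1d6 <= the aircraft's Loss Modification
    number means it survives; otherwise it is eliminated.
    Returns 'miss', 'survived', or 'eliminated'.
    """
    if rng.randint(1, 6) + defense_drm > attack_strength:
        return 'miss'
    if rng.randint(1, 6) <= loss_mod:
        return 'survived'
    return 'eliminated'
```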
[16.0] MINES
GENERAL RULE:
In certain Scenarios, one Player or the other is allowed to deploy mines in order to impede the movement of units and to inflict damage on units. When a Player has deployed mines in a hex, it is called a **mined hex**. A mined hex is presumed to contain both anti-vehicular mines and antipersonnel mines. There are three types of mined hexes: hasty, preventive, and defensive, corresponding to a rising density of mines within the hex and an increasing probability of inflicting damage.
PROCEDURE:
The Scenario will state which Player has mines to deploy in terms of a number of mined hexes and the type of mined hexes. This Player, while both Players are deploying and setting-up their regular units, shall select which hexes on the map he deems to be mined. He shall secretly note the numbers of the mined hexes and type of mined hexes. Thereafter, in the course of play, whenever a unit (from either side) enters or leaves a mined hex, an immediate Mine Attack shall be executed against that unit, any result applied immediately, and a Mined Hex Marker is placed in that hex.
CASES:
[16.1] MINE ATTACKS
[16.11] A Mine Attack is executed against any unit, no matter what its Defense Strength or Target Type, just as though the unit were fired upon by a weapon. Mines have a certain Net Attack Superiority on the Combat Results Table, according to the type of Minefield, regardless of the Type of unit in the hex.
1. Hasty Mined Hex: Attacks a unit at -2 Net Attack Superiority.
2. Preventive Mined Hex: Attacks at +1 Net Attack Superiority.
3. Defensive Mined Hex: Attacks at +7 Net Attack Superiority.
[16.12] All considerations of Terrain, Defense Strength and Target Type are ignored when executing a Mine Attack. The Player whose mined hex it is simply announces that a unit is attempting to enter or attempting to leave a mined hex, rolls the die, and reads the result from the appropriate column of the Combat Results Table. This means that the strongest and weakest units are equally vulnerable to mines.
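Because terrain, Defense Strength, and Target Type are all ignored, a Mine Attack reduces to a table lookup by minefield type. A minimal sketch (names invented; the Combat Results Table itself is not reproduced here):

```python
# Net Attack Superiority column by mined-hex type ([16.11])
MINE_SUPERIORITY = {
    'hasty': -2,
    'preventive': 1,
    'defensive': 7,
}

def mine_attack_column(hex_type):
    """Return the CRT column (Net Attack Superiority) for a Mine Attack.

    Per [16.12], the column depends only on the minefield type; the
    owning Player then rolls the die on that column of the CRT.
    """
    return MINE_SUPERIORITY[hex_type]
```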
[16.13] Whenever a unit enters a mined hex, it must immediately cease all further movement in that Game Turn, regardless of whether the Mine Attack successfully affects it. It must cease movement within the mined hex. (Exception: see 16.15 for treatment of Overruns.)
[16.14] Whenever a Player desires a unit to leave a mined hex, he announces this fact and a Mine Attack is executed on the unit. No matter what the result of the attack, the unit is permitted to exit the mined hex. Any Panic or Suppression Results are assessed in a hex adjacent to the mined hex, which the Owning Player moves the unit to.
[16.15] Whenever a unit **Overruns** through a mined hex, it undergoes a mine attack when it enters the hex and again when it leaves the hex.
[16.16] Mines attack Friend and Foe alike. In his initial deployment, a Player may elect to place Friendly units in mined hexes. If and when he chooses to move those units out of the mined hexes, they must suffer Mine Attacks.
[16.17] Breached Minefields
Vehicles equipped with mine plows or engineer units may create breached minefields that partially negate the minefield effects.
[16.18] Vehicular units in column pay an additional four movement points to enter a breached minefield. There is no additional cost for infantry units in column to enter the hex.
[16.19] Units in column may remain in a breached minefield hex without being attacked by it and may exit without additional attacks.
[16.2] Mineplows
Certain vehicles, as specified by scenarios, can breach minefields. The vehicle enters the minefield in column without undergoing a minefield attack. The vehicle must stop in the minefield hex. At the end of the Phase, the minefield counter is replaced by a breached minefield counter.
[16.21] After breaching a minefield, the breaching player must roll one die and consult the Mine Plow Damage Table ([19.52]). If the indicated number is rolled, the mine plow is destroyed. Destroyed mineplows may not be used for the rest of the game. **Note:** Only Soviet ROD units have more than one mineplow per platoon.
[17.0] HELICOPTERS
GENERAL RULE:
Helicopters are exceptional types of units with unusual rules regarding their Combat and Movement. VH is a generic term describing any helicopters. Other than their special characteristics, they are generally treated as ground units.
CASES:
[17.1] MOVEMENT
[17.11] All VH units ignore all Terrain costs when moving. A VH unit expends one Movement Point from its Movement Allowance for each hex that it enters, regardless of the terrain in the hexside crossed or the terrain in the hex entered.
[17.12] A VH unit may freely enter and exit a hex containing any other unit(s) Friendly or Enemy except a hex containing another VH unit. They may stack with ground units at the end of a Game Turn. By the same token, ground units may ignore the presence of helicopters for Movement and Stacking purposes.
[17.13] Helicopter units may be of two types: those bearing a transport designation may be used to transport infantry and engineer units. Attack units may not transport.
[17.14] VH units are presumed always to be in the air, except a transport helicopter in the act of Mounting or Dismounting. Such a unit is presumed to be on the ground; therefore, a transport unit may not Mount or Dismount a unit which is in a woods or town hex.
[17.2] COMBAT
[17.21] VH units may attack Enemy units using their respective weapons, according to the normal Combat Rules.
[17.22] For purposes of firing at an Enemy unit, a VH unit is considered capable of elevating itself (low) to a height which allows it to see over woods hexes, town hexes, and slope and crest hexsides which would normally block Line of Sight/Line of Fire. Thus, when the Enemy target unit is in clear terrain, the VH is exempt from Line of Sight restrictions. A target which is located within a town or woods hex such that the VH unit’s LOS passes through an adjacent woods or town hexside cannot be fired at by a VH unit.
[17.23] A VH unit can always fire at a unit which it is stacked on top of or adjacent to. When firing at a unit it is stacked with, the range is considered to be one hex.
[17.24] A unit which is being fired at by a VH unit (low) loses any benefit for being behind a slope or crest hex side and retains any benefit for being in a woods or town hex.
[17.25] VH units are subject to Ammunition Depletion in the same fashion that G Class units are (see 6.84).
[17.3] FLAK UNITS AND ANTI-HELICOPTER FIRE
Helicopters are subject to being fired at according to the following special rules. This is in effect a special combat relationship, except that the normal Combat Results Table is used to determine the outcome of anti-helicopter fire.
[17.31] FLAK STRENGTH / RANGE ATTENUATION TABLE
Attack Strength by range in hexes (0 = may not fire at that range):
| Unit | 0-2 | 3-5 | 6-10 | 11-20 |
|---------|-----|-----|------|-------|
| Z23 | 16 | 14 | 12 | 10 |
| Vulcan | 14 | 11 | 9 | 0 |
| T55 | 8 | 6 | 0 | 0 |
| T62 | 10 | 8 | 0 | 0 |
| M60 | 12 | 9 | 0 | 0 |
| Inf Co | 12 | 10 | 0 | 0 |
| Inf Plt | 8 | 6 | 0 | 0 |
| Others | 8 | 0 | 0 | 0 |
Simply establish the range between the firing unit and the target VH unit. Cross reference the range with the identity of the Firing unit and read the Attack Strength of the Firing unit.
After establishing the Attack Strength of the Firing unit, simply subtract the Defense Strength of the Target VH to establish the Net Attack Superiority. Roll the die. Do not adjust for terrain.
Helicopter units can be attacked by all types of units, except G Class.
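The anti-helicopter fire calculation is a table lookup followed by a subtraction, sketched below using the values from [17.31]. Illustrative only; the function and table names are invented.

```python
# Attack Strength by range band, from the [17.31] table
# (band order: 0-2, 3-5, 6-10, 11-20 hexes; 0 = may not fire)
FLAK_TABLE = {
    'Z23':     (16, 14, 12, 10),
    'Vulcan':  (14, 11,  9,  0),
    'T55':     ( 8,  6,  0,  0),
    'T62':     (10,  8,  0,  0),
    'M60':     (12,  9,  0,  0),
    'Inf Co':  (12, 10,  0,  0),
    'Inf Plt': ( 8,  6,  0,  0),
    'Others':  ( 8,  0,  0,  0),
}

def flak_net_superiority(unit, range_hexes, vh_defense):
    """Net Attack Superiority of anti-helicopter fire.

    Attack Strength at the given range minus the target VH unit's
    Defense Strength; terrain is never adjusted for. Returns None if
    the unit cannot fire at that range.
    """
    bands = [(0, 2), (3, 5), (6, 10), (11, 20)]
    for i, (lo, hi) in enumerate(bands):
        if lo <= range_hexes <= hi:
            strength = FLAK_TABLE[unit][i]
            return strength - vh_defense if strength else None
    return None  # beyond 20 hexes
```

The result is then used as the column on the normal Combat Results Table.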
[17.32] Effect of Terrain
A VH unit never receives a die roll benefit for terrain. However, a VH unit which is itself not firing may not be fired at if the LOS passes through any blocking terrain, unless the Firing unit is adjacent to the VH. Conversely, if a VH unit is firing it is presumed to have elevated itself above blocking terrain, thus exposing itself in turn to fire. See Pull Back, 6.92.
[17.33] Flak Units
The Soviets have the Z23 flak unit (Gun-1 Class); in addition to its obvious role in attacking helicopters, it may be used to attack ground targets. The U.S. has the similar Vulcan, which was used more for ground support than for anti-aircraft defense.
[17.34] Effect of Combat Results on VH Units
In assessing Combat Results of an attack on a VH unit, any Panic prime results are ignored. A VH unit is only affected by a D1, D2 or D3 Result.
[18.0] SMOKE
GENERAL RULE:
All H Class units which are capable of Indirect Fire, and Off-Board Artillery, are capable of firing Smoke instead of explosives. Firing Smoke is handled just as though the Player was firing HE, except that he adds an S notation to his Fire Plot. A unit firing Smoke may perform no other Task that Game Turn.
PROCEDURE:
1. The Player allocates his Smoke Fire to a specific target hex. It is treated as Tight pattern Indirect Fire, which means it may or may not Scatter to a different Impact Hex. The Player places a Smoke marker on this hex. The marker remains on the map, marking the Smoke hex until the beginning of the next Indirect Fire Phase, when it is removed.
2. Certain units are identified as smoke units on the Unit Function Table. Such a unit can place a smoke counter in the unit’s own hex. This takes the place of an action during a friendly fire/movement phase—the unit expends its entire movement allowance without moving or firing, and the smoke counter is immediately placed in the unit’s hex.
CASES:
[18.1] LINE OF SIGHT EFFECTS
[18.11] Smoke is treated like a woods hex with respect to LOS and combat. Effects of Smoke are not cumulative. No unit may trace an LOS through a Smoke hex (one with a Smoke marker in it). Thus, for purposes of executing fire, Smoke blocks fire. Helicopters may rise above the Smoke (see 17.22).
[18.12] If a target unit is in a Smoke hex, it may be fired at. Subtract 2 from the die roll for all R-, G- and M Class attacks. Smoke in the target hex does not affect an H Class attack.
[18.2] PERSISTENCE OF SMOKE
[18.21] Smoke persists for one full Game Turn. If a Player wishes to maintain a Smoke screen he must continue to fire Smoke. Smoke is always fired and created in the Indirect Fire Phase of a Game Turn. This holds even if the firing unit is capable of firing Direct Fire at the hex.
[18.22] No matter what the size of the Smoke firing unit, a Smoke Attack creates only one Smoke hex.
[19.0] OTHER OPTIONAL AND EXPERIMENTAL RULES
[19.1] COMMAND
NOTE: These rules replicate the original Mech War 2 Command rules within the context of these simpler rules. If there are any conflicts or players wish to more strictly follow the Mech War 2 rules, those may be used instead. If not using these rules, any references to required commands instead simply follow the Sequence of Play [4.0].
Before each Game Turn both Players simultaneously issue orders to all their units on a company-by-company basis.
PROCEDURE:
Each company is issued commands, either by noting the commands on a player’s unit log sheet or by placing inverted command markers under each company’s designated Hq unit ([4.1], E.).
Every AIW Mech War 4 unit can be marked with a unique Identity Number, and every hex on the map also has a unique Identity Number. To log a command, simply note the Identity Number of the unit and the Command describing the action you intend for it.
Mounting, Dismounting and Overruns are specialized forms of Movement and are considered as Movement.
The Player need not indicate beforehand which units shall engage in movement for the Bounding Overwatch command.
[19.14] COMMAND SUMMARY
| Code | Command | Description |
|------|---------|-------------|
| BC | Bound | When activated, all units of the company must move from their original hex or change their status, e.g. enter column, enter defilade. |
| OC | Overwatch | All units of the company must remain stationary. They may fire following the normal rules for direct fire. |
| BOC | Bounding Overwatch | One or more units of the company must fulfill a Bound command while the remainder fulfill an Overwatch command. Companies consisting of a single unit may not be issued a Bounding Overwatch command. |
| WC | Withdraw | The units must conform to a Bound command, except that they may instead remain in their own hex. If given a Withdraw command, they must be at Morale Level 0 before they can be given any other type of command. They may defend if close assaulted, but then must revert to a Withdraw command. |
| RC | Rally | Companies that are Panicked ([13.0]) may remove their Panic status through the Rally command. Units issued the Rally command may not expend any movement points, fire any of their weapons, or change their status. Companies given a Rally command may roll on the Removal column of the Panic Table. If the die roll is within the range, their Panic status is removed. A company headquarters in or adjacent to their hex decreases their die roll by 1. |
| RGC | Regrouping | Units that have taken losses may be recombined into a single unit of the same type if they begin their turn in the same hex. Only units of the same company may regroup. Units of different status may choose either status for the recombined unit. |
[19.2] COUNTERBATTERY FIRE
Off-Board Artillery and CAS can be assigned a Counterbattery mission against enemy Off-Board Artillery, as well as on-map artillery, effective the following turn. This requires record keeping as to the damage/suppression state of the off-board units.
CASES:
[19.21] All Off-Board Artillery concentrations are considered Soft Targets, with the exceptions of NATO M109 (SP 155mm providing 7H concentrations) and BAOR FV433 Abbot units, which are considered Protected Targets. Note that where a US/NATO player has ≤ 3 x 7H OBA fires in a scenario, they are all considered to be M109/Abbot batteries. For each that is available beyond 3 x 7H OBA fires (including by US fires multiplication), the fourth, eighth, etc., OBA concentration is coming from M107/M110/towed FH-70 systems and is a Soft Target.
[19.22] A Spotted off-map unit can be given a move order in the plotting phase, so that it can displace and become un-Spotted instead of firing. It becomes available again after 3 turns.
[19.23] Where the US player multiplies his fires per 14.33, and some batteries are hit by Counterbattery fire, then when the US player reverts to his usual allocation of OBA (and before any further multiplication roll) the worst affected batteries are the ones available going forward.
[19.3] SHORT HALT FIRE
Short Halt fire is a type of Opportunity Fire conducted during the Movement phase by a moving tank with effective gun stabilisation.
[19.31] To conduct Short Halt fire, a moving unit which has not moved more than half of its movement allowance can:
(a) halt and perform a fire action
(b) wait to perform an opportunity fire action
In either case, there is a -2 DRM on Short Halt fire, and the unit should be marked as Short Halt.
[19.4] FASCAM ROUNDS
NATO Off-Board Artillery may, in some scenarios, be provided with a limited amount of FASCAM (Family of Scatterable Mines) rounds.
CASES:
[19.41] One battery FASCAM fire onto a hex creates a Hasty minefield on the hex.
[19.42] FASCAM is always Tight pattern, and may scatter.
[19.43] If more than one FASCAM fire impacts the same hex, add +2 to the attack differential for each such fire; so if three FASCAM fires hit a road junction there is a minefield which attacks (both sides) at +2.
[19.5] WRECKS
[19.51] If a full-strength, non-infantry unit receives a D3 result on the CRT, replace it with a Wreck marker. Optional: For a D1 or D2 result, consult the conditional Wreck Placement Table. Note that the Wreck Placement line corresponds to the immediate CRT result, not the accumulated number of Disruptions on the target unit.
Note: A destroyed armor unit can present a wide variety of effects, from a simple, single armor penetration to a massive explosion hurling debris many yards from the vehicle and producing a consuming fire and smoke area of effect for an extended period of time. This is the reason for the conditional wreck placement.
[19.52] Wreck markers in a road hex negate the road benefit for units moving into or out of that hex. Bridge hexsides are unaffected by Wreck markers.
[19.53] Wreck markers generate a defensive benefit of a –1 DRM in direct fire combat. Wrecks are treated like woods hexes for LOS purposes.
[19.54] Wreck markers count as one vehicle stacking point. Wrecks do not affect infantry stacking.
[19.6] SPECIAL UNITS
[19.61] ENGINEERS
1. Engineers are needed to maintain ferry crossings, and are used to construct and destroy bridges, construct abatis, breach minefields and aid in close assaults in town hexes.
2. MechWar 4 engineers are one step infantry units – i.e. a D1 damage will eliminate the unit. However, they fire as a Full Strength Attacker. When stacked with an infantry unit, they are considered part of the infantry unit and may not be attacked separately. They may be eliminated only after all friendly infantry units in the hex have been eliminated. Engineers do not count for stacking purposes.
3. One engineer may be transported for free by a friendly APC. However, two or three engineer squads are considered equivalent to an infantry unit. No more than three engineer squads may be transported by an APC.
4. Engineers may not perform any engineering actions while suppressed, in defilade, or mounted in APCs.
5. An engineer stacked with an infantry in a close assault of a town hex receives a +2 to the attack die roll. The engineer’s combat factor is added to that of one of the attacking infantry units. Additional engineer units have no additional effect. Engineers may attack alone, but receive no die roll modifications. The +2 die roll modification also applies to an engineer unit defending in a town hex with another infantry unit.
[19.612] BRIDGES
Bridges take 60 turns to complete, beyond the time frame of these scenarios. Scenario descriptions will specify the locations of any in-place bridges.
[19.613] BRIDGE DEMOLITION
1. To remove a bridge counter from either a Suez Canal hex or a clear terrain hex takes an engineer unit two turns.
2. On the first turn, the engineer unit must start on the bridge and move adjacent to it. As the engineer unit leaves, the bridge is marked with a Wired marker.
3. A wired bridge functions like a bridge in all respects.
4. A wired bridge remains wired as long as any engineer unit remains adjacent to it. If all engineer units move away or are eliminated, the bridge wired marker is removed.
5. The engineer unit removes the bridge counter by “attacking” it during any fire phase. The bridge may be attacked any turn that the bridge is wired and the engineer unit is adjacent and face up; once attacked thus by an engineer unit, the wired bridge is automatically removed from the board.
[19.614] FERRIES
1. The Egyptian "GSP" and the Israeli "FERRY" units are special carrier units that can transport any unit onto or across a "cut" Suez Canal hex. Ferry units are amphibious units and may enter and leave Suez Canal hexes accordingly.
2. Ferry units have special procedures for loading and unloading and carrying passengers.
a. Any combat unit may be a passenger on a ferry.
b. A ferry can carry a passenger only while it is in a Suez Canal hex.
c. A ferry can carry only one passenger at a time. Only one passenger unit can be loaded or unloaded by a ferry in a turn.
d. Loading procedure: The ferry unit starts the turn in the Suez Canal hex, and the passenger unit must start the turn in one of the two debouchement hexes for that Suez Canal hex. The ferry unit expends its whole movement allowance without moving, and the passenger unit is automatically moved one hex onto the ferry—it is loaded.
e. Similarly, when a passenger unit is unloaded the ferry expends its full movement allowance and the passenger is placed on either of the debouchement hexes.
f. A passenger unit must be face up to load and is inverted when it unloads—it can neither expend movement points nor attack the turn it loads or unloads.
g. Note that a carrier carrying a passenger counts as one passenger when aboard a ferry.
h. A unit on a ferry cannot "bail out" (rule II.B above).
3. Otherwise, a passenger aboard a ferry is treated like a normal passenger.
[19.615] ABATIS: Engineers in light or heavy woods hexes may construct an abatis by spending 12 consecutive Game Turns in a hex with a Bound command. If attacked by any form of direct fire, the entire sequence must be restarted. Vehicles entering an abatis hex in light woods must stop in that Movement Phase. Vehicles may not enter abatis hexes in heavy woods. Engineers may remove an abatis by spending one Game Turn in the hex with a Bound command. To be successful, they may not be the target of any direct fire during that Phase.
[19.616] BLOCKS
1. Block counters can be placed during initial placement or during play.
2. Blocks are placed during initial placement as directed for the Situation being played. Blocks placed during initial placement may be placed in any hex except a Suez Canal hex.
3. Each block placed during the play of the game must be placed by an engineer unit. When placed during play block counters may be placed only in certain hexes.
a. During play a block counter may be placed in any hex containing at least one ridge hex side.
b. During play a block counter may be placed in any town or woods hex.
c. During play a block counter may be placed in any non-Suez-Canal hex that is adjacent to a town hex or a woods hex.
d. During play a block counter may not be placed on a hex that does not meet one of the above conditions.
4. Only one block can be in a hex at a time.
a. A block counter cannot be placed in a hex that contains a fortification or an improved position.
b. A block counter can be placed in a hex with a minefield, trench or bridge.
c. A block counter cannot be placed in a hex if there is already a block counter in that hex. If the first block counter is removed during play, however, then another block counter may subsequently be placed in that hex.
d. Once placed, a block counter cannot be moved.
e. A block counter that has been placed on the board can be removed during play by an engineer counter.
5. A unit can move only one hex on the turn in which it enters a hex containing a block counter. To enter a hex containing a block counter a unit must start its turn adjacent to the block and move only one hex onto the block. That unit then stops its move.
6. A unit that starts its turn on a blocked hex may leave freely.
7. Units in the same hex with a block counter cannot make overrun attacks and they cannot be attacked by overrun fire.
8. A block counter negates the road in that hex.
[19.62] MOTORCYCLES
[19.621] Motorcycles move like vehicle units, but enter defilade without expending movement points. They are treated as dismounted infantry when fired upon. Motorcycles can always enter defilade immediately after the first fire on them has been resolved. Suppressed motorcycles are considered to be in defilade.
[19.622] Motorcycles may not enter heavy woods hexes except on roads or trails.
[19.623] Motorcycles are treated as vehicles for observation purposes. Motorcycles have neither stacking limit nor effect on stacking limits of other vehicles.
[19.624] Friendly vehicular units may move through a hex containing an enemy motorcycle unit without stopping for close assault. If however, the friendly vehicle ends the Movement Phase in the same hex with the enemy motorcycle unit, it must engage in close assault.
[19.7] NIGHT
COMMENTARY:
During the 1973 Middle East War several large scale Night actions occurred. Night combat restricts LOS, Range, and Command Control.
GENERAL RULE:
During Scenarios specified as Night actions the following restrictions are in effect. Maximum LOS is 10 hexes. Maximum range of all weapons is 5 hexes. All Arab units subtract 3 from their Panic die roll results. All Israeli units subtract 1 from their Panic die roll results.
[19.71] The maximum LOS that a unit may trace can be no longer than 10 hexes in length.
[19.72] Any sighting ranges on the Observation Range Table that are listed as greater than 10 hexes are now 10 hexes.
[19.73] Indirect fire can only be plotted for a hex that is 10 hexes or less from a friendly unit that has LOS to that hex.
[19.74] The maximum range of all weapons is 5 hexes.
[19.75] Subtract 3 from all Arab Panic die roll results. Example: If a 10 was rolled then the die roll would be a 7.
[19.76] Subtract 1 from all Israeli Panic die roll results.
[19.77] Except for the above cases there are no other effects of Night unless implementing the MechWar 2 Night rules ([23.0]).
[19.78] NIGHT OBSERVATION
The MechWar 2 Night rules ([23.0]) can be used for scenarios occurring at night. Vehicle night equipment is listed in the MechWar 2 Vehicle Unit Data sheets.
[19.8] MORALE
These simplified morale rules approximate the intent of the original MechWar 2 Morale rules, though their inclusion will add some more complexity to the game. Players are encouraged to first become familiar with the basic Panic rules ([13.0]) before using these rules.
The original MechWar 2 Morale rules were designed for the MechWar 2 combat system. However, the basic structure of the MechWar 5 combat system (the CRT) already penalizes performance against units that have incurred losses, thus making these morale rules less necessary.
As companies take losses, their initially assigned Panic Level may begin to increase, decreasing both their reliability and potentially affecting the entire battalion. If enough losses are incurred by the battalion companies, the losses may cause the battalion to begin to take morale checks which would leave the component companies unable to rally until the battalion is able to successfully rally itself.
[19.81] Company Morale
[19.811] Companies are initially assigned a Panic Level (Troop Quality, [13.0]) in each scenario. The morale levels of the battalion and brigade/regiment headquarters are affected by the change in Panic Level of the constituent companies. This should be tracked by a notation on the players’ status sheet or by a counter under the company Hq unit. (See [19.1], PROCEDURE.)
[19.812] As companies take losses, their initial Panic Level may increase. When a company initially loses a unit, its Panic Level will increase by one. In addition, a single 1d6 is rolled, and if a “1” is rolled, the company’s Panic Level increases an additional level. Upon a second loss, the Panic Level increases by one. Also, on a die roll of 1-4, the Panic Level increases by an additional level. Upon the loss of a third unit, the Panic Level is increased by two if the die roll is 1-5.
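The loss-driven escalation in [19.812] can be sketched as a single function (Python; the function name is illustrative, and the third-loss case is read literally as granting +2 only on a roll of 1–5, since the rule does not state whether a base +1 also applies there):

```python
def panic_increase(loss_number: int, roll: int) -> int:
    """Panic Level increase when a company loses its nth unit ([19.812]).

    `roll` is the 1d6 result made with the loss. The third-loss case is
    read literally: +2 on a roll of 1-5, with no separate base increase
    assumed (the rule is ambiguous on that point).
    """
    if loss_number == 1:
        return 1 + (1 if roll == 1 else 0)   # +1, plus +1 more on a "1"
    if loss_number == 2:
        return 1 + (1 if roll <= 4 else 0)   # +1, plus +1 more on 1-4
    if loss_number == 3:
        return 2 if roll <= 5 else 0         # +2 on 1-5
    return 0  # no further escalation is specified beyond the third loss
```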
[19.813] As companies are able to rally, the Panic Level change counter or notation should be adjusted so that any return to a +3 Panic Level can be tracked.
[19.814] If the battalion HQ is eliminated, it is immediately replaced and placed on any unit of the battalion. The initial procedure to determine the HQ value should be repeated for the new battalion HQ, but the column used should be 4 columns to the right. If this shifts off the table, then the HQ is still replaced, but the companies of that battalion can no longer rally.
[19.82] Battalion Morale
[19.821] The Available Forces Chart for each scenario lists the headquarters rating for each of the formations in the scenario. Before the start of the game, each Player rolls on the Battalion HQ Table ([18.9]) in the rating column given in the scenario; the HQ value is read from the left-hand column of the row containing the die result. When the HQ checks its morale, it passes the check if the roll is less than or equal to the determined HQ value.
[19.822] Once two or more companies of a battalion have increased their Panic Level by three or more levels, the battalion must check morale. If the battalion fails its morale check, the battalion is broken and the battalion’s companies may not attempt to rally. If this conflicts with existing commands ([19.1]), the unit must be given a valid command immediately.
[19.823] Independent commands do not affect battalion morale.
[19.824] During the Indirect Fire Phase, the broken battalion HQs may attempt to rally by rolling against its battalion HQ value. The HQ must have an Overwatch or Rally command if using the Command rules. The battalion’s companies may not attempt to rally as long as the battalion is broken.
[19.825] If the battalion HQ is suppressed, add 2 to the rally die roll.
[19.826] If brigade/regiment HQs are present in the scenario, then battalion HQs can only rally if they are stacked with or adjacent to their unbroken, unsuppressed brigade/regiment headquarters. Otherwise they rally normally as per [19.824].
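Given an HQ value determined per [19.821], the rally attempt in [19.824]–[19.825] reduces to one modified die-roll comparison (a minimal Python sketch; the function name is illustrative):

```python
def battalion_rally(hq_value: int, roll: int, hq_suppressed: bool = False) -> bool:
    """Broken-battalion rally attempt during the Indirect Fire Phase.

    [19.825]: a suppressed battalion HQ adds 2 to the rally die roll.
    The attempt succeeds when the modified roll is less than or equal
    to the battalion HQ value ([19.821], [19.824]).
    """
    return roll + (2 if hq_suppressed else 0) <= hq_value
```

Note that [19.826] adds a positional precondition when brigade/regiment HQs are in play: the check above is only made if the battalion HQ is stacked with or adjacent to its unbroken, unsuppressed higher headquarters.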
[19.83] Brigade/Regiment Morale
[19.831] If a brigade/regiment HQ is eliminated, it is immediately replaced in the same manner as a battalion HQ. However it is considered to be broken and must rally in the same manner as battalion HQs.
[19.832] If a brigade/regiment HQ is suppressed, add 2 to the rally die roll when attempting to rally a brigade.
[19.9] ELECTRONIC WARFARE
[19.91] Radio Direction Finding: If the scenario lists the ability of a player to conduct radio direction finding, in the Indirect Fire Phase the player may attempt to locate one on-map enemy HQ, jammer, artillery unit, or air defense system (those using search radar only) for each RDF-equipped HQ present.
PROCEDURE:
[19.92] The player must announce which enemy unit he is attempting to locate. The player then rolls one die for that unit. If the number rolled is equal to or less than the radio detection value of the unit he is attempting to detect, then the unit has been located. The owning player of the detected unit must immediately announce a hex within one hex of the unit.
[19.93] The radio detection values for various units are listed in the Radio Detection Values chart.
[19.94] On-map artillery units that have fired in their previous Indirect Phase have their radio detection value increased by 1.
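The detection roll of [19.92] with the modifier of [19.94] can be sketched as follows (Python; the function name is illustrative):

```python
def rdf_locates(roll: int, detection_value: int,
                artillery_fired_last_phase: bool = False) -> bool:
    """One RDF attempt against a single enemy unit ([19.92]).

    The unit is located if the die roll is less than or equal to its
    radio detection value (from the Radio Detection Values chart,
    [19.93]). [19.94]: on-map artillery that fired in its previous
    Indirect Fire Phase has its detection value increased by 1.
    """
    if artillery_fired_last_phase:
        detection_value += 1
    return roll <= detection_value
```

On a success, the owning player immediately announces a hex within one hex of the located unit, per [19.92].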
[19.95] The enemy-designated hex must be used for any artillery targeting. Any indirect fire against the hex is treated as unobserved fire. The designated hex cannot be used for any spotting attempts, direct fire, or observed indirect fire.
[19.96] Friendly RDF may not be used on Game Turns when friendly jamming is in effect. Exception – Enemy jamming and air defense units may be located while friendly jamming is in effect.
[19.97] Enemy jammers must be working and active in order to be located.
[19.98] Artillery units may withhold fire (radio silence) to avoid detection. Once having fired, they are subject to detection.
[19.99] Players may observe radio silence to avoid detection of artillery and HQ units. The effect of radio silence is the same as successful enemy jamming ([19.916]).
[19.910] RDF-equipped HQ units are one step units. Elimination of the unit eliminates the RDF capability. If successfully targeted by indirect fire which the unit survives, another roll is made on the Anti-Personnel CRT. Any positive numerical result eliminates the RDF capability.
[19.911] HQs attempting radio detection must not perform any other actions in that game turn, e.g. movement, fire.
[19.912] JAMMING UNITS
The presence of jamming units is determined by scenario.
[19.913] Jammers are one-vehicle units with Independent command, and thus not subject to Morale. If attempting to jam, they may not perform any other action, i.e. move/fire. Jamming units are attacked like RDF units ([19.910]).
[19.914] To begin jamming they must get the jammer to work in the Direct Fire/Move Phase. One die is rolled to see if the jammer works. (See the Jammer Table (TBD)). If the jammer fails to work, it may not attempt to recommence for two full Game Turns.
[19.915] While jamming, during the Indirect Fire phase, the player rolls on the Jammer Table (TBD). If the jammer is broken, it may not attempt to commence jamming for two full Game Turns.
[19.916] EFFECTS OF JAMMING
The effects of jamming cover all of the Mech War 2 maps and extend off-map for 100 hexes in all directions. The effects of jammers, either friendly or enemy do not interfere with each other. Friendly jamming does not affect friendly units.
[19.917] Jammed companies may only perform actions equivalent to either the Overwatch or Withdraw command, whether or not players are using the Command rules. Jammed companies may not Rally. Jammed companies subtract one on the Morale Panic Level checks.
[19.918] Jammed indirect fire units may not perform unobserved indirect fire. They may perform indirect fire at units within the firing unit’s LOS.
[19.919] Units may circumvent the effects of jamming through field telephone cables, short range radio or messengers. See the Mech War 2 rules and special scenario rules.
[19.10] CHEMICAL WARFARE
See Mech War 2 Chemical Warfare [106.0].
[19.11] TACTICAL NUCLEAR WARFARE
See Mech War 2 Tactical Nuclear Warfare [107.0].
[20.0] HOW TO SET UP AND PLAY THE GAME
[20.1] SCENARIOS
As stated in the Introduction, the game is played by Scenarios. Each Scenario is from six to twenty Game Turns in length and the number of units per side varies with the Scenario. A Scenario is a game in itself and the term Scenario and game are used interchangeably. Each Scenario listing contains a historical note which relates the Scenario to the actual event which is being simulated, an Order of Battle for each Player, and other instructions relating to the length of the Scenario, initial deployment of forces and later reinforcements, Victory Conditions and special rules pertaining to that Scenario.
Players should be aware that Section [108.0] Scenario Design contains extensive information in creating player-designed scenarios. Once players are comfortable with their chosen set of rules, they may wish to explore creating their own scenarios.
[20.2] SETTING UP
The Players must first decide between themselves who is going to play which side. Then they must decide what Scenario to play. Once they have decided which Scenario to play, that Scenario becomes the game. (Note that Scenarios do not link together.) Next they must spread out the map and seat themselves around the map, and select their respective forces from the counter mix according to the Scenario Instructions, deploying these forces on (or about to enter) the map in accordance with the Scenario Instructions. After this, they may begin the first Game Turn.
[20.3] AVAILABLE FORCES
In the Available Forces Chart on page 26 of the MechWar 2 rules, each Player is given a list of the formations for each side of the scenarios in the game. For each of the formations listed, the player consults the Tables of Organization starting on page 28 of the MechWar 2 rules to obtain the units available for each of the listed formations in the scenario. Once these units are selected, the players consult the actual scenario descriptions in the Scenarios section on page 16 of the rules.
For each of the units chosen from the Tables of Organization, the player can consult the Data Sheets starting on page 33 of the MechWar 2 rules and transfer the unit data to the players’ Unit Status Sheet to play the game with the original units. For a simpler game, the players can choose the equivalent MechWar 4 units and either use those units directly on the map or transfer the information on players’ Unit Status Sheets.
[20.4] DEPLOYMENT
A Player’s initial forces (those units he begins the first Game Turn with) are placed according to the Instructions in the Scenario being played. Usually these forces are either placed physically on the map surface itself (Initial Deployment on Map) or are adjacent to the map surface for entry onto the map on the first or succeeding Game Turns.
[20.41] Initial Deployment on the Map
When a Player is instructed to deploy certain units (collectively described as a force) on the map, he is normally told to deploy them within a certain area (deployment area) which is bounded by one or more map edges and lines drawn (hypothetically) between hexes on the hex grid or along hex rows. (Note the compass rose on the map, indicating North, East, etc.) Thus, if a force is required to deploy South of the line hex 0119 through hex 3134, inclusive, it means that the Owning Player would deploy the units anywhere in the Southwestern corner of the map, including and below (south of) the line of hexes 0119, 0219, 0320, 0420, 0521 … 2832, 2933, 3033, 3134. Occasionally, a deployment area will correspond to a complete terrain feature. The Owning Player has freedom to place his units as he sees fit, within the deployment area, subject to the normal Terrain and Stacking Restrictions.
[20.42] Initial Deployment Off the Map
When the Deployment Instructions state that a force is to enter the map on Game Turn One, they mean that the force is positioned adjacent to the map so that it might enter the map on the Movement Phase of the first Game Turn. The Deployment Instructions will indicate whether or not the force is to enter the map in a column formation (one unit behind the other, each entering successively into the same hex, see 20.61) or whether the force can enter in a free formation (each unit entering onto one hex of a row of hexes or of an entire map edge, see 20.66). In either case, the units composing the entering force must be prepositioned in the order of formation in which they will enter the map.
[20.43] Secret Deployment
When using restricted Player knowledge and inverted counters (see 11.5), the Players always place their units face-down or under a blank counter. The First Player always deploys first (unless stated otherwise) and his Player-Turn is always the first in each Game Turn. The First Player is defined in each Scenario.
[20.5] VICTORY CONDITIONS
These are used to determine the winner at the end of the game. They usually state either a geographical objective, or explain how to gain Victory Points. When Victory Points are itemized in the Victory Conditions, the Player with the greater number of Victory Points at the end of the game wins. When the Victory Conditions refer to combat units they mean any unit with an Attack Strength (not trucks or APCs). When the Victory Conditions mention a town, they refer to all of the town hexes composing the town. When a unit is exited off the map in fulfillment of Victory Conditions, it must pay the Movement Point cost for the hypothetical hex it is presumed to be entering upon leaving the map. The terrain in the hypothetical hex is arbitrarily identical to that in the exit hex. Victory is evaluated at the conclusion of the final Game Turn in the Scenario.
[20.51] Victory Points for Units Destroyed
When the Victory Conditions state that a Player receives points for every destroyed Enemy unit, the number of points the Player receives is determined according to the Victory Point Schedule table.
Example: If the U.S. Player destroys a Soviet T62 Tank Company, he receives 4 Victory Points.
[20.6] REINFORCEMENTS
Scenario Instructions may state that units enter the map in column or in free formation.
[20.61] Entry in column is accomplished as follows: The units are deployed off map, one behind the other, with the lead unit poised adjacent to the map entry hex listed. If the entry hex is a road hex, a hypothetical road may be presumed to stretch off the map, away from the entry hex. Reinforcements do not have to roll for Panic Movement upon initial entry onto the map.
[20.62] As each unit enters the map, it will pay the cost for entering the entry hex plus the additional cost for any hypothetical clear terrain hexes that it would have to traverse to reach the entry hex. If units are entering on a road hex, they are considered to be moving through hypothetical road hexes until they reach the map.
Example: The lead unit in the column would pay 0.5 Movement Point to enter the map; the second would pay 1 Movement Point to enter the map, the third 1.5 Movement Points, etc.
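The example above generalizes: each unit in the column pays the entry-hex cost plus one hypothetical hex per unit ahead of it (Python sketch; the flat 0.5 MP per road hex is taken from the example, and the function name is illustrative):

```python
def column_entry_cost(position: int, per_hex_cost: float = 0.5) -> float:
    """Movement Points the nth unit of an entering column pays to
    reach the map ([20.62]); position 1 is the lead unit.

    The nth unit traverses (position - 1) hypothetical off-map hexes
    plus the entry hex itself, all assumed to cost the same here.
    """
    return position * per_hex_cost
```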
[20.63] Once the Players have composed their columns, they may not alter the positions of units in the columns to change the order in which units reach the map. Units specified as entering on a certain game turn must begin their entry on that turn, even though it may take more than one turn to enter all the units in the columns.
[20.64] Given the number of units in some Scenarios, often it will not be possible to enter all units onto the map during the first Game Turn that they are available. Units which cannot enter on the first Game Turn of availability simply enter on the second Game Turn in column order. Units which are off map are out of play for all game purposes except, of course, to be moved along in sequence in order to eventually reach the map.
[20.65] Once a unit enters the map, it may be moved freely with no restrictions as to formation.
[20.66] When not stated otherwise, units may be brought on to the map in any formation the Player wishes: in one column, multiple columns, one unit per entry hex, or any combination. The Player may use as many entry hexes as he wishes; however, if more than one unit enters the map through the same hex, then the units which do so are presumed to have entered in column and must follow the procedure for entry in column, above.
[20.67] Reinforcements may be brought on to the map in Mounted condition, at the Player’s option, when vehicles are provided.
[20.68] If a unit's assigned entry hex is occupied by an Enemy unit, it must enter at the nearest non-Enemy hex.
Note: In ambiguous situations wherein initial setup units can be set up so as to block entry hexes, players may want to declare 3-hex exclusion zones around appropriate entry hexes.
[21.0] SCENARIOS—THE ARAB-ISRAELI WARS
(See original The Arab-Israeli Wars rules, THE SITUATIONS)
[22.0] SCENARIOS—SUEZ TO GOLAN
(See original MechWar 2 Suez to Golan rules [206.0] SCENARIO FORMAT)
H Fire Procedures
(General)
Tight Pattern: Single hex - full strength;
Loose: 6 hexes, 1/2 strength for adjacent hexes.
Off-board Artillery:
Sections: Only Tight Pattern;
Batteries: Triangle Pattern - full strength;
Battalion: Loose - full strength all hexes.
Immune from Range Attenuation [6.0], [6.43]
**Tight Pattern:** All units in target hex at full strength. Roll separately. [6.81], [6.82]
Smoke must be Tight Pattern. [7.22]
**Loose Pattern:** Includes 6 adjacent hexes at 1/2 H concentration, round up. [6.81], [6.82]
For each H unit loss
Against **hard/protected**: Subtract 1 from H factor. [7.16] ...to a minimum of 1
Against **soft target**: damage causes a negative DRM instead (D1 = -1DRM, etc.) [7.16]
-2 to die roll if target in woods or town hex. [7.18]
**Indirect**
Use October War Artillery Rules Variant [7.11]
Subsequent same hex does not have to be pre-plotted. [7.12]
Subject to scatter. [7.17]
For Arab-Israeli War scenarios, Indirect Fire requires one combat unit as **spotter**. [7.13]
**Direct**
During Direct Fire Phase. [6.43]
Fires on D2 CRT. Range Attenuation Table not used. Tight pattern. Only Attack strength used. [6.43]
**Resolution**
**H vs. Soft Targets** [7.6]
Use H fire concentration as Attack Superiority on **Artillery Line** of Anti-Personnel CRT column. [7.6]
**H vs. Hard Targets** [7.3]
H concentration on "H Indirect" line of D2 CRT, Tight Pattern - all units in target hex. [7.33]
Target Defense Strength not deducted from Attack Strength, which may be reduced by damage [7.16]. [6.43]
Roll separately for each unit in the hex.
**Results:**
**Buttoned Up:**
Assumes S1 state.
Resolve as H concentration on H Indirect Line of D2 CRT. [7.31]
**Unbuttoned:**
If Tight Pattern and die roll ≤ H concentration, target is in S2 state. [7.31]
Otherwise, target is S1.
Use H concentration as attack superiority on "H Indirect" line of D2 CRT. [7.32]
Infantry vs. Hard Targets
Short Range Disruption possibility at 0 to 1 hexes [6.45]
B = Attack Strength 6H, resolved on the H-Indirect line of the D2 CRT, terrain ignored [6.45]; or 6H Tight Pattern at 0 hexes, 3H at 1 hex, on the H-Indirect line of the D2 CRT. Target cannot button up. Only S1 results; double suppression (S2) if die roll is less than H concentration.
Minus one Attack Superiority per firer Disruption Level.
Longer Range Suppression at up to 3 hexes [6.10]
A= Resolved on Anti-Personnel Table, modified for terrain. Only suppression for any positive result. [6.10]
Attack Strength minus Defense Strength as Attack Superiority on the Anti-Personnel CRT. Any result other than no effect causes a suppression (S1).
Minus one to die roll per firer Disruption Level.
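The strength and die-roll modifiers above reduce to simple arithmetic. An illustrative Python sketch of that arithmetic (function names and parameters are ours, not the rules' wording):

```python
import math

def effective_h_factor(base_h, damage_levels, hard_target, adjacent_in_loose_pattern):
    """Illustrative arithmetic for the H-fire modifiers above.

    In a Loose Pattern, adjacent hexes receive half the H concentration,
    rounded up [6.81]. Against hard/protected targets, each damage level
    subtracts 1 from the H factor, to a minimum of 1 [7.16].
    """
    h = base_h
    if adjacent_in_loose_pattern:
        h = math.ceil(h / 2)           # half strength, round up
    if hard_target:
        h = max(1, h - damage_levels)  # damage degrades H factor, min 1
    return h

def soft_target_drm(damage_levels, in_woods_or_town):
    """Against soft targets, damage becomes a negative DRM instead
    (D1 = -1 DRM, etc.) [7.16]; woods/town adds -2 to the die roll [7.18]."""
    return -damage_levels + (-2 if in_woods_or_town else 0)

# A 7H unit with 3 damage levels firing Loose Pattern at a hard target
# in an adjacent hex: ceil(7/2) = 4, minus 3 damage, floor of 1.
print(effective_h_factor(7, damage_levels=3, hard_target=True,
                         adjacent_in_loose_pattern=True))  # 1
print(soft_target_drm(1, in_woods_or_town=True))           # -3
```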
Stem age, nitrogen fertilizer and salicylic acid application in cutting induction of noble dendrobium orchid of the Yamamoto series cultivars
JEFFERSON JOÃO SOCCOL\textsuperscript{(2)}, GIORGINI AUGUSTO VENTURIERI\textsuperscript{(3)*}, ENIO LUIZ PEDROTTI\textsuperscript{(3)}
ABSTRACT
Propagation of noble dendrobium orchid (\textit{Dendrobium nobile} Lindl.) by cutting was studied in two experiments. In the first experiment we evaluated the effect of stem age on propagation success (mature stems, which had already bloomed, versus young stems, yet to bloom) and of Nitrogen fertilizer application from two sources, Nitrate and Ammonium (respectively as Calcium Nitrate at concentrations of 5.81 gL\(^{-1}\), 11.61 gL\(^{-1}\) and 17.42 gL\(^{-1}\), and Urea at concentrations of 2.00 gL\(^{-1}\), 4.00 gL\(^{-1}\) and 6.00 gL\(^{-1}\), plus control treatments). We evaluated the following parameters: the number of stem cuttings that launched shoots and/or roots, vigor, number of roots per plant and root length per plant. Factorial analysis of variance (stem age x source of Nitrogen; and stem age x Nitrogen level) was applied using a Generalized Linear Model (GLM) approach. Where significant differences were observed, averages were compared using post-hoc Tukey tests. Propagation success was higher using cuttings from mature stems (60.2%), a value 1.6 times higher than that obtained with cuttings from young stems (38.0%). Application of Nitrogen, in either form, did not influence any of the evaluated parameters. In the second experiment we treated cuttings from mature stems with Salicylic acid at three concentrations (0.10 mM, 0.50 mM and 1.00 mM, plus a control treatment). Evaluated parameters included the proportion of stem cuttings that launched shoots and/or roots, leaf length, root length, and number of roots per stem cutting. Factorial analysis of variance was applied with post-hoc tests. Application of 0.50 mM of Salicylic acid increased the proportion of stem cuttings that launched shoots and/or roots by 20.5% relative to the control treatment.
Keywords: \textit{Dendrobium nobile}, propagation, small farmers, floriculture, elicitor molecule
RESUMO
Idade dos ramos, aplicação de fertilizantes nitrogenados e ácido salicílico, na estaquia da orquídea olho-de-boneca em cultivares da série Yamamoto
A propagação da orquídea olho-de-boneca (\textit{Dendrobium nobile} Lindl.) por estaquia foi estudada em dois experimentos: no primeiro foi avaliado o efeito de duas idades de hastes utilizadas para as estacas: maduras - que já haviam florescido; e jovens - que ainda não haviam florescido; e a aplicação de fertilizantes nitrogenados, oriundos de duas fontes: na forma de nitrato e na de amônio (respectivamente como nitrato de cálcio nas concentrações de 5,81 gL\(^{-1}\); 11,61 gL\(^{-1}\); 17,42 gL\(^{-1}\); e ureia nas concentrações de 2,00 gL\(^{-1}\); 4,00 gL\(^{-1}\) e 6,00 gL\(^{-1}\)), mais tratamentos controle. Os parâmetros avaliados foram: estacas que lançaram rebentos e/ou raízes, vigor, número de raízes e comprimento de raízes por planta. A análise de variância fatorial foi aplicada (idade dos ramos x fonte de nitrogênio; e idade dos ramos x nível de nitrogênio) usando o Modelo Linear Generalizado (MLG). Quando diferenças significativas foram observadas, as médias foram comparadas pelo teste de Tukey. O melhor resultado foi obtido usando estacas oriundas de ramos maduros (60,2%), valor 1,6 vezes maior que o obtido de estacas oriundas de ramos jovens (38,0%). A aplicação de nitrogênio, em ambas as formas, não influenciou qualquer dos parâmetros avaliados. No segundo experimento foram usadas estacas derivadas de ramos maduros tratadas com ácido salicílico em três concentrações (0,10 mM; 0,50 mM; 1,00 mM; mais o tratamento controle). Os parâmetros avaliados foram: proporção de estacas que lançaram ramos e/ou raízes, comprimento das folhas por estaca, comprimento das raízes e número de raízes por estaca. Análise de variância fatorial foi aplicada com posterior teste de Tukey. A aplicação de ácido salicílico a 0,50 mM aumentou a proporção de estacas que lançaram ramos e/ou raízes, sendo 20,5% superior ao tratamento controle.
Palavras-chave: \textit{Dendrobium nobile}, propagação, pequenos agricultores, floricultura, molécula elicitora
1. INTRODUCTION
Between 2008 and 2014, growth in flower cultivation in Brazil oscillated between 7% and 10% in quantity and 12% to 15% in gross sales (SEBRAE, 2015). Assuming an average of 13.5% growth in revenues, it can be estimated that gross sales in 2016 alone reached US$ 2.12 billion. The export of orchid scions totaled US$ 152,000 in 2010 and in the following year US$ 103,000, i.e. a decrease of 31.8%. However, over the same period, orchid plantlet imports increased 60.4%, with gross financial product generated rising from US$ 4.2 to US$ 6.7 million (SECEX, 2012). Considering that the “plantlet” is the basic input associated with final orchid production, the expansion of such cultivation is occurring predominantly through use of imported plantlets.
Flower cultivation is a profitable and attractive agricultural sector that involves significant manpower, with about 3.8 persons/ha directly involved in production (REETZ et al., 2007), and high income may be generated in relatively small planted areas. Therefore, flower cultivation is a self-sustaining business, able to generate a high number of jobs for local economies, and accessible to micro-farm holders who use family members as the principal workforce. Noble dendrobium orchid (*Dendrobium nobile* Lindl.) is a fast-growing orchid that grows well in the majority of substrates indicated for orchid propagation. It grows in both low-lying and high-altitude regions (up to 2000 m). The plant withstands temperatures as low as 1 °C (BAKER and BAKER, 1996) but requires temperatures above 22 °C for good vegetative growth; low temperatures induce flowering, which occurs from late winter to early summer (BAKER and BAKER, 1996). Indeed, noble dendrobium is one of the easiest orchids to grow (CAMPOS, 1998; SILVA, 1986). It is one of the three species used in the formulation of “Shi-Hu” (with *D. tosaense* and *D. moniliforme*) (LO et al., 2004; YE and ZHAO, 2002), an antipyretic and tonic drink that is also described as an aphrodisiac in traditional Chinese medicine (HANELT, 2001). It also has antitumor and antimicrobial properties, inhibits *in-vitro* lipid peroxidation (DEVI et al., 2009), and has anti-retroviral (HIV) properties (SÁNCHEZ-DUFFHUES et al., 2008). If its potential for the pharmaceutical industry is confirmed, there will be a rapid increase in demand for plantlets. Propagation of noble dendrobium orchid, by hobbyists and small producers, is usually done by clump division, cuttings, or use of the shoots that appear at the axillary buds of leaves, called “keikis”. For commercial purposes, propagation by mericlones is preferred.
Propagation by seeds has only been adopted for purposes of genetic improvement because it takes twice as long to blossom compared to mericlones, clump division or cuttings.
Cloning of noble dendrobium by cuttings has the advantage that it can be done without the laboratory support required to produce mericlones, while its potential to produce plantlets per propagule is higher than that of clump division or “keikis”. Nevertheless, many stem cuttings do not emit leaves or roots and eventually rot. The highest proportion of stem cuttings that launched shoots and/or roots, obtained with stem cuttings layered on gravel sprinkled with Urea, a nitrogen fertilizer, at 2 g L\(^{-1}\), was 40.7% (VENTURIERI and PICKSCIUS, 2013). Salicylic acid is considered a growth regulator with many positive effects on plants, such as flowering stimulation, disease resistance and inhibition of Ethylene synthesis (RASKIN, 1992; HAYAT et al., 2010). In the present study, we aimed to improve the propagation of noble dendrobium by cuttings by evaluating the effect of the age of the stems used for cuttings, and potential interactions with nitrogen fertilizer and Salicylic acid application, on cutting survival rates and vigor.
2. MATERIAL AND METHODS
Cuttings production was evaluated in *D. nobile* using a bulk of 5 genotypes, equally assigned, from Yamamoto series cultivars (Yamamoto Dendrobiums, n.d.), and separated by age of stem (see below) in two different experiments. The two experiments were established in a greenhouse covered with transparent plastic film, superimposed by a black plastic screen (Fitela™ - Engepol, Barueri/SP - Brazil) able to block 60% of incident light, in Florianópolis, Santa Catarina state, Brazil (27°34'55"S; 48°30'19"W). All plants were irrigated for ten minutes, 2-3 times a week, throughout the duration of the study.
**Experiment 1: Effect of stems age and nitrogen fertilizer application.**
For this experiment we used stem cuttings with 4 buds (approximately 20 cm in length), from the bulk of the aforementioned cultivars, from stems of two ages: a) mature - stems that had already bloomed; and b) young - stems that had not yet bloomed. Mature and young stems are easily identified, as dendrobium’s growth occurs in yearly pulses. All stem cuttings were immersed in a 1.60 gL\(^{-1}\) fungicide solution of Mancozeb (Manzate 800 - from DuPont™, USA), and placed standing on a gravel substrate (previously tested in Venturieri and Pickscius, 2013) in 44.2 x 28.0 x 7.5 cm plastic trays. For drainage, tray bases were perforated. Applied treatments were: aspersion of Calcium Nitrate (YaraLiva™ Calcinit™ - from Yara - Norway, containing 15.5% of Nitrogen (N), 14.4% N-Nitric and 1.1% N-Ammoniacal forms, and 19.0% of hydrosoluble Calcium (Ca)) at concentrations of 5.81 g L\(^{-1}\), 11.61 g L\(^{-1}\) and 17.42 g L\(^{-1}\); and Urea (from Buschle and Lepper S.A., Brazil, containing 45.0% Nitrogen) at concentrations of 2.00 g L\(^{-1}\), 4.00 g L\(^{-1}\) and 6.00 g L\(^{-1}\). The control treatment received only water at the same liquid volume used in the other treatments. Aspersions were made up to dripping point. Environmental conditions were: average temperature of 16 °C (min.= 18 °C; max.= 24 °C), average Relative Humidity of 83% (min.= 82%, max.= 84%). Treatments were applied fortnightly. Concentrations of Calcium Nitrate (mainly N in nitric form) and Urea (N in ammoniacal form) were adjusted to make pairs of treatments with the same theoretical Nitrogen level. Treatments were applied for 3 months. The adopted experimental design was fully randomized, with 4 stem cuttings per plot and 3 plots per treatment.
Evaluated parameters included the number of stem cuttings that launched shoots and/or roots, vigor (grades were subjectively assigned by consensus of 2 observers, as in Venturieri and Pickscius (2013), on a scale of 0 to 10, as follows: 0 for rotten stem cuttings and/or those without leaf bud or root emission, 5 for stem cuttings with leaf buds of 5 cm in height and at least 2 roots, and 10 for stem cuttings with emitted leaf buds of more than 10 cm and more than 6 roots; intermediate grades could be attributed), number of roots per plant and root length per plant. To analyze treatment differences and interactions between factors, we used a fully-factorial analysis of variance (stem age x source of Nitrogen, and stem age x Nitrogen level) using the Generalized Linear Model (GLM). Where significant differences were observed, averages were compared by the Tukey test for α = 0.05. Associations between the theoretical Nitrogen content per treatment and all evaluated parameters were estimated by Pearson’s correlation index. Due to the high variability of parameter values, we used average values for each treatment (DYTHAM, 2010).
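The correlation step described above (Pearson's index computed on per-treatment averages) can be sketched in a few lines of Python. The numbers below are made-up illustrative values, not the study's data:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length series,
    here per-treatment averages (theoretical Nitrogen level vs a
    response parameter). Pure-Python sketch."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Hypothetical treatment averages: theoretical N level vs mean root count.
nitrogen = [0.0, 0.9, 1.8, 2.7]   # arbitrary illustrative units
roots    = [6.1, 5.0, 3.9, 2.8]   # illustrative means, not measured data
print(round(pearson_r(nitrogen, roots), 2))  # -1.0 (perfectly linear decline)
```

Averaging within treatments before correlating, as the authors do, trades statistical power for robustness against the high within-treatment variability they report.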
**Experiment 2: Effect of Salicylic acid.**
To test the effects of Salicylic acid we used stem cuttings with 1 bud (approximately 5 cm in length), from mature stems, from the same genetic material as the previous experiment. Environmental conditions, substrate, and the period and system of application of the treatments were all equivalent to those described for the previous experiment. The adopted experimental design was fully randomized, with 5 replications (plots) per treatment and 6 stem cuttings per plot. The compared Salicylic acid concentrations were 0.10 mM, 0.50 mM and 1.00 mM, plus a control that received water only, at the same volume used for the other treatments. The evaluated parameters included the proportion of stem cuttings that launched shoots and/or roots, leaf length per stem cutting (measured from the root insertion up to the top of the plant), and the length and number of roots. Obtained values were subjected to analysis of variance (DYTHAM, 2010).
For both experiments the analyses were made with the aid of the Statistica 6.0 package (StatSoft, Tulsa, United States).
### 3. RESULTS AND DISCUSSION
**Experiment 1: Effect of stems age and nitrogen fertilizer application.**
We found significant differences in the proportion of stem cuttings that launched shoots and/or roots as a function of stem age ($p = 0.00$), but not of Nitrogen source ($p = 0.34$) or the two-way interaction between predictors ($p = 0.05$). For age of stem, the best results were achieved using cuttings from mature stems (60.2%), a value 1.6 times higher than that obtained with cuttings from young stems (38.0%) (Figure 1).

**Figure 1.** Proportion of successful cuttings, by the age of the stem from which they were taken, analyzed as a function of Nitrogen source. Averages were considered statistically different by the Tukey test ($\alpha = 0.05$). Vertical bars denote 95% confidence intervals.
The value obtained for mature stems is superior to the 40.7% obtained by Venturieri and Pickscius (2013) without differentiation of stem cutting ages. According to Raven et al. (1996), auxins produced at the leaf primordium and in young leaves inhibit the growth of lateral buds. Older stems incline toward the soil, eventually assuming a prostrate position on the plant, which could also favor a decrease in the apical dominance promoted by young leaf primordia. As mature stems lack leaves and are usually prostrate, it is presumed that stem cuttings formed from them have less inhibited lateral buds, explaining the higher proportion of successful cuttings observed for mature stems compared to younger ones.
Also for the proportion of stem cuttings that launched shoots and/or roots, we found a significant difference for the age of the stem ($p = 0.00$) (Figure 2) and no statistical difference for Nitrogen level ($p = 0.06$); no interaction between these factors was observed ($p = 0.71$).
Figure 2. Proportion of successful cuttings, by the age of the stem from which they were taken, as a function of Nitrogen level. Averages were considered statistically different by the Tukey test ($\alpha = 0.05$). Vertical bars denote 95% confidence intervals.
Our findings that nitrogen application neither directly, nor through an interaction with stem age, influenced the proportion of viable cut stems was in agreement with results presented in a similar experiment by Venturieri and Pickscius (2013), where application of Nitrogen in the form of Urea to noble dendrobium stem cuttings, not differentiated by ages, found no effect on the proportion of stem cuttings that launched shoots and/or roots.
Age of the stem significantly affected plant vigor ($p = 0.01$), but neither Nitrogen source ($p = 0.43$) nor the interaction between stem age and Nitrogen source ($p = 0.92$) did (Figure 3), reflecting that stem age is also an important factor for this parameter and that Nitrogen is not.
Figure 3. Vigor of stem cuttings that launched shoots and/or roots, by the age of the stem from which they were taken, as a function of Nitrogen source. Averages were considered statistically different by the Tukey test ($p = 0.01$). Vertical bars denote 95% confidence intervals.
For vigor as a function of stem age and Nitrogen level, there was a significant difference for stem age ($p = 0.01$) (Figure 4) but not for Nitrogen level ($p = 0.90$) or the interaction between them ($p = 0.97$), again reflecting that stem age influences this parameter and Nitrogen application does not.
For the parameter number of roots per plant, no significant differences were observed for age of the stem or Nitrogen source (respectively $p = 0.48$ and $p = 0.43$), or their interaction ($p = 0.78$). For the same parameter, no significant difference was observed for age of the stem or Nitrogen level (respectively $p = 0.53$ and $p = 0.45$) or their interaction ($p = 0.46$) either.
For the parameter root length per plant, no significant differences were observed for age of the stem or source of Nitrogen (respectively $p = 0.29$ and $p = 0.09$), or their interaction ($p = 0.98$). For the same parameter, no significant difference was observed for age of the stem or Nitrogen level ($p = 0.27$ and $p = 0.18$) or their interaction either ($p = 0.61$).
A significant negative correlation was observed between theoretical Nitrogen levels and number of roots ($r = -0.88$), and also with root length per plant ($r = -0.90$). Number of roots and root length per plant were positively correlated ($r = 0.8$). Thus, as the level of Nitrogen increased, roots were emitted in smaller number and shorter length. A similar effect was observed in *in vitro* cultivation of *Phalaenopsis* hybrids, where increases in NH$_4^+$ and NO$_3^-$ concentration decreased root growth (HINNEN et al., 1989); from the observed associations, it can be supposed that Nitrogen supply to cuttings from mature stalks may depress root emission.
Application of Nitrogen, whether in the form of Urea or of Nitrate, had no influence on any of the evaluated parameters in the present work; therefore, its use is not recommended for cutting formation in noble dendrobium of the Yamamoto series cultivars.
**Experiment 2: Effect of Salicylic acid.**
Among the evaluated parameters, application of Salicylic acid had a significant effect only on the proportion of stem cuttings that launched shoots and/or roots ($p \approx 0.00$). The highest observed value (43.3%) was in the treatment that received 0.50 mM of Salicylic acid. Despite its statistical equivalence to the treatments that received 0.1 mM or 1.0 mM, it was 1.86 times higher than that observed for the control treatment (23.3%) (Figure 5).

**Figure 5.** Proportion of stem cuttings that launched shoots and/or roots as a function of the application of Salicylic acid. Averages followed by the same letter do not differ statistically by the Tukey test (for $\alpha=0.05$).
Salicylic acid is involved in many physiological processes in plants, among them resistance against pathogens, net photosynthetic rate and stomatal closure (RASKIN, 1992; HAYAT et al., 2010), which could have contributed incrementally to the proportion of stem cuttings that launched shoots and/or roots. Salicylic acid decreased the number and formation of Protocorm-like bodies (PLB) in *Cymbidium* orchids at a concentration of 14 mM (ÇANAKCI and MUNZUROGLU, 2009), and, in the present experiment, the highest tested concentration (1.0 mM) showed a 7% reduction in the proportion of stem cuttings that launched shoots and/or roots relative to that obtained with 0.5 mM (Figure 5). We therefore suggest that application of Salicylic acid at high concentrations, here above 0.5 mM, could induce phytotoxicity.
**4 CONCLUSIONS**
- Mature stems achieved the highest proportion of stem cuttings that launched shoots and/or roots, vigor, number of roots per plant and root length per plant.
- Application of Nitrogen, in the ammoniacal or nitric forms, did not benefit any of the evaluated parameters.
- Nitrogen supply to cuttings, particularly from mature stalks, may depress root emission.
- Application of Salicylic acid at concentration of 0.50 mM increased the proportion of successful cuttings of *D. nobile* Yamamoto series cultivars.
- The use of mature stems and the application of Salicylic acid can be deployed at a commercial level, without the need for a tissue culture laboratory, to make cuttings of noble dendrobium of the Yamamoto series cultivars.
**ACKNOWLEDGMENTS**
To colleagues Antônio Correa Garcia and Jefferson Sandi for their help during the execution of the present experiments. To Mr. José Francisco Nunes Filho for the genetic material supplied. We also acknowledge Dr. César Assis Butignol for the critical revision of the manuscript, and Dr. Alistair Campbell for linguistic advice.
**AUTHORS CONTRIBUTION**
**J.J.S.:** Creation of the idea, literature review, experimental field works, data collection, redaction of the paper; **G.A.V.:** broad conception of the project, experimental planning, statistical analysis of data, redaction of the paper; **E.L.P.:** Logistic facilities and funds, critical review with important suggestions incorporated to the work.
**REFERENCES**
BAKER, M.; BAKER, C.O. *Orchid Species Culture. Dendrobium*. Oregon, USA: Timber Press. Portland, 1996. 852 p.
ÇANAKCI, S.; MUNZUROGLU, Ö. Effects of Salicylic acid on growth and chlorophyll destruction of some plant tissues. *World Journal of Agricultural Sciences*, v.5, p.577-581, 2009.
DEVI, P.U.; SELVI, S.; DEVIPRIYA, D.; MURUGAN, S.; SUJA, S. Antitumor and antimicrobial activities and inhibition of *in-vitro* lipid peroxidation by *Dendrobium nobile*. *African Journal of Biotechnology*, v.8, n.10, p.2289-2293, 2009. DOI: https://doi.org/10.4314/ajb.v8i10.60575.
DYTHAM, C. *Choosing and Using Statistics: A Biologist’s Guide*. (3rd edition). West Sussex, UK: Wiley-Blackwell, 2010, 320p.
HANELT, P. *Mansfeld’s Encyclopedia of Agricultural and Horticultural Crops*. Gatersleben: Springer, Institut für Pflanzengenetik und Kulturpflanzenforschung (IPK), 2001. 3641 p.
HAYAT, Q.; HAYAT, S.; IRFAN, M.; AHMAD, A. Effect of exogenous salicylic acid under changing environment: A review. *Environmental and Experimental Botany*, v.68, p.14-25, 2010. DOI: http://dx.doi.org/10.1016/j.envexpbot.2009.08.005
HINNEN, M.G.J.; PIERIK, R.L.M.; BRONSEMA, F.B.F. The influence of macronutrients and some other factors on growth of *Phalaenopsis* hybrid seedlings in vitro. *Scientia Horticulturae*, v.41, n.1-2, p.105-116, 1989. DOI: https://doi.org/10.1016/0304-4238(89)90054-X
LO, S-F.; NALAWADE, S.M.; KUO, C-L.; CHEN, C-L.; TSAY, H-S. Asymbiotic Germination of Immature Seeds, Plantlet Development and *Ex Vitro* Establishment of Plants of *Dendrobium tosaense* Makino - A Medicinally Important Orchid. *In Vitro Cellular & Developmental Biology - Plant*, v.40, n.5, p.528-535, 2004. DOI: 10.1079/IVP2004571
RASKIN, I. Role of salicylic acid in plants. *Annual Review of Plant Physiology and Plant Molecular Biology*, v.43, n.16, p.439-463, 1992.
RAVEN, P.H.; EVERT, R.F.; EICHHORN, S.E. *Biologia Vegetal*. Rio de Janeiro, Brasil: Guanabara Koogan, 1996. 728 p.
REETZ, E.R.; CLEITON, S.; RIGON, L.; CORREA, S.; LINDEMANN, C. E.; BELING, R.R. *Anuário Brasileiro das Flores*. Sta. Cruz do Sul, Brasil: Ed. Gazeta, 2007. 112p.
SÁNCHEZ-DUFFHUES, G.; CALZADO, M.A.; VINUESA, A.G. DE; CABALLERO, F.J.; ECH-CHAHAD, A.; APPENDINO, G.; KROHN, K.; FIEBICH, B.L.; MUÑOZ, E. Denbinobin, a naturally occurring 1,4-phenanthrenequinone, inhibits HIV-1 replication through an NF-κB-dependent pathway. *Biochemical Pharmacology*, v.76, n.10, p.1240-1250, 2008. DOI: https://doi.org/10.1016/j.bcp.2008.09.006
SEBRAE. *Flores e plantas ornamentais do Brasil* (Série Estudos Mercadológicos), volume 1. Brasília: SEBRAE, 2015. 42 p.
SECEX - Secretaria de Comércio Exterior. *Sistema de Análise das Informações de Comércio Exterior (AliceWeb)*, 2012. Available at: http://aliceweb2.mdic.gov.br/. Accessed on: 20/05/2012.
VENTURIERI, G.A.; PICKSCIUS, F.J. Propagation of nobile dendrobium (*Dendrobium nobile* Lind.) by cutting. *Acta Scientiarum Agronomy*, v.35, n.4, p.501–504, 2013. DOI: http://dx.doi.org/10.4025/actasciagron.v35i4.15198
SILVA, W. *O cultivo de orquídeas no Brasil*. 6.ed. São Paulo, 1986. 96p.
YAMAMOTO DENDROBIUMS. *General Carrying: Fertilizing*. n.d. Accessed on: 24/02/2017.
YE, Q.; ZHAO, W. New Alloaromadendrane, Cadinene and Cyclocopacamphane Type Sesquiterpene Derivatives and Bibenzyls from *Dendrobium nobile*. *Planta Medica*, v.68, p.723-729, 2002. DOI: https://doi.org/10.1055/s-2002-33786
Screening natural product extracts for potential enzyme inhibitors: protocols, and the standardisation of the usage of blanks in α-amylase, α-glucosidase and lipase assays
Chinthia Lankatillake¹, Shiqi Luo¹, Matthew Flavel²,³, George Binh Lennon¹, Harsharn Gill⁴, Tien Huynh⁴ and Daniel Anthony Dias¹*
Abstract
Background: Enzyme assays have widespread applications in drug discovery from plants and natural products. The appropriate use of blanks in enzyme assays is important for assay baseline-correction and for correcting false signals associated with background matrix interferences. However, the blank-correction procedures reported in the published literature are highly inconsistent. We investigated the influence of using different types of blanks on the final calculated activity/inhibition results for three enzymes of significance in diabetes and obesity: α-glucosidase, α-amylase, and lipase. This is the first study to examine how different blank-correcting methods affect enzyme assay results. Although assays targeting the above enzymes are common in the literature, there is a scarcity of detailed published protocols. Therefore, we have provided comprehensive, step-by-step protocols for α-glucosidase-, α-amylase- and lipase-inhibition assays that can be performed in 96-well format in a simple, fast, and resource-efficient manner, with clear instructions for blank-correction and calculation of results.
Results: In the three assays analysed here, using only a buffer blank underestimated the enzyme inhibitory potential of the test sample. In the absorbance-based α-glucosidase assay, enzyme inhibition was underestimated when a sample blank was omitted for the coloured plant extracts. Similarly, in the fluorescence-based α-amylase and lipase assays, enzyme inhibition was underestimated when a substrate blank was omitted. For all three assays, method six [Raw Data - (Substrate + Sample Blank)] enabled the correction of interferences due to the buffer, sample, and substrate without double-blanking, and eliminated the need to add substrate to each sample blank.
Conclusion: The choice of blanks and blank-correction methods contribute to the variability of assay results and the likelihood of underestimating the enzyme inhibitory potential of a test sample. This highlights the importance of standardising the use of blanks and the reporting of blank-correction procedures in published studies in order to ensure the accuracy and reproducibility of results, and avoid overlooked opportunities in drug discovery research due to inadvertent underestimation of enzyme inhibitory potential of test samples resulting from unsuitable blank-correction. Based on our assessments, we recommend method six [RD – (Su + SaB)] as a suitable method for blank-correction of raw data in enzyme assays.
*Correspondence: firstname.lastname@example.org
¹ School of Health and Biomedical Sciences, Discipline of Laboratory Medicine, RMIT University, Bundoora 3083, Australia
Full list of author information is available at the end of the article
**Background**
Enzymes are the molecular targets of almost half of all marketed small-molecule drugs [1, 2]. Their protein structure affords a high level of druggability and target validation, which makes enzymes an attractive target for novel drug discovery efforts [3]. Enzyme inhibitors form an important class of clinical drugs, ranging in use from cancer [4, 5], cardiovascular disease [6, 7], diabetes [8, 9] and neurological disorders [10–13] to obesity [14, 15]. Inhibitors of the endogenous carbohydrases $\alpha$-glucosidase and $\alpha$-amylase reduce postprandial hyperglycaemia by delaying the digestion of dietary carbohydrates and are valuable therapeutics in the management of diabetes [8, 16]. Similarly, calorie restriction imposed by inhibition of carbohydrases and pancreatic lipase is useful for the prevention of weight gain and the treatment of obesity [14, 17]. Therefore, enzyme inhibition assays targeting $\alpha$-glucosidase, $\alpha$-amylase and lipase are widespread in research, and screening plant extracts and natural products for inhibitory activity against these enzymes is a common approach for the discovery of potential antidiabetic and antiobesity drugs to treat and manage these metabolic diseases [18–20].
Natural products are secondary metabolites produced by living organisms such as plants and microorganisms [21]. An abundant and easily accessible source of natural products is the kingdom of plants, a large proportion of which remains to be explored for potential bioactive metabolites [21]. These molecules possess high chemical and structural diversity unrivalled by synthetic compound libraries [22–24], have evolved intrinsic bioactive properties due to their evolutionary biological roles in living organisms [23–26], and are excluded from Lipinski’s rules of five [23, 24]. Therefore, natural products are an attractive source of therapeutic molecules and a significant body of research is devoted to the discovery of drugs from natural product extracts such as plant extracts [23, 24].
In-lab approaches for screening enzyme inhibitors from natural product extracts are based on spectroscopy and often determine enzyme activity using specially designed, labelled substrates which, upon enzymatic cleavage, produce a spectrometrically measurable signal as either colour (absorbance) or fluorescence at a defined wavelength [27–31]. For example, the chromogenic substrate $p$-nitrophenyl-$\alpha$-$D$-glucopyranoside ($p$NPG) is widely used for the determination of $\alpha$-glucosidase activity (Fig. 1). $p$NPG is a colourless molecule which contains a $D$-glucose residue linked to a $p$-nitrophenol moiety via a glycosidic bond. Hydrolysis of this glycosidic bond by $\alpha$-glucosidase releases $p$-nitrophenol (the coloured product), enabling the spectrophotometric determination of $\alpha$-glucosidase activity at the absorbance maximum of $p$-nitrophenol ($\lambda = 405$ nm) [27, 32].
Alternatively, there are also fluorescence-based enzyme assays which make use of substrates linked to a fluorophore. Enzymatic cleavage releases the fluorophore, which re-emits light (fluoresces) upon excitation at a specific wavelength [33–35]. A routinely used substrate for fluorescence-based $\alpha$-amylase assays is BODIPY® FL-DQ™ (boron-dipyrromethene fluorescent dye quenching) starch, which consists of a starch derivative (DQ™ starch) conjugated to the green fluorescent BODIPY® FL fluorophore. The DQ™ starch is so heavily labelled with BODIPY® FL that the close proximity of the fluorophores to each other results in intramolecular self-quenching, and the substrate fluorescence is almost completely quenched (Fig. 2). As the substrate is hydrolysed by α-amylase, the intramolecular self-quenching is disrupted, causing an intense increase in green fluorescence [28, 36].
A commonly used fluorogenic substrate for lipase assays is 4-methylumbelliferyl oleate (4-MUO) [37]. Lipase-catalysed cleavage of 4-MUO liberates the fluorescent product, 4-methylumbelliferone (4-MU) (Fig. 3), in proportion to lipase activity [37–39].
These enzyme assays can be performed in 96-well microplates read on plate readers [40–42], or in traditional cuvette-based spectrophotometers or spectrofluorometers [43–45]. However, miniaturisation of assays by adapting them to a 96-well format is beneficial as it reduces the volumes of reagents required, is cost-efficient, and enables faster testing suitable for screening large libraries of plants, extracts or natural products [46].
**Fig. 2** Schematic illustration of the BODIPY®FL-DQ™ starch assay concept. Adapted from [36]
**Fig. 3** Hydrolysis of 4-MUO by lipase. Modified from [39]
Spectrometry-based assays are subject to spectral interference from sample colour, autofluorescence, and turbidity arising from poor solubility [47, 48]. Therefore, these factors must be taken into consideration when designing and carrying out enzyme-based assays with natural product extracts.
Plant extracts are often highly coloured as they contain natural, highly conjugated pigments. The colouration of plant extracts and natural products can cause interference with absorbance measurements in spectrophotometric assays [47]. Chlorophylls and carotenoids are the two major classes of photosynthetic plant pigments. Chlorophylls absorb light strongly in the 430–480 nm and 640–660 nm ranges [49], although they are also capable of absorbing light at other wavelengths. Carotenoids encompass a wide range of compounds such as β-carotene, phytoene and lycopene, and have a wide visible light absorbance spectrum ranging from 400–530 nm [50]. Betalains, which include betacyanins and betaxanthins [51], and flavonoids such as anthocyanins and flavanols [52, 53], are the other classes of pigments contributing to colour in plants. All these phytochemical classes can interfere with assays which use wavelengths that overlap their absorption spectra.
In addition to the pigments intrinsic to plants, post-harvest changes can lead to the development of additional coloured products. For example, the action of polyphenol oxidases generates melanins and other brown pigments in harvested plant tissue exposed to oxygen. This process, referred to as enzymatic browning, is particularly common in fruits [54–56]. Samples with high sugar content, such as sugarcane (*Saccharum* spp.) extracts, are susceptible to intense browning caused by the Maillard reaction and caramelisation reactions [57]. The coloured products of such post-harvest reactions can be a significant source of interference in absorbance-based assays.
Autofluorescence is observed in some plants (e.g. *Berberis vulgaris* L., *Humulus lupulus* L., *Matricaria chamomilla* L., and *Salvia officinalis* L. [28]) and endogenous natural products [58–61] across a range of wavelengths, which can interfere with fluorescence-based assays. For instance, anthranilates, alkaloids, coumarins, and stilbenes fluoresce in the blue-violet range (∼400–520 nm), flavones and flavonoids in the green-yellow range (∼520–590 nm), polycyclic aromatic quinones, tannins and some alkaloids in the orange range (∼590–635 nm), and chlorophyll, porphyrins and certain quinones fluoresce in the red–far red range (∼590–700 nm) [59, 60].
Poor solubility of some extracts and compounds in the assay buffers results in turbidity due to the presence of undissolved, suspended particles and may lead to inaccurate results. Light passing through a turbid medium is subject to multiple scattering and absorption events [62]. Therefore, turbidity interferes with spectrophotometric measurements by increasing absorbance and can result in misleadingly high readings [63]. Similarly, the absorbance and scattering of photons in a turbid medium can also distort fluorescence measurements [62].
The substrate can also be a source of error in enzyme assays. For example, unstable substrates may gradually decay to form their product. Contamination of the substrate with the chromogenic or fluorogenic product introduces a false signal and can cause a misleading increase in absorbance or fluorescence which is unrelated to enzyme activity [64].
In summary, assay interference due to sample colour, autofluorescence and turbidity can contribute to errors in measurements and hence affect the accuracy and reproducibility of results [47, 63]. Therefore, it is essential to minimise the effects of these interferences by blank-correcting raw data (RD) using appropriate sample and reagent blanks.
A sample blank contains an equal concentration of the test sample—whether it be an extract, an isolated compound, or a drug used as a control—without the enzyme or substrate. The absorbance (or fluorescence) of the sample blank quantifies the absorbance (or fluorescence) contributed by the colour, autofluorescence and/or turbidity of the sample. Subtracting the sample blank reading from the test well (which contains the enzyme + substrate + test sample) reading provides the value of the absorbance or fluorescence which is due to the enzymatic reaction; i.e. the ‘true’ value contributed by the reaction product.
The optical properties of different test samples vary widely. Therefore, to ensure the accuracy of results, it is necessary in multi-sample assays to include a sample blank for each sample, a “positive control blank” (the equivalent of a sample blank for the positive control), and a “negative control blank” (the equivalent for the negative control), and to correct each measurement using its respective blank. Sample blanks have been included in some published studies [45, 65, 66]; however, many others appear to overlook the use of a sample blank in their experiments.
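As a minimal illustration (our own sketch, not part of the published protocol), the sample-blank correction described above amounts to a single subtraction per well:

```python
def sample_blank_correct(test_reading: float, sample_blank_reading: float) -> float:
    """Remove the optical contribution of the sample itself (colour,
    autofluorescence, turbidity) from the test-well reading, leaving the
    signal attributable to the enzymatic reaction product."""
    return test_reading - sample_blank_reading

# Hypothetical readings: the test well (enzyme + substrate + sample) reads
# 0.85 AU, while the sample blank (sample + buffer, no enzyme or substrate)
# reads 0.20 AU, so 0.65 AU is attributable to the reaction product.
true_signal = sample_blank_correct(0.85, 0.20)
```

The same subtraction applies to fluorescence readings, with relative fluorescence units in place of absorbance units.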
A practical approach to minimise assay interference from substrate contamination and degradation is to use a reagent (substrate) blank [63]. Some published studies [65, 67] have included a substrate blank to eliminate any false signals due to colour (absorbance) or fluorescence of the substrate. However, as with the sample blanks, many studies appear to omit the use of a substrate blank. Instead, many report using either a buffer blank or a blank that is not defined in the publications.
Although the use of blanks in experiments is common practice, there is currently no consensus on which blanks should be used. Published studies vary widely with regard to which blanks are included in the calculation of blank-corrected data. Researchers have previously reported assays using a buffer blank [68–70], a sample blank [66, 71–74], a substrate blank [75, 76], or even an enzyme blank [77, 78]. Some have combined substrate and sample in a single blank to account for interference from both [65, 79, 80]. There are thus many approaches to blank-correction and no standardisation, which not only confuses researchers attempting these enzyme inhibition assays but also introduces bias and makes it difficult to compare results between studies. No study has investigated whether the type of blank used influences the calculated enzyme activity or enzyme inhibition results, which makes the selection of appropriate blanks problematic. In addition, despite the availability of a range of assay methods and countless publications involving α-glucosidase, α-amylase, and lipase inhibition assays, there is a lack of comprehensive published protocols that detail the assays step-by-step.
The aims of this study were to (1) provide comprehensive protocols for carrying out a colorimetric (absorbance) α-glucosidase inhibition assay, and fluorometric α-amylase and lipase inhibition assays in 96-well microplates; and (2) investigate whether using a buffer blank, substrate blank, sample blank, or a combination of blanks for blank-correction will affect the final, calculated enzyme inhibition results.
**Materials and methods**
**Plant samples**
Six medicinal plants with evidence of enzyme inhibitory activity against α-glucosidase (*Aegle marmelos* (L.) Corrêa [81, 82] and *Phyllanthus niruri* L. [83–86]), α-amylase (*Gardenia jasminoides* Ellis. [87] and *Nelumbo nucifera* Gaertn [87]), and lipase (*Camellia sinensis* L. [14, 88–90] and *Sophora japonica* L. [14]) were selected for the enzyme assays to determine how their pigmentation and the choice of blanks and blanking methods affect the final assessment of their bioactivity. These species were highlighted by a systematic review of promising bioactive plants and have previously been investigated by our research group for their antidiabetic and antiobesity properties; their ability to inhibit the respective enzymes has already been established in studies published by others [81–86, 88–90] and in our own previous work [14, 87].
*Aegle marmelos* was identified by local experts and dry leaves were collected in Kandy, Sri Lanka on 30/05/2017 by Dr Tien Huynh. Dried *P. niruri* leaf samples were obtained from a commercial herbal products supplier in Malaysia (identified and collected by the supplier in Jalan Jelebu, Malaysia on 20/02/2017). Herbarium samples can be found at the Janaki Ammal Herbarium, Indian Institute of Integrative Medicine (Accession No. 18696) for *A. marmelos* and the University of South Florida Herbarium, Institute for Systematic Botany (Accession No. 66964) for *P. niruri*.
Commercially prepared herbal extract granules (Nong’s, HK) of *S. japonica* flowers (Batch No. A1601428; 1203A1601428071119), *N. nucifera* leaves (Batch No. A1601450; 1356A1601450060919), *G. jasminoides* fruit (Batch No. A1600810; 1033A1600810050719) and *C. sinensis* (matcha) leaves were provided by GL Natural Healthcare Clinic, Strathmore, Victoria, Australia.
**Positive controls**
Acarbose, a known pharmacological inhibitor of α-glucosidase and pancreatic α-amylase was used as the positive control in the α-glucosidase and α-amylase assays [45, 67, 91]. Orlistat, a known inhibitor of pancreatic lipase was used as the positive control in the lipase assay [92, 93]. Acarbose (Cat. No. ACR459080010), and orlistat (Cat. No. O4139) were purchased from Thermo Fisher Scientific Australia and Sigma-Aldrich, respectively.
**Chemicals and reagents**
Analytical grade ethanol (Cat. No. 111727), and dimethyl sulfoxide (DMSO) (Cat. No. 102952) were purchased from Merck Millipore Australia. The EnzChek™ Ultra Amylase assay kit (Cat. No. E33651) was purchased from Life Technologies Australia. α-amylase from porcine pancreas (EC 3.2.1.1, Cat. No. 10102814001), lipase from porcine pancreas (Type VI-S, EC 3.1.1.3, Cat. No. L0382), 4-methylumbelliferyl oleate (4-MUO) (Cat. No. 75164), Tris base (Cat. No. 10708976001), intestinal acetone powders from rat (Cat. No. I1630), calcium chloride (CaCl₂), and sodium chloride (NaCl) were purchased from Sigma-Aldrich Australia. *p*-nitrophenyl α-D-glucopyranoside (pNPG) (Cat. No. ACR337150050), and phosphate-buffered saline (PBS) tablets (Gibco) (Cat. No. 18912014) were purchased from Thermo Fisher Scientific Australia. Anhydrous disodium hydrogen phosphate (Na₂HPO₄, AnalR®), and monobasic sodium dihydrogen phosphate monohydrate (NaH₂PO₄·H₂O, UNIVAR®) were obtained from AJAX Chemicals Australia and BDH Chemicals Australia, respectively. Milli-Q® water was obtained from a Millipore Milli-Q® water purification system. Distilled water was used in the preparation of buffers and reagents unless otherwise stated.
Corning® Costar® tissue culture-treated clear, flat bottom 96-well plates (Cat. No. CLS3516) were purchased from Sigma-Aldrich Australia. Nunc™ Nunclon™ delta treated, black polystyrene, flat bottom 96-well plates (Cat. No. 137101) were purchased from Thermo Fisher Scientific Australia.
**Methods**
An absorbance assay with pNPG as the substrate was used for the determination of α-glucosidase activity, and fluorescence assays using the substrates BODIPY®FL-DQ™ starch, and 4-MUO were used for the determination of α-amylase and lipase activity, respectively. Acarbose was the positive control in the α-glucosidase and α-amylase assays while orlistat was the positive control in the lipase assay. The corresponding assay buffer was used as the negative control in each of the assays.
**Absorbance assay: α-glucosidase**
The α-glucosidase assay was adapted from [91, 93] with modifications. The method is described in detail in the following section, “α-glucosidase assay step-by-step.”
**Extraction of plant samples**
The leaves of *Aegle marmelos* and *Phyllanthus niruri* were separated from the dried plant samples, homogenized to a fine powder using a mortar and pestle or an electric blender (KitchenAid®), and the homogenized samples were stored in the dark at room temperature (RT = 25 °C) with silica gel desiccant.
Approximately 50 mg of the powdered leaf samples were further homogenized with 500 μL of 100% ethanol in lysis tubes using an MP Biomedicals™ FastPrep-24™ instrument. Each sample was then extracted for 15 min at 37 °C and 900 rpm using an Eppendorf ThermoMixer®, and then centrifuged for 15 min at 12,700 rpm at RT. The supernatant (ethanol extract) was subsequently transferred into a new Eppendorf tube. The pellet was re-suspended in 500 μL of Milli-Q water and allowed to extract for 15 min at 37 °C and 900 rpm using an Eppendorf ThermoMixer®. The supernatant (water extract) was then combined with the ethanol extract that was previously generated. The combined ethanol/water extract (50:50) was centrifuged for 10 min at 12,700 rpm at RT until a clear supernatant was obtained. The extracts were concentrated by evaporating solvents under reduced pressure using a rotational vacuum concentrator (Martin Christ RVC 2-33 CDplus) operating at 1500 rpm at 30 °C for 6–8 h. The average yields of the concentrated extracts were 0.3 mg for *A. marmelos* and 0.8 mg for *P. niruri*. The concentrated extracts were stored in darkness at RT with silica gel desiccant until required.
**Preparation of buffers, reagent solutions, test samples and controls**
- **α-glucosidase assay buffer I** (for preparing extracts and acarbose) consisted of freshly prepared 100 mM sodium phosphate buffer with 2% DMSO (pH 6.9). All test extracts and acarbose (positive control) were prepared by dissolving in assay buffer I.
Comments: DMSO was used to aid the solubility of the extracts in the buffer [94, 95]. Preliminary testing confirmed that 2% DMSO in the sample diluent (1% final concentration in the well) had no significant effect on rat intestinal α-glucosidase activity. It is also possible to use PBS (pH adjusted to 6.9) as the incubation buffer in this assay without affecting rat intestinal α-glucosidase activity.
- **α-glucosidase assay buffer II** (for preparing assay reagents) consisted of a freshly prepared 100 mM sodium phosphate buffer without DMSO (pH 6.9). The enzyme and substrate solutions were prepared by dissolution in assay buffer II.
- **Test samples** were prepared from the concentrated plant extracts by dissolving and diluting the extracts in α-glucosidase assay buffer I to obtain working test sample solutions of 1 mg/mL. These were freshly prepared immediately before experiments and used on the same day.
- **Positive control, acarbose** powder was dissolved in α-glucosidase assay buffer I to prepare a 100 mg/mL stock solution which was stored short-term at 4 °C. A working solution of 1 mg/mL acarbose was prepared by diluting the stock solution in α-glucosidase assay buffer I immediately before experiments.
- **Negative control** (vehicle) was α-glucosidase assay buffer I.
- **Rat intestinal α-glucosidase enzyme solution** was prepared by sonicating 0.625 g of rat intestinal acetone powder in 50 mL of assay buffer II at RT, followed by vigorous mixing at 800 rpm for 20 min at RT. The mixture was centrifuged at 14,000 rpm for 30 min at 4 °C (Beckman Coulter Avanti J-25i high-speed centrifuge), and the supernatant (working enzyme solution) was stored in aliquots in the dark at 4 °C until use. The working α-glucosidase solution can be stored refrigerated for up to 3 months without any appreciable decline in enzyme activity. It cannot be frozen, as freezing inactivates the enzyme.
Comments: Note that the concentration of α-glucosidase in the enzyme solution is unknown. However, different volumes of the enzyme solution (10, 20 and 30 μL/well) were tested as part of the assay optimisation, and 30 μL/well was found to be optimal for the assay method described herein.
- **pNPG substrate solution** was prepared by dissolving and diluting pNPG powder in assay buffer II to obtain a working substrate solution of 5 mM.
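The working solutions above are prepared by dilution from stocks; the required stock volume follows from C₁V₁ = C₂V₂. A small helper (our own, hypothetical, not part of the published protocol) makes the arithmetic explicit:

```python
def stock_volume(c_stock: float, c_working: float, v_final: float) -> float:
    """Volume of stock solution (same units as v_final) required to prepare
    v_final of working solution, from C1 * V1 = C2 * V2."""
    if not 0 < c_working <= c_stock:
        raise ValueError("need 0 < working concentration <= stock concentration")
    return c_working * v_final / c_stock

# Example: diluting the 100 mg/mL acarbose stock to a 1 mg/mL working
# solution, 1000 uL final volume -> 10 uL of stock plus 990 uL of buffer.
v_stock = stock_volume(100.0, 1.0, 1000.0)  # -> 10.0
```

The same arithmetic applies to the serial dilutions of the enzyme and substrate solutions in the assays that follow.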
**α-glucosidase assay step-by-step**
The absorbance-based α-glucosidase assay was performed in Corning® Costar® tissue culture-treated clear, flat-bottom 96-well plates. The assay method is described below including volumes of all test samples, controls, and assay reagents and a suggested plate layout is given in Fig. 4.
**Step 1:** Set up 96-well plate with test samples, controls, and blanks.
Firstly, assay buffer I was added to the wells for the negative control (50 μL/well) and the various blanks. Then the test samples and acarbose (50 μL/well of 1 mg/mL, final concentration 0.5 mg/mL) were added to their respective test wells and corresponding blank wells.
Comments: Assay buffer I was added to the sample blank wells and control blank wells to make their volumes and concentrations equal to the volumes and concentrations in the test wells. It was not necessary to add the substrate to the substrate blanks in Step 1 because, unlike the samples and controls which were all added at the start of the assay, the substrate was only added in Step 3 after pre-incubation. Temperature variation between outer and inner wells and increased evaporation in the outermost wells can give rise to edge effects which contribute to the degradation of assay results [96, 97]. To avoid this, only the central wells were used in the assay and the peripheral wells were excluded.
**Step 2:** Add enzyme and pre-incubate with test samples.
Using a multichannel pipette, the enzyme solution (30 μL/well) was added to all test wells and mixed by gentle pipetting. The plate was incubated in the dark at 37 °C for 10 min.
Comments: ‘Test wells’ refers to the wells containing enzyme + substrate and a test sample or control. Care must be taken not to create air bubbles and avoid foaming during mixing as aeration can denature the enzyme [98, 99]. In addition, entrapped air bubbles refract light and can therefore cause interference with absorbance measurements [47]. A temperature of 37 °C was chosen for all incubation steps in the α-glucosidase assay, as being close to body temperature, it is physiologically relevant and is optimal for mammalian enzyme activity [100].
**Step 3:** Add substrate to start reaction and incubate plate.
Using a multi-channel pipette, the substrate solution (20 μL/well of 5 mM, final concentration 1 mM) was added to all test wells and the substrate blank wells and mixed by gentle pipetting. The plate was incubated in the dark at 37 °C for 20 min.
Comments: Substrate was not added to the sample blanks, control blanks, or the buffer blanks. The plate was incubated in the CLARIOstar® microplate reader (BMG LABTECH) set at 37 °C.

**Step 4:** Measure absorbance using plate reader.
Absorbance was measured at 405 nm using the microplate reader set to 37 °C. The plate was read from the top.
**Fluorescence assays: α-amylase and lipase**
**Preparation of herbal extract granules for testing**
To prepare the samples for testing, the herbal extract granules of *Gardenia jasminoides* and *Nelumbo nucifera* (for the α-amylase assay), and the granules of *Camellia sinensis* and *Sophora japonica* (for the lipase assay) were ground to a fine powder using a mortar and pestle, and 20 mg of the ground granules were dissolved in 4 mL of distilled water containing 2% DMSO to obtain 5 mg/mL stock solutions. These were vortexed for 5 min and sonicated for 10 min to aid solubilisation, and then filtered through a Millex-HP 0.45 μm filter (Millipore). The test sample stock solutions thus prepared were diluted in assay buffer I to prepare working solutions as described below.
**α-amylase assay**
**Preparation of buffers, reagent solutions, test samples and controls**
- **α-amylase assay buffer I** (for diluting test samples and preparing the positive control acarbose) consisted of 10 mM sodium phosphate, 2.68 mM KCl, 140 mM NaCl and 1 mM CaCl₂ (pH 6.9) and 2% DMSO.
Comments: DMSO was used to aid the solubility of the extracts in the buffer [94, 95]. Preliminary testing confirmed that 2% DMSO in the sample diluent (1% final concentration in the well) had no significant effect on α-amylase activity in this assay.
- **α-amylase assay buffer II** (for diluting substrate and enzyme) consisted of 10 mM sodium phosphate, 2.68 mM KCl, 140 mM NaCl and 1 mM CaCl₂ (pH 6.9).
- **Substrate buffer** (solvent to dissolve the substrate DQ™ starch), provided with the EnzChek™ Ultra Amylase assay kit, consisted of 50 mM sodium acetate buffer (pH 4.0).
- **Test samples** (20 mg of herbal extract granules) were dissolved in distilled water with 2% DMSO (4 mL) to prepare a 5 mg/mL stock solution and diluted in α-amylase assay buffer I to obtain a working solution of 1.2 mg/mL (final concentration in well = 0.3 mg/mL). Samples were freshly prepared before experiments and used immediately.
- **Positive control acarbose** (20 mg) was dissolved in α-amylase assay buffer I (4 mL) to obtain a 5 mg/mL stock solution and diluted to obtain a solution of 1.2 mg/mL (final concentration of 0.3 mg/mL) immediately before experiments.
- **Negative control** (vehicle) was α-amylase assay buffer I.
- **Porcine pancreatic α-amylase** ($10^7$ mU/mL) was serially diluted in α-amylase assay buffer II to obtain a working enzyme solution of 48 mU/mL.
- **BODIPY®FL-DQ™ starch substrate** was prepared by dissolving first in substrate buffer, then in α-amylase assay buffer II to obtain a stock solution of 1 mg/mL as per the manufacturer’s instructions. A working substrate solution of 200 μg/mL was prepared by serial dilution.
**α-amylase assay step-by-step**
A fluorescence assay for determining porcine pancreatic α-amylase activity using DQ™ starch as a substrate was performed in Nunc™ Nunclon™ delta treated, black polystyrene, flat-bottom 96-well plates using a method adapted from [45].
**Step 1:** Set up 96-well plate with test samples, controls, and blanks.
Firstly, α-amylase assay buffer I was added to the negative control wells (25 μL/well) and the various blanks. Then the test samples and acarbose (25 μL/well of 1.2 mg/mL, final concentration 0.3 mg/mL) were added to their respective test wells and their corresponding sample blank or positive control blank wells.
Comments: Assay buffer I was added to the sample blank wells and control blank wells to make their volumes and concentrations equal to the volumes and concentrations of the test wells. Assay buffer I was also the negative control (vehicle). It was not necessary to add substrate to the substrate blanks in Step 1 because, unlike the samples and controls which were all added at the start of the assay, the substrate was only introduced in Step 3 after pre-incubation. Temperature variation between outer and inner wells and increased evaporation in the outermost wells can give rise to edge effects which contribute to the degradation of assay results [96, 97]. To avoid this, only the central wells were used in the assay and the peripheral wells were excluded.
**Step 2:** Add enzyme and pre-incubate with test samples.
Using a multichannel pipette, the amylase solution (25 μL/well of 48 mU/mL, final concentration 12 mU/mL) was added to all test wells and mixed by gentle pipetting. Assay buffer I (25 μL/well) was added to all test wells to bring up the volume to 100 μL. The plate was incubated in darkness at RT for 20 min.
Comment: ‘Test wells’ refer to the wells containing enzyme + substrate and a test sample or control. Note that the well volumes were adjusted to 100 μL with the addition of buffer to keep the total reaction mixture volume consistent across all three assays, for convenience in calculations, and to minimise the effects of evaporation during incubation. This is not strictly necessary; adjusting the volumes (and concentrations) of enzyme, substrate, and test substances to a total of 100 μL is merely a suggestion.
**Step 3:** Add substrate to start the reaction and incubate plate.
Using a multichannel pipette, the substrate solution (25 μL of 200 μg/mL, final concentration 50 μg/mL) was added to all test wells and the substrate blank wells and mixed thoroughly by gentle pipetting. Next, the plate was incubated in the dark at RT for 20 min.
Comments: Substrate was not added to the sample blanks, control blanks, or the buffer blanks.
**Step 4:** Measure fluorescence using a plate reader.
Fluorescence was measured with the CLARIOstar® microplate reader (BMG LABTECH) at excitation and emission wavelengths of 485 nm and 530 nm, respectively, over 30 min.
**Lipase assay**
*Preparation of buffers, reagent solutions, test samples and controls*
- **Lipase assay buffer I** (for diluting test samples and preparing the positive control orlistat, and the substrate) consisted of 13 mM Tris-HCl, 75 mM NaCl, 1.3 mM CaCl₂ (pH 8.0) and 2% DMSO.
Comments: DMSO was used to aid the solubility of the extracts in the buffer [94, 95]. Preliminary testing confirmed that the 2% DMSO carried over from the sample diluent had no significant effect on lipase activity in this assay.
- **Lipase assay buffer II** (for preparing porcine pancreatic lipase enzyme) consisted of 13 mM Tris-HCl, 75 mM NaCl and 1.3 mM CaCl₂ (pH 8.0).
- **Test samples** (20 mg herbal extract granules) were dissolved in 4 mL distilled water containing 2% DMSO to prepare stock solutions of 5 mg/mL and diluted in lipase assay buffer I to 0.4 mg/mL (final concentration of 0.1 mg/mL). Samples were freshly prepared before experiments and used immediately.
- **Positive control orlistat** (25 mg) was dissolved in lipase assay buffer I (5 mL) to obtain a 5 mg/mL stock solution and diluted in lipase assay buffer II to 0.4 mg/mL (final concentration of 0.1 mg/mL) immediately before experiments.
- **Negative control (vehicle)** was the lipase assay buffer I.
- **Porcine pancreatic lipase enzyme** powder was dissolved in lipase assay buffer II to obtain a stock solution of $3.5 \times 10^4$ U/mL. Subsequently, the enzyme solution was serially diluted to obtain a working solution of 50 U/mL.
- **4-MUO substrate solution** was prepared by dissolving in lipase assay buffer I to obtain a working substrate solution of 0.1 mM.
**Lipase assay step-by-step**
The assay was adapted from methods previously described [71, 101]. The volumes of test samples, controls, and assay buffer are illustrated below using an example plate layout (Fig. 5).
**Step 1:** Set up 96-well plate with test samples, controls, and blanks.
Firstly, lipase assay buffer I was added to the negative control wells (25 μL/well) and the various blanks. Then the test samples and orlistat (25 μL/well of 0.4 mg/mL, final concentration 0.1 mg/mL) were added to their respective test wells and their corresponding sample blanks or positive control blanks.
Comments: Lipase assay buffer I was added to the sample blank wells and control blank wells to make their volumes and concentrations equal to the volumes and concentrations of the test wells. It was not necessary to add the substrate to the substrate blanks in Step 1 because, unlike the samples and controls which were all added at the start of the assay, the substrate was only introduced in Step 3 following pre-incubation. Temperature variation between outer and inner wells and increased evaporation in the outermost wells can give rise to edge effects which contribute to the degradation of assay results [96, 97]. To avoid this, only the central wells were used in the assay and the peripheral wells were excluded.
**Step 2:** Add enzyme and pre-incubate with test samples.
Using a multi-channel pipette, lipase solution (25 μL/well of the 50 U/mL working solution, final concentration 12.5 U/mL) was added to all test wells and mixed by gentle pipetting. Assay buffer I (25 μL/well) was added to all test wells to bring the volume up to 100 μL. The plate was incubated in darkness at RT for 5 min.
Comment: ‘Test wells’ refer to the wells containing enzyme + substrate and a test sample or control.
**Step 3:** Add substrate to start the reaction and incubate plate.
Using a multi-channel pipette, the substrate solution (50 µL/well of 0.1 mM 4-MUO) was added to all test wells and the substrate blanks and mixed thoroughly by gentle pipetting. Next, the plate was incubated in the dark at RT for 30 min.
Comments: Substrate was not added to the sample blanks, control blanks, or the buffer blanks.
**Step 4:** Measure fluorescence using plate reader.
Fluorescence was measured with the CLARIOstar® microplate reader (BMG LABTECH) at excitation and emission wavelengths of 355 nm and 460 nm, respectively.
Comments: The lipase substrate 4-MUO had poor solubility in aqueous-based buffer. DMSO increased solubility, however, there was still some turbidity which can interfere with fluorescence measurements [62]. A possible solution would be to use a stopping reagent, centrifuge the plate, and transfer the supernatant to a new plate before fluorescence measurement.
**Calculation of results**
**Blank-correction and calculation of enzyme activity**
For all three assays, enzyme activity was calculated using blank-corrected raw data. Enzyme activity was expressed as a percentage of the negative control which was normalised to 100% activity.
\[
\text{Percentage enzyme activity} = \frac{\text{Blank-corrected absorbance (or fluorescence) of test well}}{\text{Blank-corrected absorbance (or fluorescence) of negative control}} \times 100
\]
For comparison, raw data was blank-corrected in six different ways (Table 1) using a combination of buffer-only, substrate, and sample blanks:
1. Raw data corrected by subtracting only the buffer-only blank: RD – BB.
2. Raw data corrected using only the substrate blank: RD – SuB.
3. Raw data corrected using dedicated sample blanks for each sample, a positive control blank, and a negative control blank (“test substance blanks” for the samples and controls): RD – SaB.
4. Raw data corrected using both the buffer-only blank, and individual sample or control blanks: RD – (BB + SaB).
5. Raw data corrected using both the substrate blank, and individual sample or control blanks: RD – (SuB + SaB).
6. Raw data corrected with the difference in absorbance or fluorescence between the substrate blank and the buffer-only blank, and the individual sample and control blanks: RD – [(SuB – BB) + SaB].
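As an illustrative sketch (our own code, not part of the published protocol; the function and variable names are ours), the six corrections can be written as a single helper that takes the raw reading and the three blank readings:

```python
# Illustrative sketch of the six blank-correction methods (names are ours,
# not from the original study). All values are plate-reader readings
# (absorbance or fluorescence) for matched well positions.
def blank_correct(rd, bb, sub, sab, method):
    """Blank-correct a raw test-well reading.

    rd  -- raw data (test well)
    bb  -- buffer-only blank
    sub -- substrate blank (substrate + buffer)
    sab -- sample/control blank (sample + buffer)
    """
    formulas = {
        1: rd - bb,                   # RD - BB
        2: rd - sub,                  # RD - SuB
        3: rd - sab,                  # RD - SaB
        4: rd - (bb + sab),           # RD - (BB + SaB)
        5: rd - (sub + sab),          # RD - (SuB + SaB)
        6: rd - ((sub - bb) + sab),   # RD - [(SuB - BB) + SaB]
    }
    return formulas[method]
```

With the worked values from Table 2 (RD = 0.750, BB = 0.050, SuB = 0.150, SaB = 0.250), methods 1–6 return 0.700, 0.600, 0.500, 0.450, 0.350, and 0.400, respectively.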
The α-glucosidase inhibition was calculated from the blank-corrected absorbance data as a percentage of the negative (uninhibited) control using the following formula:
\[
\text{Percentage inhibition} = \left(1 - \frac{\text{Blank-corrected Absorbance of test well}}{\text{Blank-corrected Absorbance of negative control}}\right) \times 100
\]
### Table 1 Formulas for blank-correction of raw absorbance data
| Method | α-glucosidase assay | α-amylase and lipase assays |
|--------|---------------------|------------------------------|
| 1 | $A_{\text{test}} - A_{\text{buffer blank}}$ | $F_{\text{test}} - F_{\text{buffer blank}}$ |
| 2 | $A_{\text{test}} - A_{\text{substrate blank}}$ | $F_{\text{test}} - F_{\text{substrate blank}}$ |
| 3 | $A_{\text{test}} - A_{\text{sample blank}}$ | $F_{\text{test}} - F_{\text{sample blank}}$ |
| 4 | $A_{\text{test}} - (A_{\text{buffer blank}} + A_{\text{sample blank}})$ | $F_{\text{test}} - (F_{\text{buffer blank}} + F_{\text{sample blank}})$ |
| 5 | $A_{\text{test}} - (A_{\text{substrate blank}} + A_{\text{sample blank}})$ | $F_{\text{test}} - (F_{\text{substrate blank}} + F_{\text{sample blank}})$ |
| 6 | $A_{\text{test}} - [(A_{\text{substrate blank}} - A_{\text{buffer blank}}) + A_{\text{sample blank}}]$ | $F_{\text{test}} - [(F_{\text{substrate blank}} - F_{\text{buffer blank}}) + F_{\text{sample blank}}]$ |
$A$ and $F$ refer to absorbance and fluorescence, respectively.
- $A_{\text{test}} =$ the absorbance of treatment wells (enzyme + substrate + plant extract), positive control wells (enzyme + substrate + positive control acarbose) and negative control wells (enzyme + substrate + vehicle)
- $A_{\text{buffer blank}} =$ the absorbance of the buffer-only blank used to determine the absorbance due only to the buffer
- $A_{\text{substrate blank}} =$ the absorbance of the substrate blank (substrate + buffer)
- $A_{\text{sample blank}} =$ the absorbance of the sample blank, positive control blank, or negative control blank
- $F_{\text{test}} =$ the fluorescence of treatment wells (enzyme + substrate + plant extract), positive control wells (enzyme + substrate + positive control acarbose) and negative control wells (enzyme + substrate + vehicle)
- $F_{\text{buffer blank}} =$ the fluorescence of the buffer-only blank used to determine the absorbance due only to the buffer
- $F_{\text{substrate blank}} =$ the fluorescence of the substrate blank (substrate + buffer)
- $F_{\text{sample blank}} =$ the fluorescence of the sample blank, positive control blank, or negative control blank (Please see note below)
Note on sample blanks: In this study, each “test sample” (i.e. the extracts, the positive control (acarbose or orlistat), and the negative control (vehicle)) was given its own sample blank. The sample blank for the positive control, also referred to as the “positive control blank”, contained buffer + positive control. The sample blank for the negative control, also referred to as the “negative control blank”, contained buffer + negative control (vehicle). In methods 3–6, the positive and negative controls were blank-corrected using their respective positive and negative control “sample” blanks.
The α-amylase and lipase activity and inhibition were calculated from the blank-corrected fluorescence data as a percentage of the negative (uninhibited) control using the following formulas:
**Percentage enzyme activity**
$$= \frac{\text{Blank-corrected Fluorescence of test well}}{\text{Blank-corrected Fluorescence of negative control}} \times 100$$
**Percentage enzyme inhibition**
$$= \left( 1 - \frac{\text{Blank-corrected Fluorescence of test well}}{\text{Blank-corrected Fluorescence of negative control}} \right) \times 100$$
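The activity and inhibition percentages are complements of one another; a minimal sketch (our own illustrative code, not from the protocol):

```python
def percent_activity(corrected_test, corrected_neg_control):
    # Blank-corrected test-well reading expressed as a percentage of
    # the blank-corrected negative (uninhibited) control.
    return corrected_test / corrected_neg_control * 100

def percent_inhibition(corrected_test, corrected_neg_control):
    # Inhibition is simply 100 minus activity.
    return (1 - corrected_test / corrected_neg_control) * 100
```

For example, a corrected reading of 0.350 against a negative control of 0.700 gives 50% activity and 50% inhibition; the two always sum to 100.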
---
**Results and discussion**
In all three assays, the positive controls—acarbose for α-glucosidase (Fig. 6) and α-amylase (Fig. 7), and orlistat for lipase (Fig. 8)—caused a significant reduction in enzyme activity compared with the uninhibited control. This was expected; it validated the assay methods and confirmed the suitability of the assays for their intended purpose [102].
Blank-correction using only the buffer blank (RD – BB) yielded high enzyme activity, and therefore low enzyme inhibition, in all three assays (Figs. 6, 7 and 8). In
the $\alpha$-glucosidase assay, RD – BB exhibited the highest enzyme activity value for acarbose, and the second highest enzyme activity values for *A. marmelos* and *P. niruri*. Similarly, in the lipase assay, RD – BB yielded the highest enzyme activity for *C. sinensis* and *S. japonica*, and the second highest for orlistat. In the $\alpha$-amylase assay, RD – BB gave very high enzyme activity, however, these enzyme activity values were almost identical to RD – SaB and RD – (BB + SaB).
Buffer blanks are routinely used for baseline correction and account for the background absorbance or fluorescence of the assay buffer. However, if no substrate or sample blanks are included, this method (RD – BB) does not account for interference from the substrate or samples. Therefore, in RD – BB, the background absorbance (or fluorescence) of the substrate and the samples, which added to the test-well measurements, was left uncorrected and produced misleadingly high enzyme activity results. This is illustrated in Fig. 9, which shows the components contributing to the total absorbance (or fluorescence) of a test well, buffer blank, sample blank, and substrate blank. Table 2 provides an example of how the different blanking methods used in this study changed the blank-corrected absorbance (or fluorescence) of the test well.
In the $\alpha$-glucosidase assay (Fig. 6), enzyme inhibition by the two plant extracts *A. marmelos* and *P. niruri* was only observed when a sample blank was included [RD – SaB, RD – (BB + SaB), RD – (SuB + SaB), and RD – (Su + SaB)] to offset the interference due to the colour of the extracts (Fig. 10). When results were calculated without a sample blank (RD – BB and RD – SuB), *A. marmelos* and *P. niruri* appeared to promote enzyme activity. This apparent increase can be attributed to the colour of the two extracts contributing an additive effect to the absorbance of the test wells, resulting in an over-estimation of enzyme activity. Interference due to sample colour may be minimised by diluting the samples; however, dilution also lowers the inhibitor concentration, and screening samples at concentrations too low for inhibitory activity to be detected increases the risk of missing potential hits.
In contrast to the plant extracts, acarbose inhibited $\alpha$-glucosidase significantly regardless of whether or not a sample blank was included in the blank-correction. Acarbose was colourless and dissolved completely in $\alpha$-glucosidase assay buffer I to give a clear, colourless solution (Fig. 10). Hence, any spectral interference due to the background absorbance of acarbose was negligible and not correcting this background absorbance did not affect the final results as much as the highly coloured extracts which had high background absorbances (sample blank absorbance).
Figures 7 and 8 illustrate the effects of the six blanking methods on the inhibition of $\alpha$-amylase and lipase. Regardless of the blanking method used, all test samples and positive controls (acarbose for $\alpha$-amylase, and orlistat for lipase) showed significant inhibition of their respective enzymes. This contrasted with the absorbance-based $\alpha$-glucosidase assay, where omitting a sample blank gave enzyme activity > 100% for the two plant extracts. The reason was that the coloured extracts used in the $\alpha$-glucosidase assay had high background absorbance values relative to the absorbance of the test wells and therefore contributed a larger error to the final results; correcting this large error by subtracting the sample blanks caused a correspondingly large reduction in the final results. In contrast, the background fluorescence of the samples in the amylase and lipase assays was much smaller relative to the fluorescence of the test wells and therefore contributed only a small error to the final results.
Although the sample blank did not influence the enzyme activity values in the fluorescence assays, the inclusion or omission of a substrate blank significantly impacted the final enzyme activity values in both the $\alpha$-amylase (Fig. 7) and lipase (Fig. 8) assays. The enzyme activity results within each group (each of the positive controls and plant extracts) separated into two tiers depending on whether a substrate blank had been included in the blank-correction: the enzyme activity of all groups was significantly higher when a substrate
blank was not included [RD – BB, RD – SaB, and RD – (BB + SaB)] than when a substrate blank was included [RD – SuB, RD – (SuB + SaB), and RD – (Su + SaB)]. This was attributed to the high background autofluorescence of the substrate (157,700 for the α-amylase substrate and 35,632 for the lipase substrate), which contributed a proportionally larger error to the final results (e.g. for *G. jasminoides*, $F_{\text{test}} = 193{,}222$ and $F_{\text{sample blank}} = 179$ vs $F_{\text{substrate blank}} = 171{,}765$). Subtracting the high background autofluorescence of the substrate (substrate blank) proportionally reduced the blank-corrected fluorescence of the test wells (193,222 – 171,765 = 21,457 when the substrate blank was included versus 193,222 – 179 = 193,043 when only the sample blank was included) and therefore reduced the final calculated enzyme activity values. In the α-glucosidase assay, the substrate had a comparatively smaller effect on the enzyme activity results because the background absorbance due to the substrate was low compared to the raw absorbance of the test wells. These results indicate that it is crucial to include a substrate blank to avoid the risk of missing potential enzyme inhibitory activity in drug candidates due to the overestimation of enzyme activity.
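The tier effect can be checked directly from the reported raw values for *G. jasminoides*; the snippet below simply recomputes the two corrections from the raw values quoted in the text:

```python
# Raw fluorescence values reported in the text for G. jasminoides.
f_test = 193_222             # test well
f_sample_blank = 179         # sample blank (sample + buffer)
f_substrate_blank = 171_765  # substrate blank (substrate + buffer)

with_substrate_blank = f_test - f_substrate_blank  # substrate blank included
without_substrate_blank = f_test - f_sample_blank  # sample blank only

# Omitting the substrate blank leaves the corrected signal roughly
# nine-fold larger, which inflates the calculated enzyme activity.
ratio = without_substrate_blank / with_substrate_blank
```

Here `with_substrate_blank` is 21,457 while `without_substrate_blank` is 193,043 — about a nine-fold difference in the value that feeds the activity formula.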
Subtracting two blanks [RD – (BB + SaB) and RD – (SuB + SaB)] generally resulted in lower enzyme activity than the methods that subtracted only one blank. Note that both the substrate blank and the sample blank contain a certain volume of buffer (Fig. 9). Therefore, the absorbance or fluorescence of these blanks was, in part, due to the buffer, and only a proportion was due to the substrate or sample. If two blanks were subtracted during blank-correction, as with RD – (BB + SaB) and RD – (SuB + SaB), this “hidden” background due to the buffer would be subtracted twice, resulting in “double-blanking.” In the literature, some studies have combined sample and substrate in a single blank [65, 79, 80]. This approach prevents double-blanking the buffer because only one blank (containing buffer + sample + substrate) is subtracted from the raw data. However, it must be carried out with caution, as unexpected results may occur when the substrate interacts with unknown constituents in the sample, especially in crude natural product extracts, which are notoriously complex biological matrices. Another drawback is that the substrate must be added to each sample blank, which requires larger amounts of substrate per assay and adds to its cost.
**Table 2** Effect of blank-correction methods on the test well absorbance values in Fig. 9
| Method | Formula | Calculation | Blank-corrected absorbance |
|--------|---------|-------------|----------------------------|
| 1 | RD — BB | $0.750 - 0.050$ | 0.700 |
| 2 | RD — SuB | $0.750 - 0.150$ | 0.600 |
| 3 | RD — SaB | $0.750 - 0.250$ | 0.500 |
| 4 | RD — (BB + SaB) | $0.750 - (0.050 + 0.250)$ | 0.450 |
| 5 | RD — (SuB + SaB) | $0.750 - (0.150 + 0.250)$ | 0.350 |
| 6 | RD — [(SuB — BB) + SaB] | $0.750 - [(0.150 - 0.050) + 0.250]$ | 0.400 |
RD: raw data ($A_{test}$); BB: buffer blank; SuB: substrate blank; SaB: sample blank
In method six [RD − (Su + SaB)], the buffer blank was subtracted from the substrate blank to obtain the absorbance or fluorescence contributed by the substrate only (Su = SuB − BB), and this value was subtracted from the raw data along with the sample blank. This corrected the interference due to both the sample and the substrate without double-blanking the buffer, and without adding substrate to each sample blank (avoiding any unexpected artefacts that may arise from doing so). Therefore, RD − (Su + SaB) was the most appropriate blanking method for all three assays.
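The double-blanking argument can be verified by modelling each well as a sum of components, in the spirit of Fig. 9 (the component values below are hypothetical, chosen to reproduce Table 2's totals):

```python
# Hypothetical component contributions (absorbance units).
buffer, substrate, sample, signal = 0.050, 0.100, 0.200, 0.400

rd = buffer + substrate + sample + signal  # test well reading (0.750)
bb = buffer                                # buffer blank (0.050)
sub = buffer + substrate                   # substrate blank (0.150)
sab = buffer + sample                      # sample blank (0.250)

method5 = rd - (sub + sab)         # RD - (SuB + SaB): buffer removed twice
method6 = rd - ((sub - bb) + sab)  # RD - [(SuB - BB) + SaB]: buffer removed once

# method5 under-shoots by exactly one buffer contribution,
# whereas method6 recovers the enzymatic signal itself.
```

Method 5 returns 0.350 (the signal minus one extra buffer term), while method 6 returns 0.400, the true signal.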
**Conclusion**
This is the first study to investigate the effects of using different blanks and blanking methods on the results of enzyme assays, and to provide comprehensive, step-by-step protocols for α-amylase, α-glucosidase, and lipase-inhibition assays that can be performed in 96-well format, in a simple, fast, and resource-efficient manner, with clear instructions for blank-correction and calculation of results.
The type of blanks and the blank-correction method employed had a significant effect on the final calculated enzyme activity (and inhibition) values in all three assays. The results highlighted the importance of including a sample blank when testing highly coloured samples, the relevance of a substrate blank, and the need to avoid errors due to unintended double-blanking. Depending on the blank(s) used in the correction of raw data, the final calculated enzyme activity can be either over-estimated or under-estimated. Not accounting for interference due to the colour of natural product extracts can produce misleadingly high enzyme activity values that underestimate the bioactivity of the target sample. Potential enzyme inhibitors can therefore be inadvertently overlooked, resulting in missed opportunities in the drug discovery process.
The variation in the final calculated results demonstrated that the same set of raw data can produce different results depending on the blank-correction method. This emphasises the importance of standardising the use of blanks and the reporting of blank-correction procedures in published literature in order to enhance the reproducibility of results and prevent misleading results in not only enzyme assays, but also in other spectrometry-based assays involving natural product extracts.
Of the methods tested, the sixth blanking method [RD − (Su + SaB)] adequately accounted for interference due to the background absorbance/fluorescence of both the substrate and the sample without double-blanking, while using lower volumes of substrate, and is therefore recommended for the blank-correction of raw data in enzyme assays.
**Abbreviations**
4-MU: 4-Methylumbelliferone; 4-MUO: 4-Methylumbelliferyl oleate; $A$: Absorbance; BB: Buffer blank; BODIPY®: Boron-dipyrromethene; BODIPY®FL-DQ™: Boron-dipyrromethene fluorescent dye quenching; Cat. No.: Catalogue number; DMSO: Dimethyl sulfoxide; DQ™: Dye quenching; DQ™ starch: Dye quenching starch; EC: Enzyme Commission number; $F$: Fluorescence; PBS: Phosphate-buffered saline; pNPG: $p$-Nitrophenyl-$\alpha$-$D$-glucopyranoside; RD: Raw data; RT: Room temperature (25 °C); SaB: Sample blank; SuB: Substrate blank.
**Acknowledgements**
Dr Daniel Anthony Dias gratefully acknowledges The American Society of Pharmacognosy for being awarded a Research Starter Grant and support from The Product Makers Pty Ltd (TPM Bioactives Division), Melbourne, Australia.
**Authors’ contributions**
CL and SL designed and carried out the experiments; CL conducted data processing and statistical analyses and created all graphs and figures; MF, GL, HG, TH, and DD provided materials for the experiments and assisted with experimental design; CL and SL drafted and wrote the manuscript; MF, GL, HG, TH, and DD commented and provided critical feedback on the manuscript; TH and DD revised the text and structure and outlined it several times together with CL and SL. All authors read and approved the final manuscript.
**Funding**
The American Society of Pharmacognosy and The Product Makers Pty Ltd (TPM Bioactives Division), Melbourne, Australia.
**Availability of data and materials**
The datasets used during the current study are available from the corresponding author on reasonable request.
**Ethics approval and consent to participate**
Not applicable.
**Consent for publication**
Not applicable.
**Competing interests**
The authors declare no competing interests.
**Author details**
1 School of Health and Biomedical Sciences, Discipline of Laboratory Medicine, RMIT University, Bundoora 3083, Australia. 2 TPM Bioactives Division, The Product Makers Pty Ltd, Melbourne, Australia. 3 School of Life Sciences, La Trobe University, Melbourne, Australia. 4 School of Science, RMIT University, Bundoora 3083, Australia.
References
1. Hopkins AL, Groom CR. The druggable genome. Nat Rev Drug Discov. 2002;1(9):727–30.
2. Copeland RA. Why enzymes as drug targets? Evaluation of enzyme inhibitors in drug discovery: a guide for medicinal chemists and pharmacologists. 2nd ed. Hoboken: Wiley; 2013. p. 1–23.
3. Copeland RA, Harpel MR, Tummino PJ. Targeting enzyme inhibitors in drug discovery. Expert Opin Ther Targets. 2007;11(7):967–78.
4. Seiler N. Thirty years of polyamine-related approaches to cancer therapy. Retrospect and prospect. Part I. Selective enzyme inhibitors. Curr Drug Targets. 2003;4(7):537–64.
5. Scatena R, Bottino P, Pontoglio A, Mastrototaro L, Giardina B. Glycolytic enzyme inhibitors in cancer treatment. Expert Opin Investig Drugs. 2008;17(10):1533–45.
6. McFarlane SJ, Kumar A, Sowers JR. Mechanisms by which angiotensin-converting enzyme inhibitors prevent diabetes and cardiovascular disease. Am J Cardiol. 2003;91(12):30–7.
7. Messerli FH, Bangalore S, Bavishi C, Rimoldi SF. Angiotensin-converting enzyme inhibitors in hypertension. J Am Coll Cardiol. 2018;71(13):1474–82.
8. Chiasson JL, Josse RG, Gomis R, Hanefeld M, Karasik A, Laakso M. Acarbose for prevention of type 2 diabetes mellitus: the STOP-NIDDM randomised trial. Lancet. 2002;359(9323):2072–7.
9. Van de Laar FA, Lucassen PLBJ, Akkermans RP, Van de Lisdonk EH, Rutten GEHW, Van Weel C. Alpha glucosidase inhibitors for type 2 diabetes mellitus. Cochrane Database Syst Rev. 2005;CD003639.
10. Youdim MBH, Edmondson D, Tipton KF. The therapeutic potential of monoamine oxidase inhibitors. Nat Rev Neurosci. 2006;7(4):295.
11. Finberg JPM, Rabey JM. Inhibitors of MAO-A and MAO-B in Psychiatry and Neurology. Front Pharmacol. 2016;7:340.
12. Mehta M, Adem A, Sabbagh M. New acetylcholinesterase inhibitors for Alzheimer’s disease. Int J Alzheimers Dis. 2012;2012:728983.
13. Rolinski M, Fox C, Maidment I, McShane R. Cholinesterase inhibitors for dementia with Lewy bodies, Parkinson’s disease dementia and cognitive impairment in Parkinson’s disease. Cochrane Database Syst Rev. 2012;2012(3):CD006504.
14. Luo S, Gill HJ, Dias DA, Li M, Hung A, Nguyen LT, et al. The inhibitory effects of an eight-herb formula (RCM-107) on pancreatic lipase: enzymatic, HPTLC profiling and in silico approaches. Heliyon. 2019;5(9):e02453.
15. Tucci SA, Boyand EJ, Halford JCG. The role of lipid and carbohydrate digestive enzyme inhibitors in the management of obesity. Diabetes Metab Syndr Obes. 2010;3:125–43.
16. Agarwal P, Gupta R. Alpha-amylase inhibition can treat diabetes mellitus. Res Rev J Med Health Sci. 2016;54:1–8.
17. Sellami L, Louati H, Karnoun J, Kchaou A, Damak M, Gargouri Y. Inhibition of pancreatic lipase and amylase by extracts of different spices and plants. Int J Food Sci Nutr. 2017;68(3):313–20.
18. Buchholz T, Melzig MF. Medicinal plants traditionally used for treatment of obesity and diabetes mellitus—screening for pancreatic lipase and α-amylase inhibition. Phytother Res. 2016;30(2):260–6.
19. Rajan L, Palaniswamy D, Mohankumar SK. Targeting obesity with plant-derived pancreatic lipase inhibitors: a comprehensive review. Pharmacol Res. 2020;155:104681.
20. Tundis R, Loizzo MR. Natural products as alpha-amylase and alpha-glucosidase inhibitors and their hypoglycaemic potential in the treatment of diabetes: an update. Mini Rev Med Chem. 2010;10:315–31.
21. Dias DA, Urban S, Roessner U. A historical overview of natural products in drug discovery. Metabolites. 2012;2(2):303–36.
22. Cragg GM, Grothaus PG, Newman DJ. New horizons for old drugs and drug leads. J Nat Prod. 2014;77(3):705–23.
23. Newman DJ, Cragg GM. Natural products as sources of new drugs from 1981 to 2014. J Nat Prod. 2016;79(3):629–61.
24. Beutler JA. Natural Products as a Foundation for Drug Discovery. Curr Protoc Pharmacol. 2009;46(1):9–11.
25. Lahlou M. The success of natural products in drug discovery. Pharmacol Pharm. 2013;4(3A):17–31.
26. Shen B. A New Golden Age of Natural Products Drug Discovery. Cell. 2015;163(6):1297–300.
27. Qurratul A, Ashiq U, Jamal RA, Saleem M, Mahroof-Tahir M. Alpha-glucosidase and carbonic anhydrase inhibition studies of Pd(l)-hydrazide complexes. Arabian J Chem. 2017;10(4):488–99.
28. Molecular Probes Inc. EnzChek Ultra Amylase Assay Kit (E33651). 2006. https://www.thermofisher.com/order/catalog/product/E33651. Accessed 10 Feb 2018.
29. Goddard JP, Reymond JL. Enzyme assays for high-throughput screening. Curr Opin Biotechnol. 2004;15(4):314–22.
30. Reymond JL, Fluxa VS, Maillard N. Enzyme assays. Chem Commun. 2009;1:34–46.
31. Lankatillake C, Huynh T, Dias DA. Understanding glycaemic control and current approaches for screening antidiabetic natural products from evidence-based medicinal plants. Plant Methods. 2019;15(1):105.
32. Granados-Guzman G, Castro-Rios R, Waksman de Torres N, Salazar-Aranda R. Optimization and validation of a microscale in vitro method to assess alpha-glucosidase inhibition activity. Curr Anal Chem. 2018;14(5):458–64.
33. Beneyton T, Wijaya IP, Postros P, Najah M, Leblond P, Couvent A, et al. High-throughput screening of filamentous fungi using nanoliter-range droplet-based microfluidics. Sci Rep. 2016;6:27223.
34. Held P. Enzymatic digestion of polysaccharides. Part 1: monitoring polymer digestion and glucose production in microplates. BioTek Instruments, Inc. 2012. https://www.biotek.com/assets/tech_resources/Cellulosic_App_Note_Part_1.pdf. Accessed 15 Aug 2020.
35. Koyama K, Hirao T, Tonita A, Hayakawa K. An analytical method for measuring alpha-amylase activity in starch containing foods. Biomed Chromatogr. 2013;27(5):583–8.
36. Thermo Fisher Scientific Inc. Detecting Peptidases and Proteases—Section 10.4. 2020. https://www.thermofisher.com/au/en/home/references/molecular-probes-the-handbook/enzymessubstrates/detecting-peptidases-and-proteases.html. Accessed 16 Aug 2020.
37. Buchholz T, Melzig MF. Polyphenolic compounds as pancreatic lipase inhibitors. Planta Med. 2015;81(10):771–83.
38. Ortiz-Miranda S, Ji JR, Jurczyk A, Aryee KE, Mo S, Fletcher T, et al. A novel transgenic mouse model of lysosomal storage disorder. Am J Physiol Gastrointest Liver Physiol. 2016;311(15):G903–19.
39. Jahn M, Zerr A, Fedorowicz FM, Brigger F, Koulov A, Mahler HC. Measuring lipolytic activity to support process improvements to manage lipase-mediated polysorbate degradation. Pharm Res. 2020;37(6):118.
40. Li X, Zhong X, Wang X, Li J, Liu J, Wang K, et al. Bioassay-guided isolation of triterpenoids as α-glucosidase inhibitors from Cirsium setosum. Molecules. 2019;24(10):1844.
41. Di Sotto A, Locatelli M, Macone A, Toniolo C, Cesa S, Carradori S, et al. Hypoglycemic, antiglycation, and cytoprotective properties of a phenol-rich extract from waste peel of Punica granatum L. Var. Dente di cavallo DCZ. Molecules. 2019;24(17):3103.
42. Ostberg-Potthoff JJ, Berger K, Richling E, Winterhalter P. Activity-guided fractionation of red fruit extracts for the identification of compounds influencing glucose metabolism. Nutrients. 2019;11(6):1166.
43. Olaiakun OO, Alaba AE, Ligege K, Mikolo NM. Phytochemical content, antidiabetic, anti-inflammatory, antioxidant and cytotoxic activity of leaf extracts of Elephantorrhiza elephantina (Burch.) Skeels. S Afr J Bot. 2020;128:319–25.
44. Popović BM, Blagojević B, Ždero Pavlović R, Mičić N, Bilić S, Bogdanović B, et al. Comparison between polyphenol profile and bioactive response in blackthorn (Prunus spinosa L.) genotypes from north Serbia—from raw data to PCA analysis. Food Chem. 2020;302:125373.
45. Yilmazer-Musa M, Griffith AM, Michels AJ, Schneider E, Frei B. Grape seed and tea extracts and catechin 3-gallates are potent inhibitors of alpha-amylase and alpha-glucosidase activity. J Agric Food Chem. 2012;60(36):8924–9.
46. Auld DS, Coassin PA, Coussens NP, Hensley P, Klumpp-Thomas C, Michael S, et al. Microplate Selection and Recommended Practices in High-throughput Screening and Quantitative Biology. In: Assay Guidance Manual. Bethesda: Eli Lilly & Company and the National Center for Advancing Translational Sciences; 2020.
47. Simeonov A, Davis MI. Interference with Fluorescence and Absorbance. In: Assay Guidance Manual. Bethesda: Eli Lilly & Company and the National Center for Advancing Translational Sciences. 2018
48. Thorne N, Auld DS, Inglese J. Apparent activity in high-throughput screening: origins of compound-dependent assay interference. Curr Opin Chem Biol. 2010;14(3):315–24.
49. Gross J. Chlorophylls. In: Pigments in vegetables. Boston: Springer; 1991. p. 3–74.
50. Gross J. Carotenoids. In: Pigments in vegetables. Boston: Springer; 1991. p. 75–278.
51. Azeredo HMC. Betalains: properties, sources, applications, and stability—a review. Int J Food Sci Technol. 2009;44(12):2365–76.
52. Mlodzinska E. Survey of plant pigments: molecular and environmental determinants of plant colors. Acta Biol Crac Ser Bot. 2009;51(1):7–16.
53. Glover BJ, Martin C. Anthocyanins. Curr Biol. 2011;22(5):R147–50.
54. Moon KM, Kwon EB, Lee B, Kim CY. Recent trends in controlling the enzymatic browning of fruit and vegetable products. Molecules. 2020;25(12):2754.
55. Nicolas JJ, Richard-Forget FC, Goupy PM, Amiot MJ, Aubert SY. Enzymatic browning reactions in apple and apple products. Crit Rev Food Sci Nutr. 1994;34(2):109–57.
56. Toledo L, Aguirre C. Enzymatic browning in avocado (Persea americana) revisited: history, advances, and future perspectives. Crit Rev Food Sci Nutr. 2017;57(18):3860–72.
57. Geremias-Andrade IM, Rocheto AC, Gallo FA, Petrus RR. The shelf life of standardized sugarcane juice stored under refrigeration. Food Sci Technol. 2020;40(1):95–101.
58. Roshchina VV, Kuchini AV, Yashin VA. Application of autofluorescence for analysis of medicinal plants. Int J Spectrosc. 2017;2017:1–8.
59. Duval R, Duplais C. Fluorescent natural products as probes and tracers in biology. Nat Prod Rep. 2017;34(2):161–93.
60. Donaldson L. Autofluorescence in plants. Molecules. 2020;25(10):2393.
61. Muller SM, Gallhardt H, Schneider J, Barsics BG, Seidel T. Quantification of Forster resonance energy transfer by monitoring sensitized emission in living plant cells. Front Plant Sci. 2013;4:413.
62. Muller MG, Georgakoudi I, Zhang Q, Wu J, Feld MS. Intrinsic fluorescence spectroscopy in turbid media: disentangling effects of scattering and absorption. Appl Opt. 2001;40(25):4633–46.
63. Hach. What is the difference between a reagent blank and a sample blank? 2019. https://support.hach.com/app/answers/answer_view/a_id/100699/loc/en_US/__highlight__. Accessed 28 Apr 2020.
64. Földesi B. Guide to enzyme unit definitions and assay design. 2019. https://www.biomed.com/resources/biomol-blog/guide-to-enzyme-unit-definitions-and-assay-design/. Accessed 16 Aug 2020.
65. Yang X, Kong F. Evaluation of the in vitro α-glucosidase inhibitory activity of green tea polyphenols and different tea types. J Sci Food Agric. 2016;96(3):777–82.
66. Ozek G, Ozbek MU, Yur S, Goger F, Arslan M, Ozek T. Assessment of endemic Cota fulvida (Asteraceae) for phytochemical composition and inhibitory activities against oxidation, α-amylase, lipoxygenase, xanthine oxidase and tyrosinase enzymes. Rec Nat Prod. 2019;13(4):333–45.
67. Tan Y, Chang SKC, Zhang Y. Comparison of α-amylase, α-glucosidase and lipase inhibitory activity of the phenolic substances in two black legumes of different genera. Food Chem. 2017;24:1259–68.
68. Tunsarangkarn T, Rungsiyothin A, Ruangrungsi N. α-Glucosidase inhibitory activity of Thai mimosaceous plant extracts. J Health Res. 2008;22(1):29–33.
69. Cardullo N, Muccilli V, Pulvirenti L, Cornu A, Pouysegul L, Deffieux D, et al. C-glycosidic ellagitannins and galloylated glucoses as potential functional food ingredients with anti-diabetic properties: a study of alpha-glucosidase and alpha-amylase inhibition. Food Chem. 2020;313:126099.
SmoothMoves: Smooth Pursuits Head Movements for Augmented Reality
Augusto Esteves¹, David Verweij¹,², Liza Suraiya³, Rasel Islam³, Youryang Lee³, Ian Oakley³
¹Centre for Interaction Design, Edinburgh Napier University, Edinburgh, United Kingdom
²Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
³Human and Systems Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
ABSTRACT
SmoothMoves is an interaction technique for augmented reality (AR) based on smooth pursuits head movements. It works by computing correlations between the movements of on-screen targets and the user’s head while tracking those targets. The paper presents three studies. The first suggests that head based input can act as an easier and more affordable surrogate for eye-based input in many smooth pursuits interface designs. A follow-up study grounds the technique in the domain of augmented reality, and captures the error rates and acquisition times on different types of AR devices: head-mounted (2.6%, 1965ms) and hand-held (4.9%, 2089ms). Finally, the paper presents an interactive lighting system prototype that demonstrates the benefits of using smooth pursuits head movements in interaction with AR interfaces. A final qualitative study reports on positive feedback regarding the technique’s suitability for this scenario. Together, these results show SmoothMoves is viable, efficient and immediately available for a wide range of wearable devices that feature embedded motion sensing.
Author Keywords
Wearable computing; eye tracking; augmented reality; AR; input technique; smooth pursuits; motion matching; HMD.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
INTRODUCTION
Augmented Reality (AR) glasses are a rapidly maturing technology. The latest products, such as Microsoft HoloLens [22], include powerful computers, high resolution displays and sophisticated tracking. While these technical achievements are impressive, there is less clarity about the best ways for users to interact with AR contents and interfaces. There is an active community exploring viable modalities for head-mounted displays (HMDs) including on-headset touch [36], mid-air hand input [23] and the use of dedicated wearable peripherals such as gloves [12] or belts [8]. Within this space, we argue that input from movements of the eyes [35] and head [3] are particularly practical and appealing: in such scenarios, hands remain free and all sensing can be integrated into the headset.
Traditional approaches to head-based input focus on pointing, either by tracking gaze location [31] or via ray-casting techniques that infer an object of interest from the orientation of the head [25]. While the simplicity of these approaches is laudable, problems remain. Although they readily enable a user to hover over a specific icon or region, both require a discrete, explicit confirmation mechanism to trigger a selection. Common approaches such as dwell add a fixed time cost and decrease accuracy [13]. Alternatives such as hand gestures (as in the Microsoft HoloLens) require additional sensing equipment. Furthermore, while gaze tracking solutions exist for mobile settings, well-reported challenges in accurate tracking and calibration in real-world scenarios [10] make gaze-based target selection techniques practically infeasible.
To mitigate these problems, authors have proposed gaze input systems based on smooth pursuits [1,11,18,32] – distinctive, continuous, low-latency adjustments to gaze that are naturally produced when (and only when) visually tracking a moving object. Smooth pursuits systems operate by showing a user a set of moving targets whilst tracking gaze. Statistical matching between the gaze and target
trajectories is used to infer which target a user is attending to. The technique has been shown to be useful in tasks as diverse as calibrating eye tracking systems [26] and creating novel gaze input techniques for devices large (e.g. public displays [35]) and small (e.g. smart watches [9]).
While current accounts of smooth pursuits input show its potential, we argue that key aspects of the behavior remain unstudied. In particular, we note that fundamental literature on visual tracking indicates that it involves a synergistic combination of head and eye movement [18]. Accordingly, we argue that it may be possible to reliably perform explicit smooth pursuits style tracking movements with the head instead of the eye - this extends Dhuliawala et al.’s [7] recent proposal that explores complementary movements of the head and eye. Using head motions confers considerable practical benefits, primarily that the Inertial Measurement Units (IMUs) needed to accurately track head movements are small, cheap, low power and already integrated into the majority of AR glasses and other wearables.
In order to explore the potential of this idea, this paper contributes SmoothMoves, an input technique that relies on data from a head-mounted IMU to enable users to select moving targets by continuously matching the target position with the orientation of their head. To explore the viability and value of this idea, we also contribute three studies. First, a fundamental study (using a PC monitor) compares performance with IMU based head tracking against the more established baseline of gaze tracking in situations where only a single target is shown. We report strong similarities across a range of target movement conditions. Second, we compare the performance of SmoothMoves in both handheld and HMD based AR systems in situations where multiple targets are presented. Building on these results, the final sections of this paper apply SmoothMoves input to an HMD used in a smart home scenario and report on results from a qualitative user study. Together this work represents a comprehensive exploration of the potential, feasibility, reliability and experience of head motion based smooth pursuits as an input modality for augmented reality.
**RELATED WORK**
Gaze is the inseparable product of head movements plus eye movements. The relationship between these activities is sophisticated. At the most fundamental level, the Vestibulo-Ocular Reflex (VOR) [19] continuously stabilizes gaze by adjusting (basically inverting) eye position in response to changes in head position sensed by the vestibular system. It is key to providing a stable visual experience of objects. In contrast, during smooth pursuits tracking of rapidly moving objects [29], the head and eye move together [18] to keep an object optimally in view. Smooth pursuits movements also involve two distinct stages. Initially, the eyes and head are accelerated to align with the moving stimuli, an open-loop process that can take up to 300-500ms [27]. Subsequent closed-loop tracking closely matches the target, particularly in situations where velocities are stable.
A number of properties make smooth pursuits movements useful as an input technique. First, they are innate. Users know how to visually track targets and can generate this kind of motion without training. Second, they are distinctive. Users are only able to generate smooth pursuits eye movements in the presence of visually moving targets. Third, they operate on movement not position. As such, they are relatively immune to changes in target size [9] and robust to tracking errors – capturing changes in gaze is much simpler than accurately determining what a user is looking at. Fourth, they are operated hands-free. And fifth, they do not require users to memorize gestures. Several systems have been recently introduced to leverage these properties. Vidal et al. [35] used smooth pursuits to enable quick, spontaneous interaction with public displays, while Lutz et al.’s [20] applied the technique to text entry on public dashboards. Cymek et al. [6] and Khamis et al. [16] explored how smooth pursuits input can create safer PIN entry systems, and Esteves et al. [9] and Kangas et al. [14] relied on the scale-independent, calibration-free nature of smooth pursuits gaze input to deliver hands-free interaction on, respectively, smart watches and glasses. Finally, Dhuliawala et al. [7] show that alternative eye gaze sensing modalities, such as EOG, also have the potential to support smooth pursuits input. This work demonstrates that the technique is sufficiently powerful and flexible to be deployed in a wide range of input scenarios.
However, these systems rely on smooth pursuits eye movements. We identify an opportunity to study the viability of using IMU-derived head movements to achieve the same objectives. This approach would convey a number of advantages. First and foremost is cost: wearable eye tracking remains expensive (computer vision: ~1500 USD [15]; EOG: ~1500 USD [37]) whereas head tracking can be achieved with an IMU costing no more than ten USD. The second is form factor: eye trackers require cameras or electrodes mounted at specific locations on the user’s face, with the former also requiring a clear line of sight to the eyes. In contrast, IMUs can be mounted anywhere on the head. Furthermore, IMUs are small and light enough (<10mm square, <1 gram) to be integrated into almost any wearable item: headphones, eyewear, jewelry, clothes and, indeed, existing smart glasses (e.g. Microsoft HoloLens). Optical systems are also susceptible to changing light conditions, such as those that occur outdoors, while IMUs are relatively unaffected by environmental factors.
These beneficial properties have not gone unremarked. Indeed, a range of techniques for input based on head movements has been proposed and studied. Ray-based pointing, in which users interact by projecting a ray from their head to intersect with a target of interest, is the most common [4] and has been integrated into current head-mounted displays, such as the Google Cardboard [30] and the Microsoft HoloLens. Other authors have proposed the use of head tracking in mobile contexts to provide gestural input in the form of head tilting [5] and nodding [24].
Furthermore, studies on smart TVs have explored the use of off-the-shelf webcams to capture head motion during smooth pursuits [3]. Finally, while rigorous studies are presently lacking, recent work has proposed achieving head-based input during pursuits tracking by monitoring VOR movements [7]. In sum, while this work highlights the appeal of head-based input, to the best of our knowledge, no prior studies have explored explicit head movements for target tracking input in AR.
**SMOOTHMOVES**
SmoothMoves is an interaction technique for selecting graphical targets in AR interfaces. The targets move in orbital trajectories and users make selections by matching these motions with movements of their head that are sensed by a worn IMU. SmoothMoves is heavily influenced by prior pursuits-based gaze interaction techniques [35], but replaces the use of eye coordinates with *yaw* and *pitch* data from the IMU. The matching process is simple: for each displayed target, Pearson’s correlations are computed for the $x_{\text{target}}$-*yaw* and $y_{\text{target}}$-*pitch* relationships. If both exceed a certain *correlation-threshold* for a given target, and no other currently displayed target attains the same result (either individually or via an average of both results), then the target is selected. Correlations are computed only after a *start-up time* has elapsed and over a rolling *window* of recent data. The start-up time is the period immediately after the appearance of a set of SmoothMoves targets when the user is engaged in the open-loop orientating behavior that marks the beginning of a smooth pursuit movement; performing target matching in this period would not be meaningful. The *window size* specifies the duration of data sampled for SmoothMoves correlations. In the eye gaze literature, longer window sizes yield fewer erroneous selections at the cost of lower comfort and longer selection times [9,35].
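The matching step can be sketched as follows. This is a minimal Python illustration under our own naming (not the authors’ implementation): it applies the dual correlation threshold and a unique-winner rule as described above.

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length series.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb) if va and vb else 0.0

def match_target(yaw, pitch, targets, threshold=0.8):
    # targets: list of (x_series, y_series) sampled over the same window.
    # A target is selected only if both of its correlations exceed the
    # threshold and no other target attains this simultaneously.
    hits = [i for i, (tx, ty) in enumerate(targets)
            if pearson(tx, yaw) > threshold and pearson(ty, pitch) > threshold]
    return hits[0] if len(hits) == 1 else None
```

With a 60Hz sensor and a 1000ms window, `yaw`, `pitch` and each target series would hold the 60 most recent samples, and the function would be re-evaluated as the window rolls forward.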
Visually, SmoothMoves closely mimics Orbits [9]. Each graphical control is comprised of a trajectory around a center point and a target (see Figure 4) that continuously traverses this trajectory. Each control can be used for either discrete input, where target acquisitions result in issuing a command, or continuous control by monitoring the time a target is tracked for. Target disambiguation is achieved in two ways. First, targets move in different *phases*. For example, with four targets, they would be spaced at 90° intervals. Second, targets can move in different *directions*: clockwise and counterclockwise.
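For illustration, positions on such an orbit can be generated from a phase offset and a direction flag. This is a hypothetical sketch (not the Orbits implementation), with our own parameter names:

```python
import math

def target_position(t, cx, cy, radius, speed_deg, phase_deg=0.0, clockwise=False):
    # Angular position after t seconds at speed_deg degrees/second,
    # starting at phase_deg; clockwise targets sweep in the opposite sense.
    sign = -1.0 if clockwise else 1.0
    ang = math.radians(phase_deg + sign * speed_deg * t)
    return cx + radius * math.cos(ang), cy + radius * math.sin(ang)

# Four targets spaced at 90-degree phase intervals, as in the example above.
phases = [i * 90.0 for i in range(4)]
```

Because disambiguation relies on phase and direction rather than position, two targets on the same circle remain distinguishable to the correlation matcher as long as their phase offsets differ.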
**STUDY 1: EYE AND HEAD-TRACKING**
To explore the viability of SmoothMoves, we first conducted a lab study. It had three goals. First, to validate the idea that users can acquire targets using smooth pursuits head motions. To do so, we simultaneously captured eye- and head-tracking data from participants following a series of single moving targets under different instructions: to perform the tracking naturally; to track only with the eyes; and to track only with the head. This supports contrasting head and eye motion performance. Second, to explore performance variations in eye and head tracking with a variety of moving stimuli. The goal was to enable us to make recommendations about optimal stimuli to display. Finally, the third goal was to define optimal values for the key parameters of correlation threshold, start-up time and window size, to enable construction of a working system.
**Participants**
18 participants were recruited (12F), aged between 20 and 26 years ($M = 24$, $SD = 1.85$). All participants were undergraduate or graduate students at a local institution, and except for one, had minimal experience with eye-tracking. All had normal or corrected to normal vision. Nine participants wore contact lenses, one wore glasses, and the remaining eight did not require any visual aids.
**Experimental Setup and Design**
The experiment was conducted in a quiet and private laboratory space, with participants sitting 60cm away from a 27” display (1920×1080 resolution). Eye data was recorded using a Pupil Pro [15] wearable eye-tracker equipped with a single camera tracking the right eye (reported mean gaze estimation accuracy of 0.6° of visual angle). The tracker was adjusted for focus and to ensure a clear field of view of the eye and a close match between the horizontal and vertical axes of the eye and the camera. No further calibration was performed; only normalized pupil locations were recorded. A GY-86 nine-axis IMU was attached to the front camera mount of the Pupil using a 3D-printed fixture and wired to an Arduino. A complementary filter (Mahony *et al.* [21]) tracked head orientation and provided yaw and pitch data. The display and both sensors were all connected to the same computer. The display update and IMU data logging rates were 60Hz. Difficulties in capturing a reliably timed data stream from the eye tracker resulted in recording eye packets at a target rate of 90Hz, and an actual rate of between 75Hz and 90Hz.
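The blending a complementary filter performs can be illustrated with a much simpler, single-axis sketch than the Mahony filter used here. Everything below is a hypothetical illustration: it fuses the integrated gyroscope rate (accurate short-term, drifts long-term) with an accelerometer-derived angle (noisy but drift-free).

```python
def complementary_step(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    # High-pass the integrated gyro estimate and low-pass the
    # accelerometer angle; alpha sets the crossover between the two.
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Run once per sample (dt ≈ 1/60 s at the logging rate above); when the gyro reports no rotation, the estimate decays toward the accelerometer reading, correcting drift.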
All participants completed the same set of trials in three different input conditions: *natural*, *eyes*, and *head*. In all conditions a single moving target was displayed for four seconds and trials were presented in a random order. In the natural condition, participants were simply asked to follow the target. In the eyes condition, participants were asked to follow the target with their eyes. Similarly, in the head condition, participants were asked to follow the target with their head. All participants completed the natural condition first, to ensure there was no instructional bias in the way they opted to follow the moving target. The eyes and head conditions were counter-balanced to reduce possible fatigue and practice effects. The set of moving targets used in the study was selected to replicate previous studies of smooth pursuits eye movements [9,14]. Variations included:
• **Trajectory size**: there were three on-screen sizes: 4cm (~3.50° visual angle), 13cm (~11.75°) and 22cm (~20°).
• **Target speed**: targets moved in one of three angular velocities: 60°/sec, 120°/sec, or 180°/sec.
Additional novel variations were included in the study, so as to expand the design knowledge about interfaces based on smooth pursuits. These included:
• **Trajectory shape**: targets moved in either circular or rhomboidal trajectories (see Figure 4).
• **Trajectory visibility**: target trajectories were either invisible, where only the target was displayed, or visible, where the target’s movement path was also shown.
• **Speed type**: targets could move with constant speeds, or increase their speed midway through the trial. Speed adjustments always involved an increase by 60°/sec.
• **Direction type**: as with speed type, targets could either move in a fixed orbital direction, or invert this halfway through the trial.
Each possible trial combination occurred once in each condition. Consequently, data from a total of 7776 trials (18 participants x 3 conditions x 3 sizes x 3 speeds x 2 trajectories x 2 visibilities x 2 speed types x 2 direction types) was recorded.
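The full factorial design above can be checked by enumeration. This sketch uses illustrative level labels of our own choosing:

```python
import itertools

participants = range(18)
conditions = ["natural", "eyes", "head"]
sizes_cm = [4, 13, 22]
speeds_deg_s = [60, 120, 180]
shapes = ["circular", "rhomboidal"]
visibilities = ["invisible", "visible"]
speed_types = ["constant", "increasing"]
direction_types = ["fixed", "inverting"]

# One trial per combination: 18 x 3 x 3 x 3 x 2 x 2 x 2 x 2 = 7776.
trials = list(itertools.product(participants, conditions, sizes_cm, speeds_deg_s,
                                shapes, visibilities, speed_types, direction_types))
```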
**Data Pre-Processing**
Prior to analysis, the separate data streams of eye, head and visual target movements were pre-processed. First, the eye-data was down-sampled to 60Hz and the three data streams were matched using timestamps. Second, eye data trials were removed in situations where there were breaks in the data of greater than 300ms, a threshold derived from typical blink durations. The goal was to include trials involving natural behavior such as blinks but exclude those trials where eye tracking was lost or degraded (as judged by the confidence statistic reported by the tracker) for reasons such as a prolonged closure of the eye, a glance away from the screen or a failure of the tracking algorithms. We opted for removing these trials as long lapses in the data would disrupt the planned rolling window correlation analysis. In total, we excluded 93 trials (1.2%). Of these, 71% were in the head condition, likely a consequence of the larger movements disrupting eye tracking. Furthermore, they were biased by participant (33% from one subject) due to variations in the robustness of the eye tracker fit/calibration. They were evenly distributed over all other variables and are not sufficient in number, or skewed enough in distribution, to invalidate our analysis. The final stage of pre-processing involved running a rolling average filter over eye, head and target data streams (ignoring gaps in the eye data) with a window size of 64ms, or 4 samples. This smoothed out inevitable fluctuations in sampling times associated with data capture from three separate sources.
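The gap-exclusion and smoothing rules might be sketched as follows. This is a hypothetical illustration, not the authors’ pipeline; timestamps are assumed to be in seconds.

```python
def has_long_gap(timestamps_s, max_gap_s=0.3):
    # Exclude a trial if any break between consecutive eye samples
    # exceeds 300 ms (longer than a typical blink).
    return any(b - a > max_gap_s for a, b in zip(timestamps_s, timestamps_s[1:]))

def rolling_mean(series, n=4):
    # 4-sample (~64 ms at 60 Hz) moving average to smooth sampling jitter.
    return [sum(series[i:i + n]) / n for i in range(len(series) - n + 1)]
```

Trials failing `has_long_gap` would be dropped, and `rolling_mean` would be applied to the eye, head and target streams before the correlation analysis.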
**Results and Analysis**
Initial analysis of the results focused on determining an appropriate configuration of SmoothMoves parameters. We adopted a 500ms startup-time, based on fundamental literature [27] indicating that initial motions in a tracking movement involve orientating actions that differ from later tracking motions. Using this figure, we ran correlations between all eye and head data in the three experimental conditions using window-sizes of 500ms, 1000ms, 1500ms and 2000ms – see Figure 2. Prior work has identified 1000ms as sufficient to achieve correlation results of 0.8 with gaze and suggested this is a viable correlation threshold for input [9]. With these baseline parameters, results from the natural condition show slightly diminished performance: a median of 0.75. We attribute this to the large range of stimulus display parameters used in the study and discussed in the next paragraph. Performance in the eyes condition matches the 0.8 recorded in prior work. In both these conditions, we note that correlations against the head data are low (0.46-0.5) and insufficient to support recognition via the algorithmic matching process proposed in this paper. It also indicates that participants more naturally followed targets with their eyes than their head, an effect which may be partly due to participants being aware of the eye-tracking equipment during the study setup. Data from the head condition, however, strongly shows that head based tracking can be readily achieved by participants; head correlation coefficients were higher than those reported for the eyes in any of the experimental conditions. Specifically, with the 1000ms window-size, participants achieved a median correlation between head and target movements of 0.88. This provides a firm basic validation of the SmoothMoves concept. Reflecting these results, we used a 1000ms window-size and 0.8 correlation threshold for all further analysis and activities in this paper.
A key goal of this paper is to characterize the performance of eye and head tracking movements with different trajectory designs. Rather than a high-dimensionality ANOVA, we opted to do this by analyzing each trajectory variable/modality pair individually with a low alpha threshold for significance. Specifically, we examined correlations from eye and head movements in, respectively, the eye and head conditions using six separate two-way repeated measures ANOVAs (either 3x2 or 2x2). For variables with three levels, the ANOVAs incorporated Greenhouse-Geisser corrections when Mauchly’s test showed sphericity violations and were followed by Bonferroni-corrected *post-hoc* t-tests. In total, we ran six separate main tests using an alpha threshold of p<0.05/6, or p<0.008. Effect sizes are given as partial eta squared ($\eta_p^2$). In the interests of brevity, we report only significant results.
The raw data for each variable in the eye and head conditions are shown in Table 1. The head data (from the head condition) led to significantly higher correlation values than the eye data (from the eye condition) in all tests (F (1, 17) = 15.7, p <0.001, $\eta_p^2 = 0.481$). This supports the idea that the head condition led to improved tracking accuracy compared to the eye condition. Beyond this, as the raw figures show, the results were relatively uniform. Results varied in terms of *direction type* (F (1, 17) = 35.747, p <0.001, $\eta_p^2 = 0.678$). This suggests that changes in target direction disrupted participants’ ability to track accurately. Similarly, the data differed significantly with *trajectory shape* (F (1, 17) = 20.321, p <0.001, $\eta_p^2 = 0.544$), indicating that participants tracked targets moving in rhomboidal trajectories more accurately. Finally, significant differences emerged with variations in trajectory size (F (1.241, 21.091) = 14.259, p <0.001, $\eta_p^2 = 0.456$). *Post-hoc* t-tests indicated tracking the smallest targets was more challenging than tracking those in the medium (p=0.002) or large (p=0.004) conditions. Interactions were also observed for *trajectory visibility* (F (1, 17) = 15.052, p =0.001, $\eta_p^2 = 0.47$) and *speed type* (F (1, 17) = 11.476, p =0.004, $\eta_p^2 = 0.403$). These results suggest that tracking with the eyes modestly improves when targets move more unpredictably, an effect that is not present with head movements. This is possibly due to the eyes’ faster response time.
**Discussion**
The study strongly confirms the idea that head motions can accurately track moving targets. In the head condition, the fidelity of the behavior, as expressed by the median correlation coefficients, exceeded that of the eyes in both the natural and eyes conditions of the current study as well as that reported in prior work [9]. This suggests that head-based input can act as a surrogate for eye-based input in many smooth pursuits input scenarios; it may even be preferred in terms of performance. However, data from the natural condition also clearly indicates that participants’ predilection was to track with the eyes; only when specifically instructed did they use clear, accurate and distinctive head movements.
A second goal of the study was to expand knowledge about what stimulus parameters are effective in tracking based input systems. Although a number of significant differences emerged, serving to isolate more and less effective designs, the primary message from this data is one of the robustness of the technique to variations in target movements. This is a positive outcome as it suggests that both eye and head versions of the technique can be deployed with targets moving in a broad range of patterns and thus support a large variety of graphical forms and interface designs. Specific recommendations from the study are to avoid direction changes and small target trajectories. Rhomboidal trajectories may provide some benefits. While these recommendations are sensible, we note the small absolute differences and moderate effect sizes – they may ultimately have limited impact on performance.
Beyond these analyses and recommendations, it is also worth describing the movements captured in the study. For this, we focus on data in the head condition, as this involves explicit bodily motion and represents the core idea proposed in this paper. The scale of these movements will impact a range of factors such as the obtrusiveness [39], social acceptability [28] and, possibly, long term comfort of the technique. While a full exploration of these issues goes
beyond the scope of this article, we can present and interpret basic data. The small (3.5°), medium (11.75°) and large (20°) target trajectories led to mean head rotations of 9.19° (SD 6.18), 14.85° (SD 9.35) and 17.65° (SD 9.69) and showed minimal variation (<1 degree) between yaw and pitch. This indicates participants exaggerated head movements for small targets and modestly reduced them for larger targets (see Figure 3 for examples). The movements could also be relatively subtle – for the smallest targets, median head rotations were just 6.7°. We believe these movements are sufficiently small to ensure the technique is discreet and not unduly fatiguing. Further studies will need to examine these claims empirically and formally establish how fatiguing SmoothMoves interaction is. We also note that stimuli in the current study were very simple and future work should investigate more complex situations where, for example, users would need to engage in a visual search for targets prior to performing selection.
**SMOOTHMOVES VALIDATION STUDY**
We opted to build on these results by validating SmoothMoves input for AR in a follow-up study deploying optimal cues in a more realistic AR setup.
**Participants**
A total of 16 participants completed the study (9F), aged between 21 and 26 ($M = 22.19$, $SD = 1.84$). All participants were students at a local institution and were compensated approximately ten USD for their time. In general, they rated their experience with smartphones as very high ($M = 5/5$) but their experience of wearables such as smart watches (1.8/5) and smart glasses (1.2/5) as low. Three participants were smart watch owners, resulting in the modestly higher rating for these devices.
**Experimental Setup and Design**
The study involved two device conditions, intended to simulate different AR viewing scenarios. These were *glasses* and *phone*. The glasses condition used the Epson Moverio AR glasses [38], which feature a pair of semi-transparent displays with a 23° field of view. In the phone condition, targets were displayed on a mobile phone (a Huawei Nexus 6P with a 5.7” display) held comfortably in participants’ hands. This simulates a common current AR experience in which standard handheld devices are used as the main display device in a video see-through paradigm [33]. In both cases, participants wore the same head-mounted IMU used in the first study.
The study also explored two further conditions: *trajectory-size* and *target-cardinality*, or the number of simultaneously presented targets. We re-examined the former variable as it was shown to impact performance in the first study. Furthermore, perceptual trajectory sizes in the two display devices differ substantially from each other and from those used in the first study. This reflects a more realistic deployment of SmoothMoves targets in which it is not possible to fully standardize trajectory sizes across different devices and platforms. We again selected three trajectory sizes but did so based on the available screen size of the devices (rather than visual angle). The sizes were selected so the rhomboidal target paths occupied approximately 18%, 54% and 90% of the smaller screen dimension. In the large condition this left sufficient space to display the moving target, while in the small conditions overlap of the targets remained minimal. We also examined cardinality as this is an essential practical issue for any target selection system. We displayed targets in equidistantly spaced groups of two, four, six and eight (see Figure 4) in order to determine the impact this exerts on performance.
The study was arranged so that phone and glasses were repeated and balanced: all participants completed both conditions, half in each possible order. Within each device condition, participants completed three blocks of trials. Each block contained four repetitions of every combination of target-cardinality and target-size. Trials in each block were presented in random order, and the first block was treated as practice and discarded. As such, we retained data from 3072 trials (16 participants x 2 devices x 2 blocks x 4 target-cardinalities x 3 target-sizes x 4 repetitions). For each trial, we logged error count and successful target selection time. Errors occurred if no target selection took place within 10 seconds (a timeout) or a wrong target was selected. In these cases, trials were re-entered into the pool of remaining trials. In this way, all participants correctly completed their allotted set of trials.
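As a concrete illustration, the fully crossed block structure and the error re-entry policy described above can be sketched as follows (illustrative Python; the function names are ours, not taken from the study software):

```python
import itertools
import random

def build_trial_pool(cardinalities=(2, 4, 6, 8),
                     sizes=("small", "medium", "large"),
                     repetitions=4):
    """One block: every cardinality x size combination, four times (48 trials)."""
    pool = [dict(cardinality=c, size=s, rep=r)
            for c, s, r in itertools.product(cardinalities, sizes,
                                             range(repetitions))]
    random.shuffle(pool)  # trials within a block were randomly presented
    return pool

def run_block(pool, attempt_trial):
    """Run trials until the pool is empty; failed trials (timeout or wrong
    target) are re-entered into the pool of remaining trials."""
    log = []
    while pool:
        trial = pool.pop(0)
        success, time_ms = attempt_trial(trial)
        log.append((trial, success, time_ms))
        if not success:
            pool.insert(random.randrange(len(pool) + 1), trial)
    return log
```

With 16 participants, 2 devices, 2 retained blocks per device and 48 trials per block, this yields the 3072 retained trials reported above.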
Beyond these variables, the stimuli used parameters from the first study. Targets moved at 120°/sec; their trajectories were continuously presented; there were no speed or direction changes. Three other display variables were equally distributed among each set of four cardinality/size trials. These were target direction (clockwise/anticlockwise), trajectory shape (circle/rhombus) and target starting angle (four cardinal directions). These variations were not treated as experimental variables; rather, they increased the realism of the study – trajectories in real systems will likely vary in path (or appearance), and the study examined performance in this relatively unpredictable situation.
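The stimulus parameters above fully determine a target's position over time. A minimal sketch (illustrative Python; the function name and the unit-radius path are our assumptions):

```python
import math

def target_position(t, speed_deg_s=120.0, shape="circle",
                    clockwise=True, start_angle_deg=0.0, radius=1.0):
    """Target position after t seconds: constant 120 deg/sec angular speed,
    circular or rhomboidal path, either direction, four possible start angles."""
    sign = -1.0 if clockwise else 1.0
    angle = math.radians(start_angle_deg + sign * speed_deg_s * t)
    c, s = math.cos(angle), math.sin(angle)
    if shape == "circle":
        return radius * c, radius * s
    # Rhombus: L1-normalise the circular point onto a diamond of the same size.
    norm = abs(c) + abs(s)
    return radius * c / norm, radius * s / norm
```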
**Procedure**
This study implemented SmoothMoves using parameters from the first study. Participants sat at a desk holding the phone in their right hands, or wearing the AR glasses. They started each trial by tapping a key on a PC keyboard on the desk. A set of targets was then displayed, but no data was collected for 500ms of start-up time. Correlations were analyzed with a window-size of 1000ms and a selection triggered when participants reached a correlation-threshold of 0.8. In cases where the standard deviation of head movements in either axis was less than 2°, no correlations were calculated. This threshold was substantially under the mean standard deviation observed in head condition trials in the first study (6°-9° over the size conditions) and served to reduce false positives by capturing only intentional movements. Finally, if multiple targets led to correlations above the target threshold, no selection was returned.
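The selection logic just described can be condensed into a short sketch (illustrative Python with NumPy; implementation details beyond the stated parameters are our assumptions): head yaw/pitch over the last 1000ms window is correlated against each target's on-screen x/y trajectory, gated by the 2° movement threshold, and a selection is returned only when exactly one target exceeds the 0.8 correlation threshold.

```python
import numpy as np

def select_target(head_yaw, head_pitch, targets_xy,
                  r_threshold=0.8, min_std_deg=2.0):
    """Return the index of the selected target, or None.

    head_yaw, head_pitch: head angles (degrees) over the current 1000ms window.
    targets_xy: one (x_samples, y_samples) pair per displayed target.
    """
    # Gate: skip windows without intentional movement (std < 2 deg on an axis).
    if np.std(head_yaw) < min_std_deg or np.std(head_pitch) < min_std_deg:
        return None
    matches = []
    for i, (tx, ty) in enumerate(targets_xy):
        # Correlate each head axis with the corresponding target axis.
        r_x = np.corrcoef(head_yaw, tx)[0, 1]
        r_y = np.corrcoef(head_pitch, ty)[0, 1]
        if min(r_x, r_y) > r_threshold:
            matches.append(i)
    # If multiple targets exceed the threshold, no selection is returned.
    return matches[0] if len(matches) == 1 else None
```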
**Results and Analysis**
Time and error data from the study were analyzed with a pair of three-way repeated measures ANOVAs on device, trajectory size and target cardinality. In cases where sphericity was violated, we report Greenhouse-Geisser-corrected degrees of freedom. Post-hoc pairwise comparisons include Bonferroni CI adjustments. For brevity, only results significant at p<0.05 are reported. We use partial eta squared ($\eta_p^2$) to express effect size.
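The Bonferroni-adjusted post-hoc comparisons can be sketched with SciPy's paired t-test (illustrative; the analysis software actually used is not named here):

```python
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(data_by_condition):
    """Paired t-tests between every pair of conditions; p-values are
    multiplied by the number of comparisons (Bonferroni) and capped at 1."""
    pairs = list(combinations(sorted(data_by_condition), 2))
    corrected = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(data_by_condition[a], data_by_condition[b])
        corrected[(a, b)] = min(p * len(pairs), 1.0)
    return corrected
```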
The time data are charted in Figure 5. Only the main effect of trajectory size attained significance (F(1.07, 16.056) = 10.901, p=0.004, $\eta_p^2$=0.421), so no interactions are included in the chart. The effect size is moderate and borne out by *post-hoc* t-tests showing the smallest trajectories led to slower selections than the medium (p=0.009) and large (p=0.016) trajectories. This indicates that participants took longer to select targets moving around the smallest paths.
The error data were more varied. Two-way interactions are plotted in Figure 6 and main effects in Figure 7. The three-way interaction reached significance (F(2.162, 6.692) = 4.423, p = 0.018, $\eta_p^2$=0.228), but we opt to interpret the data in terms of the more comprehensible significant two-way interactions and main effects, as these all exhibit larger effect sizes. Specifically, the significant two-way interactions were between trajectory size and target cardinality (F(1.974, 29.607) = 6.488, p<0.05, $\eta_p^2$=0.302) and between trajectory size and device (F(1.164, 17.456) = 5.082, p = 0.033, $\eta_p^2$=0.253). Looking at the charts, the first interaction suggests that while errors increase with more targets, they do so more steeply with small trajectory sizes. The second interaction indicates that performance with the glasses was superior to the phone with six or fewer targets, but that this relationship was inverted with eight targets.
The significant main effects were trajectory size (F(1.408, 21.113) = 12.272, p = 0.001, $\eta_p^2$=0.45) and target cardinality (F(1.204, 18.062) = 40.167, p<0.001, $\eta_p^2$=0.728). These are the largest effect sizes in the study, and they relate to simple outcomes. Specifically, *post-hoc* t-tests showed that small trajectories led to more errors than medium (p=0.025) and large (p=0.001) trajectories, and that all differences in cardinality were significant (at p<0.01 or less) except for a non-significant comparison between four and six targets. Unsurprisingly, this indicates that target selection became more error-prone when more targets were displayed. These results also confirm that participants find targets moving on small trajectories more difficult to track.
**DISCUSSION**
The goal of this study was to explore the performance of SmoothMoves head tracking in an AR scenario in order to contrast performance with related techniques and make recommendations on how best to deploy it. The time data are straightforward. Mean task selection time was approximately two seconds, and the only significant variation was an increase when the smallest trajectories were used: small on-display target paths should be avoided. This figure includes the 500ms start-up time and the 1000ms window-size, making it approximately half a second greater than the minimum time the study design supported. We argue this is fast enough to make the technique compelling in hands-free AR scenarios: recent studies of hand- and head-mediated ray-based selection report task times of between 2.25 and 3.5 seconds for making a selection from a set of 16 targets [25], and techniques based on smooth pursuit eye movements report task times ranging from 4.3 to 4.6 seconds [34]. Performance with more traditional, albeit 3D, direct selection techniques based on moving the hand to a target location within arm’s reach shows broadly similar results: Özacar *et al.* [25] examine this modality with three types of selection trigger (a physical button, dwell, and a hand gesture) and report task times of three to four seconds.
The error rate data paint a more complex picture. Error rates vary considerably, and the most extreme conditions studied are sufficiently challenging to render them inappropriate for use in a real system. If small trajectories are avoided, we argue the data support the display of up to six targets simultaneously: this led to a mean error rate of 2.6% with the glasses and 4.9% with the phone. The difference between these devices is possibly due to the larger perceptual sizes of the trajectories shown on the HMD, suggesting the technique is better suited to large field-of-view glasses-based AR than to the perceptually smaller displays of handhelds. It is also worth noting that the experience of interacting with SmoothMoves differs substantially between the two types of device. With the HMD, the screen moves with the head; with the phone, it is likely static. The study results indicate that the technique is robust to this difference. It is also worth contrasting the error rates for our recommended SmoothMoves configuration with comparable selection techniques. In terms of ray pointing, Özacar *et al.* [25] report error rates of 6%-10%, while Esteves *et al.* [9] report errors in an optimally configured gaze-based pursuits input system averaging 19% for eight targets displayed simultaneously. Özacar *et al.*’s [25] error data from direct selection tasks range from 4% to ~8%. The error rates from this study suggest SmoothMoves performs well enough to act as a viable companion or alternative to these approaches.
**Figure 8.** An AR interface built with SmoothMoves for an interactive lighting system. The moving controls are displayed in proximity to the light bulb they control, and users interact with these by tracking their movement with their heads.
In summary, the results of this study confirm that SmoothMoves targeting works well in two different AR scenarios and, in fact, may be particularly suitable for HMDs. This is useful as such systems already incorporate the required sensors to support the technique. On HMDs, and with target sets of between two and six in size, users can reliably (error rate of 2.6%) make selections in under two seconds, a level of performance that we believe is sufficient to support a rich range of possible interactions. The next section of this paper showcases these possibilities.
**INTERACTIVE LIGHTS USING AR AND SMOOTHMOVES**
This paper concludes with the design and evaluation of a prototype interactive lighting system that uses augmented reality for displaying moving controls, and SmoothMoves for input (see Figure 1). The system was implemented using Philips Hue smart lights [39], which were wirelessly controlled by a video see-through AR application that runs on an unmodified Microsoft HoloLens. This is a head-mounted device that combines multiple optical sensors to both sense where users are looking and map their physical surroundings. The prototype was developed using the Unity game engine [40] and the Vuforia AR platform [41]. Input was captured using the HoloLens’ standard API.
The idea of the prototype is simple. 2D moving controls are displayed in space, in proximity to the lights they control. These positions are set once, using pre-defined images or real-world objects. The controls enable the user to turn the lights on or off (Figure 8, top); to control the lights’ intensity (Figure 8, top-right); and to access two menus. The first is the *themes menu*, which features two pre-set light schemes: work (cool blue) and relaxing (warm yellow) (Figure 8, bottom-left). The second is the *color menu*, which
enables the user to scroll through different hue colors in the HSV/HSB model using continuous head movements, and to also adjust the color’s saturation (Figure 8, bottom-right). Brightness and saturation controls have two targets moving in opposite directions. Following the clockwise target increases the value of the control (e.g., makes it brighter), while following the counterclockwise target decreases it. All selections are confirmed through audio output (a click).
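The continuous adjustment scheme (tracking the clockwise target increases a parameter, the counterclockwise target decreases it) can be sketched as follows (illustrative Python; the class name and the 0.1/sec rate are our assumptions, not the prototype's values):

```python
import colorsys

class HueScroller:
    """Continuous hue control driven by which moving target is being tracked."""

    def __init__(self, hue=0.0, rate_per_s=0.1):
        self.hue = hue          # HSV hue in [0, 1)
        self.rate = rate_per_s  # hue change per second of tracking

    def update(self, tracked, dt):
        """tracked is 'cw', 'ccw', or None (no target currently matched)."""
        if tracked == "cw":
            self.hue = (self.hue + self.rate * dt) % 1.0   # clockwise: increase
        elif tracked == "ccw":
            self.hue = (self.hue - self.rate * dt) % 1.0   # counterclockwise: decrease
        return self.hue

    def rgb(self, saturation=1.0, value=1.0):
        """Current color as an RGB triple, e.g. to send to a smart bulb."""
        return colorsys.hsv_to_rgb(self.hue, saturation, value)
```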
The motivations underlying the prototype are threefold. First, to support immediate control of smart environments with minimal action – a requirement highlighted by Koskela et al. [17] in their research on smart homes. Second, to provide uniform and hands-free control over different smart devices. And third, to support direct input in physical spaces: users simply look at the system they want to control in order to start interacting.
**Evaluation**
We evaluated the interactive lights prototype using 10 participants (4F), aged between 21 and 47 ($M = 34.3$, $SD = 8.88$). All participants were staff or students at a local institution. Based on a 7-point scale (low to high), participants rated their experience with AR at 2.5 ($SD = 1.51$); with HMDs at 2.8 ($SD = 1.55$); with smart lights at 2 ($SD = 1.70$); and with smart rooms at 1.8 ($SD = 0.79$). Participants interacted with the prototype in a spacious and quiet environment, where they were free to move around. Each experiment took on average 30 minutes, and was based on a participatory design technique to elicit in-depth user feedback [2]. This technique includes a *sensitization* and *elaboration* phase. In the former, participants were asked about relevant past experiences; in the latter, participants commented on the demo prototype. Each experiment started with an explanation of the prototype’s functionality and a small trial where participants were asked to turn the lights on and off until they felt comfortable with the SmoothMoves input technique. We recorded and transcribed audio of all sessions and performed a lightweight clustering of comments, reported below.
**Overall Opinions:** In general participants responded positively to the technique, describing it as a “clever” (P7), “useful” (P4), “comfortable” (P2), and a great idea overall (P1, P5, P6, P7, P10). Participants also described the interface movement as “interesting” (P1, P5, P6), “fun” (P1, P6, P7, P10), and “minimalist” (P9); and did not consider it to be invasive (P9), or much of a distraction (P1, P4, P5). P2 described the movement as “futuristic” – a way to “attract people’s attention” and “impress (house) guests”. P4 appreciated the technique’s ability to display “different options (so) close to each other”. Finally, P6 described the experience as “quite magical” – “it is almost like you are doing it psychically”. This sentiment was shared by P9: “I almost feel like it is my mind; it feels that subtle, that you (...) just will it to happen.”.
**Target Selection with the HoloLens:** Despite these positives, there were concerns about how long it took to select a target (P2), that it initially required some concentration (P6, P10) and that it was an unusual way to interact (P8). Five participants reported unintentional selection of a target at some point during the session (P3, P6, P7, P9, P10). One explanation for this is the HoloLens’ limited field-of-view. This issue is exacerbated as participants move their heads to acquire different targets – especially if the headset is not properly adjusted. P6 and P10 reported constraining their head movements because the targets “tend to appear and disappear”; and P7 did the same because the HoloLens kept “slipping down”. P10 also described the HoloLens as quite “heavy”. To minimize field-of-view issues, participants started the interaction at roughly two meters from the targets. This caused several participants to report the target trajectories as quite small (P3, P6, P10), and “sensitive” (P10) to input. Towards the end of the (short) session, these concerns began to lessen. P10 stated that “the more I did the easier it was”; and P9 ultimately “started to find [the movement] quite calming”.
**Use Scenarios:** In response to a question on practical uses of the technique, participants P1 and P7 described how SmoothMoves would be useful for the “quick things”: “I do not want to think, as you need in a smart phone application (…) I just want a button that turns on something, and then I can go back to work” (P1). P4 stated that “it would definitely be useful” during hands-busy activities in the home such as cleaning. Other participants saw value in terms of accessibility (P3, P5, P7, P8), or for professionals working with both hands, such as surgeons or bakers (P3). Finally, several participants envisioned using the technique when the hardware improves: when it is lighter (P1); when the field-of-view improves (P2); or when the device has the form factor of a normal pair of glasses (P3, P7, P9).
**Gaze = Eyes + Head:** Participants frequently commented on the naturalness and unobtrusiveness of the head movements and their tight coupling to gaze. P9 said it simply: “I do not feel I am moving my head”. Similarly, P1 observed “I do not have to [mimics a very explicit head motion], I just have to look”; and P4 “notice[d] now that while I am just trying to do it with my eyes, my head unconsciously moves in the way [of the targets]”. These quotes strongly reinforce the fundamental idea that gaze is a combination of eye and head motion – for several participants, even with instructions to move their heads, these modalities were hard to separate and distinguish.
**Multi-modal input:** Participants felt the technique could easily be integrated with other input modalities. Recognizing the potential problem of inadvertent activation, P3 and P6 proposed coarse mid-air gestures to trigger SmoothMoves controls. Other participants suggested integrating the technique with voice to specify more precise, important or detailed instructions (P6, P8, P10). Combining and comparing SmoothMoves with other input techniques is a compelling direction for future work.
**Stimulus Parameters:** Many participants were concerned about the size of both targets (P2, P3, P5, P6, P10), and trajectories (P3, P6, P8) and the speed at which targets moved (P8). Other participants were positive, feeling that small trajectories would require only small head movements (P5). These concerns were largely alleviated when participants moved closer to the light and targets. Suggestions for dealing with this issue included various techniques for scaling targets and trajectories based on the distance to a user. Designing and refining such techniques is clearly a next step for this work.
**Continuous Input Designs:** Six participants specifically appreciated the flexibility of being able to set precise colors using the continuous color adjustment menus, but there were numerous reports that the implementation was confusing. For hue, a core problem was a lack of feedback as to how this parameter would vary over time (P5, P6, P8) – one suggested solution was to control better-understood qualities such as separate RGB channels (P7). Other users reported uncertainty about whether they were maintaining a selection during hue adjustment (P1, P7, P8, P10), likely due to the gradual rate of change in this parameter. Situations in which two controls moved in opposite directions around the same trajectory also led to trouble for P3: “it looks like they are bouncing off each other”. In general, while participants also appreciated the audio feedback accompanying continuous parameter adjustment (P3, P7, P9), they also wanted more information in the form of visual or haptic (P8) cues.
**Command Input Designs:** Participants, in general, preferred the command input over the continuous input. Customizing lighting via choosing preset themes was reported to be more useful than continuous parameter adjustment (P1, P2, P3, P4, P5, P6), reflecting the general idea that SmoothMoves is more suited to quick and direct interaction (P1, P4, P7). Nesting menus was also viewed as appropriate as it avoided presenting too many simultaneous targets (P1, P4, P5, P6, P7, P8, P9) while still affording access to the most common commands quickly and easily (P1, P4, P5, P6, P8). The approach also kept things neat and tidy (P4, P5, P7, P8) and was reported to be consistent with traditional desktop computer interfaces (P7). Despite the proximity of the targets to the physical light, participants also explicitly suggested that feedback on selection be incorporated into the graphical interface (P1, P7, P8, P10).
In summary, SmoothMoves was well received by participants. Although there were some reports and worries regarding false activations, gripes about the headset and concerns about some of the specific control designs, the technique was viewed as convenient, relaxing, well suited to quick interactions in hands free situations and unobtrusive. This data provides evidence supporting the viability of the technique for real world input and points at key directions for improvement. Topics for future work include exploring integration with alternative input modalities (e.g. voice, ray pointing) and creating graphical feedback to better support different selection and activation mechanisms, such as continuous parameter adjustment.
**CONCLUSION**
This paper introduced SmoothMoves, the first technique that supports smooth pursuits input using head movements. The paper described a pair of lab studies. The initial study generated three contributions. First, by looking at novel movement behaviors it expanded the design knowledge of smooth pursuits input systems. Second, it demonstrated that smooth pursuits input can be easily (and affordably) supported by head-tracking. And third, it generated ideal algorithm parameters for the SmoothMoves technique. The follow-up study grounded the technique in the domain of augmented reality, capturing the error rates and acquisition times on different types of AR device (head-mounted and hand-held). Finally, a prototype system was developed to demonstrate the benefits of using smooth pursuits head movements for interaction with AR applications in the context of an interactive lighting system. A final qualitative study led to positive reports of the system’s suitability for this scenario. In contrast to smooth pursuits input systems based on eye-tracking, the SmoothMoves approach proposed in this paper can be immediately implemented on a wide range of devices that feature embedded motion sensing, such as AR headsets. The contributions of the paper, in terms of implementation, data and designs, represent concrete steps towards achieving this goal.
**ACKNOWLEDGEMENTS**
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2017R1D1A1B03031364) and the ICT R&D program of MSIP/IITP. [R0190-15-2054, Development of Personal Identification Technology based on Biomedical Signals to Avoid Identity Theft].
**REFERENCES**
1. Emilio Bizzi. 2011. Eye-Head Coordination. In Comprehensive Physiology, Ronald Terjung (ed.). John Wiley & Sons, Inc., Hoboken, NJ, USA. Retrieved April 5, 2016 from http://doi.wiley.com/10.1002/cphy.cp010229
2. Derya Ozcelik Buskermolen and Jacques Terken. 2012. Co-constructing Stories: A Participatory Design Technique to Elicit In-depth User Feedback and Suggestions About Design Concepts. In Proceedings of the 12th Participatory Design Conference: Exploratory Papers, Workshop Descriptions, Industry Cases - Volume 2 (PDC ’12), 33–36. https://doi.org/10.1145/2348144.2348156
3. Christopher Clarke, Alessio Bellino, Augusto Esteves, Eduardo Velloso, and Hans Gellersen. 2016. TraceMatch: a computer vision technique for user input by tracing of animated controls. In UbiComp ’16: Proceedings of the 2016 ACM International Joint...
4. Nicholas Cooper, Aaron Keatley, Maria Dahlquist, Simon Mann, Hannah Slay, Joanne Zucco, Ross Smith, and Bruce H. Thomas. 2004. Augmented Reality Chinese Checkers. In *Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology* (ACE ’04), 117–126. https://doi.org/10.1145/1067343.1067357
5. Andrew Crossan, Mark McGill, Stephen Brewster, and Roderick Murray-Smith. 2009. Head Tilting for Interaction in Mobile Contexts. In *Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services* (MobileHCI ’09), 6:1–6:10. https://doi.org/10.1145/1613858.1613866
6. Dietlind Helene Cymek, Antje Christine Venjakob, Stefan Ruff, Otto Hans-Martin Lutz, Simon Hofmann, and Matthias Roetting. Entering PIN codes by smooth pursuit eye movements. *Journal of Eye Movement Research* 7(4), 1: 1–11.
7. Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, and Woontack Woo. 2016. Smooth Eye Movement Interaction Using EOG Glasses. In *Proceedings of the 18th ACM International Conference on Multimodal Interaction* (ICMI 2016), 307–311. https://doi.org/10.1145/2993148.2993181
8. David Dobbelstein, Philipp Hock, and Enrico Rukzio. 2015. Belt: An Unobtrusive Touch Input Device for Head-worn Displays. In *Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems* (CHI ’15), 2135–2138. https://doi.org/10.1145/2702123.2702450
9. Augusto Esteves, Eduardo Velloso, Andreas Bulling, and Hans Gellersen. 2015. Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements. In *Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology*, 457–466. https://doi.org/10.1145/2807442.2807499
10. Karen M. Evans, Robert A. Jacobs, John A. Tarduno, and Jeff B. Pelz. 2012. Collecting and Analyzing Eye-Tracking Data in Outdoor Environments. *Journal of Eye Movement Research* 5, 2. https://doi.org/10.16910/jemr.5.2.6
11. David G. Fleming, Gehard W. Vossius, George Bowman, and Ensign L. Johnson. 1969. Adaptive Properties Of The Eye-tracking System As Revealed By Moving-head And Open-loop Studies. *Annals of the New York Academy of Sciences* 156, 2: 825–850. https://doi.org/10.1111/j.1749-6632.1969.tb14017.x
12. Yi-Ta Hsieh, Antti Jylhä, Valeria Orso, Luciano Gamberini, and Giulio Jacucci. 2016. Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses. In *Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems* (CHI ’16), 4203–4215. https://doi.org/10.1145/2858036.2858436
13. Aulikki Hyrskykari, Howell Istance, and Stephen Vickers. 2012. Gaze Gestures or Dwell-based Interaction? In *Proceedings of the Symposium on Eye Tracking Research and Applications* (ETRA ’12), 229–232. https://doi.org/10.1145/2168556.2168602
14. Jari Kangas, Oleg Špakov, Poika Isokoski, Deepak Akkil, Jussi Rantala, and Roope Raisamo. 2016. Feedback for Smooth Pursuit Gaze Tracking Based Control. In *Proceedings of the 7th Augmented Human International Conference 2016* (AH ’16), 6:1–6:8. https://doi.org/10.1145/2875194.2875209
15. Moritz Kassner, William Patera, and Andreas Bulling. 2014. Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction. In *Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication* (UbiComp ’14 Adjunct), 1151–1160. https://doi.org/10.1145/2638728.2641695
16. Mohamed Khamis, Ozan Saltuk, Alina Hang, Katharina Stolz, Andreas Bulling, and Florian Alt. 2016. TextPursuits: Using Text for Pursuits-based Interaction and Calibration on Public Displays. In *Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing* (UbiComp ’16), 274–285. https://doi.org/10.1145/2971648.2971679
17. Tiitu Koskela and Kaisa Väänänen-Vainio-Mattila. 2004. Evolution towards smart home environments: empirical evaluation of three user interfaces. *Personal and Ubiquitous Computing* 8, 3–4: 234–240. https://doi.org/10.1007/s00779-004-0283-x
18. Jeremy Lanman, Emilio Bizzi, and John Allum. 1978. The coordination of eye and head movement during smooth pursuit. *Brain Research* 153, 1: 39–53. https://doi.org/10.1016/0006-8993(78)91127-7
19. R. John Leigh and David S. Zee M.D. 2015. *The Neurology of Eye Movements*. Oxford University Press.
20. Otto Hans-Martin Lutz, Antje Christine Venjakob, and Stefan Ruff. 2015. SMOOVS: Towards calibration-free text entry by gaze using smooth pursuit movements. *Journal of Eye Movement Research* 8(1), 2: 1–11.
21. Robert Mahony, Tarek Hamel, and Jean-Michel Pflimlin. 2008. Nonlinear Complementary Filters on the Special Orthogonal Group. *IEEE Transactions on Automatic Control* 53, 5: 1203–1218. https://doi.org/10.1109/TAC.2008.923738
22. Microsoft. Microsoft HoloLens. *Microsoft HoloLens*. Retrieved September 20, 2016 from https://www.microsoft.com/microsoft-hololens/en-us
23. Pranav Mistry and Pattie Maes. 2009. SixthSense: A Wearable Gestural Interface. In *ACM SIGGRAPH ASIA 2009 Sketches (SIGGRAPH ASIA '09)*, 11:1–11:1. https://doi.org/10.1145/1667146.1667160
24. Louis-Philippe Morency and Trevor Darrell. 2006. Head Gesture Recognition in Intelligent Interfaces: The Role of Context in Improving Recognition. In *Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI '06)*, 32–38. https://doi.org/10.1145/1111449.1111464
25. Kasim Özacar, Juan David Hincapié-Ramos, Kazuki Takashima, and Yoshifumi Kitamura. 2017. 3D Selection Techniques for Mobile Augmented Reality Head-Mounted Displays. *Interacting with Computers* 29, 4: 579–591. https://doi.org/10.1093/iwc/iww035
26. Ken Pfeuffer, Melodie Vidal, Jayson Turner, Andreas Bulling, and Hans Gellersen. 2013. Pursuit Calibration: Making Gaze Calibration Less Tedious and More Flexible. In *Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST '13)*, 261–270. https://doi.org/10.1145/2501988.2501998
27. Christoph Rasche and Karl R. Gegenfurtner. 2009. Precision of speed discrimination and smooth pursuit eye movements. *Vision Research* 49, 5: 514–523. https://doi.org/10.1016/j.visres.2008.12.003
28. Julie Rico and Stephen Brewster. 2010. Usable Gestures for Mobile Interfaces: Evaluating Social Acceptability. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10)*, 887–896. https://doi.org/10.1145/1753326.1753458
29. D. A. Robinson. 1965. The mechanics of human smooth pursuit eye movement. *The Journal of Physiology* 180, 3: 569–591.
30. Boris Smus and Christopher Riederer. 2015. Magnetic Input for Mobile Virtual Reality. In *Proceedings of the 2015 ACM International Symposium on Wearable Computers (ISWC '15)*, 43–44. https://doi.org/10.1145/2802083.2808395
31. Sophie Stellmach and Raimund Dachselt. 2012. Look & Touch: Gaze-supported Target Acquisition. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12)*, 2981–2990. https://doi.org/10.1145/2207676.2208709
32. Noboru Sugie and Makoto Wakakuwa. 1970. Visual Target Tracking with Active Head Rotation. *IEEE Transactions on Systems Science and Cybernetics* 6, 2: 103–109. https://doi.org/10.1109/TSSC.1970.300283
33. Gabriel Takacs, Vijay Chandrasekhar, Natasha Gelfand, Yingen Xiong, Wei-Chao Chen, Thanos Bismigpiannis, Radek Grzeszczuk, Kari Pulli, and Bernd Girod. 2008. Outdoors Augmented Reality on Mobile Phone Using Loxel-based Visual Feature Organization. In *Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval (MIR '08)*, 427–434. https://doi.org/10.1145/1460096.1460165
34. Eduardo Velloso, Markus Wirth, Christian Weichel, Augusto Esteves, and Hans Gellersen. 2016. AmbiGaze: Direct Control of Ambient Devices by Gaze. In *Proceedings of the 2016 ACM Conference on Designing Interactive Systems*, 812–817. https://doi.org/10.1145/2901790.2901867
35. Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: Spontaneous Interaction with Displays Based on Smooth Pursuit Eye Movement and Moving Targets. In *Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '13)*, 439–448. https://doi.org/10.1145/2493432.2493477
36. Chun Yu, Ke Sun, Mingyuan Zhong, Xincheng Li, Peijun Zhao, and Yuanchun Shi. 2016. One-Dimensional Handwriting: Inputting Letters and Words on Smart Glasses. In *Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16)*, 71–82. https://doi.org/10.1145/2858036.2858542
37. ES | PRODUCTS. *JINS MEME*. Retrieved July 5, 2017 from https://jins-meme.com/en/products/es/
38. Epson Moverio Next Generation Smart Eyewear - Epson America, Inc. Retrieved September 20, 2016 from http://www.epson.com/cgi-bin/Store/jsp/Landing/moverio-augmented-reality-smart-eyewear-technology.do
39. Philips Hue. Retrieved September 21, 2016 from http://www2.meethue.com/en-gb/
40. Unity - Game Engine. *Unity*. Retrieved April 4, 2017 from https://unity3d.com
41. Vuforia | Augmented Reality. Retrieved April 4, 2017 from https://www.vuforia.com/ |
Camping 5*
ERLEBNIS-SKI-RESORT
HERBST | WINTER 2020.21
AUTUMN | WINTER 2020.21
INDIVIDUELLER Campingurlaub
mit allen Annehmlichkeiten eines Superior Resorts
Individual camping holiday with all the amenities of a superior resort
WIR FEIERN
1 JAHR NEUES RESORT.
WE ARE CELEBRATING ONE YEAR OF THE NEW RESORT
Wir feiern ein Jahr voller Abenteuer, Spaß und Erlebnisse – wir feiern ein Jahr neues Familien- und Erlebnisresort. Das letzte Jahr war für uns ein besonderes, denn unser neues Resort hat seine Pforten geöffnet und wir sind sehr stolz darauf, dass es Ihnen allen so gut gefällt wie uns. Wir freuen uns, Sie bald bei uns begrüßen zu dürfen.
Philipp Brückner, Johannes Ramstöck und Ihr Team vom Zugspitz Resort
We are celebrating one year of fun, adventure and experiences – we are celebrating a year as a new family and adventure resort. Last year was particularly special for us because our new resort opened its doors and we are very proud that you all like it just as much as we do! We are looking forward to welcoming you soon!
Philipp Brückner, Johannes Ramstöck and your team at the Zugspitz Resort
**INKLUSIVLEISTUNGEN**
**INCLUDED SERVICES**
### Ankommen
- Kostenloses WLAN
- Ladestation für Elektrofahrzeuge vorhanden (kostenpflichtig)
- Der Stellplatz ist am Anreisetag ab 12:00 Uhr für Sie bereit und steht Ihnen am Abreisetag bis 12:00 Uhr zur Verfügung.
- 35 Baderäume mit Dusche / WC und Bad / WC (1 x) sowie Doppelwaschbecken, große Spiegel und Föhne
- Toilettenanlagen mit Wickelkissen
- Bestens ausgestattete Küche mit Geschirrspülern, Kühlschränken, gemütlichem Esstisch
- Wasch- und Trockenraum mit Waschmaschinen (Münzautomat)
### Your arrival
- Free WLAN
- Charging station for electric vehicles (for a fee)
- The pitch is available from 12:00 on your arrival date and should be vacated by 12:00 on the departure date
- 35 bathrooms with shower / WC and bathroom / WC (1 x), double washbasin, large mirror and hairdryer
- Toilet facilities with changing mats
- Ideally equipped kitchen with dishwashers, fridges, cosy dining table
- Washing and drying room with washing machines (coin operated)
### Für Kids und Teens
Mit dem Hygiene-Plus-Programm sorgen wir für Ihr Rundum-Wohlbefinden. Wir halten uns an die jeweils aktuellen behördlichen Vorschriften. Mehr dazu finden Sie auf www.zugspitz-resort.at
Our Hygiene Plus programme ensures all-round well-being. We focus on the latest regulations from the authorities. For more information see www.zugspitz-resort.at
- Kinderbetreuung ab 3 Jahren in DIDIs Kinderclub mit Spielhaus und Kleinkinderbereich
- 700 m² Fun & Action in den Indoor-Erlebniswelten mit Softplayanlage über 2 Etagen, Boulderwand, Indoor Kart Bahn
- Jugendraum mit Kickertisch, Airhockey und Billard
- Kino und Theater
- DIDIs Wasserwelten: Panorama-Erlebnis-Hallenbad, neue Kinder-Badewelt mit Piratenschiff, 5 Rutschen und Kleinkinderpool
- Abenteuerspielplatz mit Tretkart-Bahn und Trampolin
- Abwechslungsreiches In- und Outdoor Wochenprogramm
### For kids and teens
Childcare from 3 years old at DIDIs Kids’ Club with play house and infant area
- 700 m² fun & action in the indoor adventure worlds with soft play area over 2 floors, bouldering wall, indoor electro car track
- Youth room with table football, air hockey and billiards
- Cinema and theatre
- DIDIs water worlds: panoramic indoor pool, new kids water world with pirate ship, 5 slides, toddler pool
- Varied indoor and outdoor weekly programme
- Adventure playground with pedal car track and trampoline
### Bewegen und entspannen
- Panorama-Erlebnis-Hallenbad
- Beheiztes Freibad mit Verbindung zum Hallenbad
- DIDIs Wasserwelt für Kinder mit Rutschenlandschaft und Abenteuerpool
- Saunalandschaft: Saunahütte 90°, Bio-Sauna 60°, finnische Sauna, Sanarium, römisches Dampfbad, Solegrotte, Infrarotkabine, Whirlpool, Kneippbecken, Frischluftraum und Ruheraum
### Exercise and relax
- Familiensauna und -dampfbad
- Moderner Fitness- und Gymnastikraum
- Yogaraum
- wechselndes Bewegungs- und Entspannungsprogramm
- Indoor panorama experience pool
- Heated outdoor pool connected to indoor pool
- DIDIs Water World for kids with slide area and adventure pool
- Sauna area: sauna cabin 90°, bio-sauna 60°, Finnish sauna, sanarium, Roman steam bath, brine grotto, infra-red cabin, whirlpool, Kneipp pool, fresh-air room and relaxation room
- Family sauna and steam bath
- Modern fitness and aerobics room
- Yoga room
- Varied programme of exercises and relaxation
### Winterspaß abseits der Pisten
Für alle, die auch einmal abseits der Pisten etwas unternehmen wollen, gibt es in der Tiroler Zugspitz Arena genügend Möglichkeiten. Langlaufen, Rodeln, Schneeschuh- oder Winterwanderungen sind eine willkommene Abwechslung zu den Pistenschwüngen.
Winter fun away from the pistes
For those who wish to explore life away from the pistes too, there are plenty of opportunities on offer in Tyrol’s Zugspitz Arena. Cross-country skiing, sledging and snow shoe and winter walks provide a welcome change from skiing down the pistes.
Grenzübergreifender Pistenspaß mit der TOP SNOW CARD.
Cross-border fun on the pistes with the TOP SNOW CARD.
Das Top-Familien-Skigebiet: Ehrwalder Alm
The top family ski area: Ehrwalder Alm
Pistenbully
Abwechslung pur mit der neuen Holz-Pistenraupe - und das direkt an der Piste.
Snow groomer
Pure variety with the new wooden snow groomer right on the ski piste.
Schneeburg
Von der Schneeballwand über die Schneehöhle bis zu den Rutschen ist dieser Abenteuerspielplatz ein Highlight für Groß und Klein.
Snow castle
This adventure playground is a highlight for all ages, from the snowball wall and snow cave to the slides.
Familypark
Lust auf noch mehr Action? Auf in den Familypark für die ersten Freestyle-Versuche.
Familypark
Fancy more action? Head off for the family park for those first attempts at freestyle skiing!
Funslope
Wellen, Fun-Elemente und Steilkurven sorgen auf der „Spaßpiste“ für pures Skivergnügen.
Fun slope
Waves, fun elements and steep curves provide fun on the piste for pure skiing enjoyment.
Snowpark
Hier geht es für Anfänger sowie Profis über Sprünge und Hindernisse und anschließend wieder bequem mit dem Lift zum Anfang des Parks.
Snow park
Beginners and professionals alike can tackle jumps and obstacles and then take the lift back to the start of the park in comfort.
Tirolerhaus
Direkt an der Bergstation der Ehrwalder Almbahn genießen Sie hervorragende Kulinarik in den gemütlichen Stuben oder auf der Sonnenterrasse.
Tirolerhaus
Situated at the mountain station of the Ehrwalder Almbahn, you can enjoy exceptional cuisine in the cosy lounges or on the sun terrace.
**VIELFALT IM RESORT**
**VARIETY AT THE RESORT**
**Zubuchbare Leistungen** | Bookable services
### Spa
- Massagen
- Body and Soul
- und vieles mehr
### Genusswelten
- À-la-carte Restaurant Zirbenstube
- Bar, Loungebereiche & Zigarrenlounge
- Sonnenterrasse
### Frühstück
- Genießen Sie die Annehmlichkeiten des Resorts
- Vorherige Reservierung notwendig
### Leisure worlds
- À-la-carte Restaurant Zirbenstube
- Bar, lounge areas & cigar lounge
- Sun terrace
### Breakfast
- Enjoy the amenities of the resort
- Reservation in advance required
---
### Aktiv Standard
Genießen Sie 7 Nächte am Standardstellplatz (70 - 80 m²).
**Inklusive:**
- Stellplatz- & Personengebühr
- Müllentsorgung
- Nutzung unserer Aktiv- und Vitalwelt
- DIDI's Wasser- und Erlebniswelten
- Kinderbetreuung ab 3 Jahren
- Abwechslungsreiches Familien-Aktiv-Programm
- Kostenloses WLAN auf dem Campingplatz
### Active Standard
Enjoy 7 nights at the standard camping pitch (70 - 80 m²).
**Inclusive**
- Campsite & camper fees
- Waste disposal
- Including use of our Active & Vitality World
- Including DIDI's Waterworld & DIDI's Adventure World
- Free childcare for children aged from 3 years
- Varied family activity programme
- Free WLAN
| | 08.11.20 – 18.12.20 | 10.01.21 – 07.02.21 | 21.02.21 – 28.02.21 | 28.02.21 – 28.03.21 |
|---|---------------------|---------------------|---------------------|---------------------|
| 2 Erwachsene / 2 adults | Ab/From 301,00 € | Ab/From 350,00 € | Ab/From 420,00 € | Ab/From 315,00 € |
| 2 Erwachsene und 2 Kinder / 2 adults and 2 kids | Ab/From 427,00 € | Ab/From 560,00 € | Ab/From 672,00 € | Ab/From 483,00 € |
Die Preise verstehen sich als Ab-Preise und exklusive Ortstaxe sowie Strom.
---
### Aktiv Comfort
Genießen Sie 7 Nächte am Comfort Stellplatz (ca. 100 m²).
**Inklusive:**
- Stellplatz- & Personengebühr
- Wasser- und Abwasseranschluss
- SAT- sowie Stromanschluss
- Müllentsorgung
- Nutzung unserer Aktiv- und Vitalwelt
- DIDI's Wasser- und Erlebniswelten
- Kinderbetreuung ab 3 Jahren
- Abwechslungsreiches Familien-Aktiv-Programm
- Kostenloses WLAN auf dem Campingplatz
### Active Comfort
Enjoy 7 nights at the comfort camping pitch (approx. 100 m²).
**Inclusive**
- Campsite & camper fees
- Water and waste water connection
- Satellite connection, power connection
- Waste disposal
- Including use of our Active & Vitality World
- Including DIDI's Waterworld & DIDI's Adventure World
- Free childcare for children aged from 3 years
- Varied family activity programme
- Free WLAN
| | 08.11.20 – 18.12.20 | 10.01.21 – 07.02.21 | 21.02.21 – 28.02.21 | 28.02.21 – 28.03.21 |
|---|---------------------|---------------------|---------------------|---------------------|
| 2 Erwachsene / 2 adults | Ab/From 343,00 € | Ab/From 406,00 € | Ab/From 476,00 € | Ab/From 350,00 € |
| 2 Erwachsene und 2 Kinder / 2 adults and 2 kids | Ab/From 469,00 € | Ab/From 616,00 € | Ab/From 728,00 € | Ab/From 518,00 € |
Die Preise verstehen sich als Ab-Preise und exklusive Ortstaxe sowie Strom.
---
The prices are starting prices excl. local tax and electricity.
**DER NEUE ZUGSPITZ-SHOP**
**THE NEW ZUGSPITZ SHOP**
**JEDEN TAG GENUSS AUF'S NEUE.**
**ENJOYMENT ANEW, EVERY DAY.**
Alles, was das Herz begehrt
Unser Shop ist täglich von 07:00 bis 18:00 Uhr geöffnet. Der Eingang befindet sich direkt gegenüber der Rezeption. Dort erhalten Sie eine kleine Auswahl an Lebensmitteln und Getränken, Zeitschriften, Hygieneartikel, Tabakwaren sowie eine Coffee-to-Go-Station, Jausen-Pakete zum Mitnehmen, Souvenirs und diverses Camping-Equipment.
Everything your heart desires
Our shop is open every day from 07.00 am to 06.00 pm. The entrance is directly opposite reception. The shop offers a small range of food and drink, magazines, toiletries, tobacco, and a coffee to go station, snack boxes to take away, souvenirs, and various camping equipment.
Crispy bread rolls with a cup of fresh coffee or sweet pastry with delicious jam – giving you energy to start your day.
**CAMPING 5***
**CAMPING 5***
**Unsere Stellplätze | Our camping pitches**
**Standard:**
- 70 - 80 m²
- Stromanschluss und kostenloses WLAN
**Comfort:**
- ca. 100 m²
- Wasser- und Abwasseranschluss, SAT- und Stromanschluss, kostenloses WLAN
**Standard:**
- 70 - 80 m²
- Electricity connection and free WLAN
**Comfort:**
- approx. 100 m²
- Water and waste water connection, SAT and electricity connection, free WLAN
**WINTER 2020/21**
| | 08.11. – 18.12.20 | 18.12.20 – 10.01.21 | 10.01. – 07.02.21 | 28.02. – 11.04.21 |
|----------------|-------------------|---------------------|-------------------|-------------------|
| Standardstellplatz/Tag | 31,00 € | 15,00 € | 13,00 € | 11,00 € |
| Comfort Stellplatz/Tag | 17,00 € | 25,00 € | 21,00 € | 17,00 € |
| Erwachsener/Person/Tag | 20,00 € | 30,00 € | 25,00 € | 22,00 € |
| Kind (3-14 Jahre)/Person/Tag | 12,00 € | 22,00 € | 18,00 € | 15,00 € |
| Ortstaxe/Person/Tag (ab 15 Jahren) | 3,00 € | | | |
| Umweltbeitrag/Person/Tag | 0,80 € | | | |
| Strom kWh | 0,90 € | | | |
**Zubuchbare Leistungen – Preise gültig in allen Saisonen**
Bookable services - prices valid in all seasons
| Service | Price |
|----------------------------------------------|---------|
| Miet-Duschkabine/Tag | 12,00 € |
| Hund/Tag | 7,00 € |
| * Verpflegung - Verwöhn-HP | 64,00 € |
| Erwachsener (ab 12 Jahren) | 21,00 € |
| Kind (3-6 Jahre) | 11,00 € |
| Kind (7-11 Jahre) | 16,00 € |
| * Verpflegung - Frühstück | 43,00 € |
| Erwachsener (ab 12 Jahren) | 19,00 € |
| Kind (3-6 Jahre) | 19,00 € |
| Kind (7-11 Jahre) | 19,00 € |
* pro Person/Tag | per person/day
1.500 m²
XXXL SPIELE- & WASSERWELTEN
In DIDIs Wasser- und Erlebniswelten gibt es unzählige Möglichkeiten, um Kinderaugen zum Strahlen zu bringen. Indoor Elektro-Kartbahn, Kino, 120 m lange Wettkampfrutsche und, und, und… – Sehen Sie selbst!
At DIDIs water and adventure world, there are endless ways to fill kids with enthusiasm. An indoor electric karting track, cinema, 120 m competition slide and much more – take a look for yourself!
Buchungsinformationen
An- und Abreise
Anreisetag: ab 12:00 Uhr
Abreisetag: bis 12:00 Uhr
Sollte sich die Anreise wesentlich verzögern, ersuchen wir um Ihre Information.
Reservierung und Storno
Die Reservierung ist erst mit schriftlicher Rückbestätigung und Erhalt Ihrer Anzahlung verbindlich.
Gerne informieren wir Sie in unserer Korrespondenz.
Es gelten die Bestimmungen des Österreichischen Hotelverbandes www.hotelverband.at, siehe auch www.zugspitz-resort.at. Wir empfehlen den Abschluss einer Reiseversicherung.
Zuschläge
Ortstaxe (ab 15 Jahren): 3,00 € pro Person/Tag
Zubuchbare Leistungen:
• Verwöhn-Halbpension im Hotelrestaurant
• Leih-Gasflaschenservice
• Massagen, Solarium, Vitalanwendungen
• Vespa- und E-Mountainbike-Verleih im Hotel
Datenschutz
Sollten Sie in Zukunft keine weiteren Mailings von uns wünschen, bitten wir Sie, uns zu kontaktieren: firstname.lastname@example.org oder +43 5673 2309. Sie werden dann aus unserem Mailingverzeichnis entfernt. Es gelten die Bestimmungen des Österreichischen Hotelverbandes www.hotelverband.at. Mit dieser Preisliste verlieren alle vorhergehenden Preislisten ihre Gültigkeit. Druckfehler und Irrtümer trotz sorgfältiger Durchsicht vorbehalten.
Booking information
Arrival and departure
Arrival day: from 12:00
Departure day: to 12:00
Please let us know if you will arrive much later than this.
Reservations and cancellations
Reservations are only classed as binding following written confirmation and receipt of your deposit payment. We are happy to provide further information in our correspondence. The terms and conditions of the Austrian hotel industry www.hotelverband.at apply, see also www.zugspitz-resort.at. We recommend taking out travel insurance.
Supplements
Resort tax (from 15 years old): 3.00 € per person/day
Extra bookable services:
• Spoil yourself Half-board package at the hotel restaurant
• Gas bottle hire service
• Massages, solarium, vitality treatments
• Vespa and e-mountain bike hire at the hotel
Data protection
If you no longer wish to receive any further emails from us in future, please contact: email@example.com or +43 5673 2309. You will then be removed from our mailing list. The terms and conditions of the Austrian hotel industry www.hotelverband.at apply. This price list renders all previous price lists invalid. Printing and typesetting errors excepted despite careful checking.
Impressum | Imprint
Herausgeber und für den Inhalt verantwortlich: Publisher and responsible for the content:
Zugspitz Resort, ein Betrieb der Zillertaler Gletscherbahn GmbH & Co KG, GF Franz Dengg, Obermoos 1, 6632 Ehrwald, Österreich.
Konzept & Gestaltung: Oberhauser Consulting GmbH. Bildrechte: Günter Standl Fotografie, Eva trifft, Michael Huber,
Fotoatelier - Marcel Hagen, Christoph Jorda, Talia Kusztal, Albin Niederstrasser, Hari Pulko, Uli Wiesmeier,
Almholz, Tiroler Zugspitzbahn
Continuity of lead-silver production in the area of Cartagena-La Unión (Spain) after the Phoenician trade crisis of the 6th century BC
Céline Tomczyk, Christophe Petit, María Berná, Laurent Costa, Jessica Legendre, Jesús Moratalla, Sidonie Révillon, Pierre Rouillard
To cite this version:
Céline Tomczyk, Christophe Petit, María Berná, Laurent Costa, Jessica Legendre, et al. Continuity of lead-silver production in the area of Cartagena-La Unión (Spain) after the Phoenician trade crisis of the 6th century BC. Journal of Archaeological Science: Reports, 2024, 59, pp.104742. 10.1016/j.jasrep.2024.104742. halshs-04718691
HAL Id: halshs-04718691
https://shs.hal.science/halshs-04718691v1
Submitted on 3 Oct 2024
Distributed under a Creative Commons Attribution 4.0 International License
Continuity of lead-silver production in the area of Cartagena-La Unión (Spain) after the Phoenician trade crisis of the 6th century BC
Céline Tomczyk\textsuperscript{a,}\textsuperscript{*}, Christophe Petit\textsuperscript{a}, María Berná\textsuperscript{b}, Laurent Costa\textsuperscript{c}, Jessica Legendre\textsuperscript{d}, Jesús Moratalla\textsuperscript{e}, Sidonie Révillon\textsuperscript{f}, Pierre Rouillard\textsuperscript{c}
\textsuperscript{a} Université Paris 1 Panthéon-Sorbonne, UMR 7041 ArScAn, équipe archéologies environnementales, France
\textsuperscript{b} Museo Histórico de Aspe, Spain
\textsuperscript{c} CNRS, UMR 7041 ArScAn, équipe mondes grecs archaïques et classiques, France
\textsuperscript{d} Ministère de la Communication et de la Culture, Centre de recherche et de restauration des musées de France (C2RMF), France
\textsuperscript{e} Universidad de Alicante, Spain
\textsuperscript{f} Laboratoire SEDISOR, UMR 6538 Géosciences Océan, France
\textbf{Article info}
\textbf{Keywords:}
Litharge
Metallurgy
Lead isotopes
Commercial crisis
Phoenician
\textbf{Abstract}
The 6th century BCE is marked by major changes in the Mediterranean trade routes. These changes had a significant impact on the production of silver-lead in the Iberian Peninsula, which was previously thought to have come to an abrupt end.
However, the study of litharge from the early 5th century BCE to the first half of the 3rd century BCE, from three sites in the Alicante region, demonstrates that the types, textures and compositions of litharge remain unchanged after the crisis in Phoenician trade. Thus, although no production workshops have been found in the Cartagena mining district, it is possible to affirm that the cupellation processes used at the beginning of the first millennium BCE continued until the 3rd century BCE.
Lead isotopic analysis of the litharge and two lead artefacts indicate that they come from ores from the very rich mines of Cartagena-La Unión, which were extensively exploited between the 8th and 6th centuries BCE.
Despite a major decline in mining and metallurgical production and considerable changes in the networks of exchange in the Mediterranean, the same production chain persisted from lead extraction to the type of metallurgy practised. The economic crisis therefore did not lead to a cessation of production, although the quantity of lead (and silver) produced was probably significantly lower.
\section*{1. Introduction}
Throughout the first millennium BCE, the Iberian Peninsula was the site of rich and complex interactions between indigenous, Phoenician, Greek, Carthaginian and later Roman populations (see in particular Abad Casal, 1992; Momrak, 2005; Díaz, 2015; Albeida, 2018; Rouillard, 2023).
The coastal regions of the eastern Iberian Peninsula witnessed extensive agricultural production, as evidenced by significant deforestation as early as the 9th century BCE (Pappa, 2015). Pollen records indicate a specialisation in large-scale cereal, oil, wine (Buxó, 2008) and fruit production (Pérez-Jordà et al., 2017). These agricultural activities were sometimes associated with other activities, particularly mining (Pérez-Jordà et al., 2023), especially of lead-silver. Metallurgical activity was so extensive in the early first millennium BCE that lead contamination of Phoenician-Iberian origin (~1100–800 BCE) and later of Roman origin has been detected in Greenland ice cores (McConnell et al., 2018).
However, the flourishing maritime trade from the Iberian Peninsula to the eastern Mediterranean declined in the second half of the 6th century BCE during a Phoenician trade crisis, leading to the abandonment of numerous sites in the Iberian Peninsula (Aubet, 1995; Ruiz, 2007; Puckett, 2012; Díaz, 2015; Murillo-Barroso et al., 2016). The real impact of this economic crisis remains uncertain. It could manifest itself in a severe recession (Shefton, 1995; Dietler and Lopez-Ruiz, 2009 p.4–49) or even lead to political collapse and the overthrow of the current elites (Rothenberg and Blanco-Freijeiro, 1981; Díaz, 2015).
It was not until the arrival of the Romans, at the beginning of the 2nd century BCE, that lead-silver production resumed on a massive scale in Iberia. At that time, the production of the entire southwestern territory of the peninsula was exported to Rome, and the region underwent a political, socio-economic and cultural reorganisation (Keay, 2003; Myers, 2016).
2. Successive phases of intensive exploitation and mining decline in the Alicante region
The Cartagena-La Unión mining area, which is 23 km long and 6.5 km wide, is located in the extreme southeast of the Murcia region (Spain), amidst gently rolling terrain bordering the Mediterranean coast. Several phases of mineralisation have been identified within these mines, but the dominant facies corresponds to a stratiform deposit within Triassic limestones. Silver-bearing galena (±sphalerite) is intercalated within thin bands of calcite and dolomite (Andreazini et al., 2015). The most significant mineralised veins can reach thicknesses of up to 10 m and have the highest lead concentrations in the Iberian Peninsula (Oen et al., 1975). The richness of these deposits, together with their geographical location, which facilitated the export of products via coastal sea routes, contributed to the establishment of the mines of Cartagena-La Unión as a leading mining district from the prehistoric period until their mass exploitation in the 20th century (Vilar, 1986; Rico et al., 2009).
2.1. Periods of intensive exploitation between the 8th and 6th centuries BCE and during the Roman period
These mines are mainly known for their intensive exploitation between the 2nd century BCE and the 1st century CE, during the Roman period (Domergue, 1990; Caparros and del Carmen, 1999; Marín et al., 2010; Fabre et al., 2014; Rico et al., 2018). Lead ingots from Cartagena were exported throughout the Mediterranean during this period (Trincherini et al., 2009; Clemenza et al., 2017; Domergue and Rico, 2024).
However, while the importance of this ancient exploitation is undeniable, recent discoveries also reveal an earlier phase of intense lead exploitation between the 8th and the second half of the 6th century BCE. The ore from the Cartagena mines was then processed in workshops located close to the coast, less than 100 km from the mines (Fig. 1). Three Phoenician smelting workshops have been discovered:
- at La Fonteta (with smelting activity probably between 800 and 550 BCE (Renzi et al., 2009) but with occupation until the end of the 6th century (Rouillard et al., 2007))
- at Cabezo Pequeño del Estaño (750–650 BCE, Prados Martínez et al., 2018)
- at Punta de los Gavilanes (700–600 BCE, Polzer, 2014).
The presence of a fourth workshop is suspected, as paleopollution identified at the Laguna de Río Seco site suggests extractive lead metallurgy between 1150 and 550 BCE (García-Alix et al., 2013).
While Roman production was usually exported in the form of ingots, Phoenician lead was not typically exchanged in this way, but more often directly as galena or litharge. Litharges are an intermediate stage in lead-silver metallurgy. The reductive smelting of silver-bearing lead ores generates slag and bullion lead. A litharge is formed when this bullion lead is cupelled to separate the silver which is released in liquid metallic form, from the lead (which is then retained in the litharge). The reducing fusion of this litharge produced lead in metallic form.
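The chain just described (ore → bullion lead → cupellation → litharge plus silver → reducing fusion → lead) can be summarised as a simple mass balance. The sketch below is purely illustrative: the `cupellation` helper, the three-tonne figure and the 0.5 % silver content are assumptions made for the example, and the arithmetic ignores the oxygen taken up when lead oxidises to PbO.

```python
# Illustrative mass balance for cupellation: argentiferous bullion lead is
# split into metallic silver and lead retained in the litharge (as PbO).
# All numbers are hypothetical; oxygen uptake in PbO is ignored.

def cupellation(bullion_kg: float, silver_fraction: float) -> tuple[float, float]:
    """Return (silver_kg, lead_kg) for a given mass of argentiferous bullion.

    lead_kg is the lead fixed in the litharge, recoverable later by
    reducing fusion of the litharge.
    """
    silver_kg = bullion_kg * silver_fraction
    lead_kg = bullion_kg - silver_kg
    return silver_kg, lead_kg

# A Mazarron 2-sized cargo (~3 tonnes of lead in litharge form), assuming a
# hypothetical 0.5 % silver content in the original bullion:
silver, lead = cupellation(3000.0, 0.005)  # -> 15.0 kg silver, 2985.0 kg lead
```

The point of the sketch is only that cupellation conserves mass between the two output streams; actual yields depended on furnace losses that the sources cited above do not quantify.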
Ships loaded with several tonnes of lead sailed along the Alicante coast, as evidenced by two 7th century BCE shipwrecks discovered off the coast near the lead reduction workshop at Punta de los Gavilanes (Negueruela et al., 2004; Polzer, 2014). The first shipwreck, discovered at Bajo de la Campana, was loaded with over a ton of non-argentiferous galena from Almería (Polzer, 2014) in the form of 10,000 fragments (Polzer, 2012), accompanied by goods of exotic origin such as copper and tin ingots, bronze furniture, amber and ivory (Ruiz Cabrero and Mederos Martín, 2004; Nantet, 2016, p. 277–279). The second shipwreck, called Mazarrón 2, carried Phoenician amphorae and almost three tonnes of litharges made from lead mined in the Cartagena mining district (Renzi et al., 2009).
The Phoenician coastal navigation system (Renzi et al., 2009; Tejedor, 2018) allowed the spread of lead from Cartagena along the Spanish coasts (Murillo-Barroso et al., 2016). This trade was undoubtedly crucial, as lead was used, among other things, to extract silver from jarosites mined in the south of the Iberian Peninsula (Craddock, 2014; Murillo-Barroso et al., 2016). Lead exports to the Rio Tinto mines to extract silver from jarosites were organised as early as the 11th–9th centuries BCE (Wood and Montero-Ruiz, 2019) and intensively between the 8th and 7th centuries BCE (Neville, 2007, p. 140–141; Kaufman et al., 2016).
Between the 7th and 6th centuries BCE, lead from the Cartagena mines also reached the island of Ibiza, 250 km away. Indeed, isotopic analyses of lead show that lead from Cartagena was smelted there alongside local ores (Ramon Torres et al., 2011).
Silver production from various sites in the Iberian Peninsula was traded throughout the Mediterranean until the 7th century BCE, as demonstrated by isotopic analysis of silver artefacts from the easternmost Mediterranean (Eshel et al., 2019; Wood et al., 2019).
### 2.2. Few discoveries in the 5th-3rd centuries BC
While it is undisputed that production and export were highly significant before the 7th century BCE, to date, there is little evidence of Iberian mining activity after this period.
In the 6th century BCE, mines in Ibiza ceased activity (Ramon Torres et al., 2011), as did those in Catalonia (Rafel et al., 2010) and Rio Tinto (Fernández Jurado, 1993; Pérez Macías, 1999). Many smelting sites related to the extraction of silver from jarosites also ceased operations during this century (Rothenberg and Blanco-Freijeiro, 1981; Anguilano, 2012; Wood and Montero-Ruiz, 2019). The very limited quantity of silver artifacts discovered in the Iberian Peninsula for the following centuries (5th-3rd centuries BCE) seems to suggest that silver production shifted towards the eastern Mediterranean (Murillo-Barroso et al., 2016).
In the region of Alicante, discoveries of litharges after the 6th century BCE are also very rare, and no metallurgical workshops have yet been discovered. The question of mining remains open. The discovery of rare ceramics dated between 500 and 300 BCE in some mine workings (Domergue, 1987; Salvador, 2012; Bellón Aguilara, 2013) and of litharge cakes and cupels dated from 400 BCE at Punta de los Gavilanes (Ros Sala, 1993; Ros Sala et al., 2003) suggests that the Cartagena-La Unión mines may have been exploited. This exploitation, if it took place, would have been of very low intensity: a recent study of palaeopollution in the Bay of Cartagena highlights that anthropogenic lead contamination is barely detectable from around the 6th century BCE until the intense resumption of mining activities in the Roman period (Ortiz et al., 2021).
### 3. Materials and methods
Our study aims to test whether some lead was indeed produced in the Cartagena mining district between the crisis in Phoenician trade and the Roman mining revival. To this end, litharges from sites dating between the 5th and 3rd centuries BCE were analysed in order to define their composition and deduce their geographical origin. Two lead artefacts from the 5th century BCE were also analysed using EDS (SEM) and lead isotopes to determine whether they were derived from the smelting of the litharges studied (Table 1).
All the material studied comes from the Alicante region, from three sites located approximately 80 km north of the Cartagena mines.
The first of these, El Oral (San Fulgencio, Alicante) is an Iberian settlement located on the north bank of the Segura, 7 km from its mouth. The litharge fragments were found in a dwelling house (first half of the 5th century BCE), in an open-air room, mixed with iron slag, next to a stone anvil. This room probably served as a small metalworking workshop (Casal and Selles, 1993; Mira, 2001). The second site, Las Tres Hermanas (Aspe, Alicante), is an Iberian settlement dated to between the end of the 5th century BCE and the middle of the 4th century BCE. A pile of litharge cakes was found in a sloping area, against the wall of a house, but no specific structure was identified (Moratalla Jávega et al., 2023). Finally, the site of La Illeta dels Banyets, in El Campello (Alicante), occupies a small coastal island that has seen various phases of occupation (Bronze Age, Iberian, late Roman). The litharges were recovered from an artisanal facility linked to fishing, in a space dedicated to storage, with no other metallurgical elements. The level has been dated to the late 4th-early 3rd century BCE (Olcina Domenech et al., 2009).
EDS analyses were carried out with a Zeiss EVO 10 scanning electron microscope (SEM) on several distinct samples to determine their composition and identify any mineral phases present. Measurements were made on a sawn, flattened surface at an acceleration voltage of 20 kV and a working distance of approximately 8.5 mm.
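Tables 2–4 report, for each sample, mean weight percentages over six spot analyses together with a margin of error. A minimal sketch of that kind of aggregation is given below; the six Pb readings are hypothetical, and the relative-spread formula is an assumption for illustration, not necessarily the error model used by the authors.

```python
from statistics import mean, stdev

def summarize_spots(spots: list[float]) -> tuple[float, float]:
    """Mean weight-% and relative spread (in %) over repeated EDS spot analyses."""
    m = mean(spots)
    rel_err = stdev(spots) / m * 100.0  # sample std. dev. relative to the mean
    return round(m, 2), round(rel_err, 1)

# Six hypothetical Pb weight-% readings on one sawn litharge surface:
pb_spots = [54.8, 56.1, 55.0, 55.9, 55.3, 55.4]
pb_mean, pb_err = summarize_spots(pb_spots)  # -> (55.42, 0.9)
```

Averaging several spots is what makes a single reported value meaningful for a heterogeneous material like litharge, where porosity and metallic beads vary across the sawn surface.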
Lead isotope analyses were then carried out by MC-ICP-MS at the SEDISOR laboratory to determine the origin of the material studied.
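Provenance from lead isotopes rests on comparing a sample's ratios (206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb) with values published for candidate ore fields. The toy nearest-centroid match below illustrates the idea only: the field centroids and the sample values are hypothetical placeholders, and real provenance work compares against full ore-deposit datasets rather than single points.

```python
import math

# Hypothetical centroids (206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb) for three
# candidate ore fields; actual studies use full published isotope datasets.
ORE_FIELDS = {
    "Cartagena-La Union": (18.77, 15.67, 38.85),
    "Almeria": (18.72, 15.65, 38.78),
    "Rio Tinto": (18.21, 15.63, 38.30),
}

def nearest_field(sample: tuple[float, float, float]) -> str:
    """Name of the ore field whose centroid lies closest (Euclidean) to the sample."""
    return min(ORE_FIELDS, key=lambda name: math.dist(sample, ORE_FIELDS[name]))

litharge_sample = (18.76, 15.66, 38.84)  # hypothetical measured ratios
origin = nearest_field(litharge_sample)  # -> "Cartagena-La Union"
```

A nearest-centroid rule is a crude stand-in for the statistical comparison used in provenance studies, but it captures why isotopically distinct districts such as Cartagena-La Unión and Rio Tinto can be told apart.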
### 4. Results
#### 4.1. Chemical composition and typology of the material examined
##### 4.1.1. Macroscopic appearance
The litharge cakes come from archaeological contexts dating from after the 6th century BCE. They have similar dimensions, with diameters ranging from 21 to 28 cm and thicknesses not exceeding 2 cm. They have a thin crust of cream-coloured surface alteration. When freshly fractured, their textures are quite similar: they show a more or less laminar structure characterised by parallel bands of whitish minerals and visible greyish spheres homogeneously dispersed in a red-orange matrix (Fig. 2). The most porous portions of the litharges are richer in metallic lead beads.
It is noteworthy that the diameter of the litharges studied corresponds to that of the litharges from the Mazarrón 2 shipwreck, dated to the 7th century BCE (Negueruela et al., 2004) and that the texture and thickness of these litharges are similar to those of the small fragments found in the La Fonteta workshop, dated to the 8th to mid-6th century BCE (Renzi et al., 2009).
##### 4.1.2. Chemical composition of the litharge
The presence of silver metal beads on the surface of the litharge cakes indicates a cupellation process (in the case of the litharge from El Oral, the presence of 0.49 % silver had already been noted by Abad Casal and Selles (1993, p. 265)). Analyses did not identify silver in the matrix of the litharge, indicating the efficiency of the separation of lead and silver (silver being present only on the surface).
The composition of the litharge matrices is very homogeneous (Tables 2, 3 and 4) and the litharges from the three sites studied are similar. They have high lead (Pb) contents, ranging from 55 to 68 %.
The lead oxide matrix is always accompanied by silicified calcium carbonates (CaCO3 ± Si,Al) corresponding to the whitish elements sometimes visible to the naked eye (Fig. 3). Pure lead beads (corresponding to the metallic beads also visible to the naked eye) are abundant in the porosities.
The chemical composition of litharge explains the metallurgical
Fig. 2. Plan view, left: macroscopic appearance of litharge cakes from Las Tres Hermanas; cross-section, right: top – sawn surface of a porous litharge (Las Tres Hermanas), middle – highly altered surface of a non-porous litharge (Las Tres Hermanas), bottom – a fragment of litharge (altered surface) from the Campello site (La Illeta dels Banyets).
Table 2
EDS SEM analysis of Las Tres Hermanas litharges (weight %; averages over 6 analyses per sample).
| Sample | C | O | Al | Si | Pb | Ca | Comment |
|--------|------|-------|------|------|-------|-------|------------|
| L1 TH | 6.97 | 20.20 | 2.39 | 4.66 | 55.41 | 10.38 | non-porous |
| L2 TH | 4.78 | 19.49 | 0.77 | 3.03 | 59.56 | 11.63 | non-porous |
| L3 TH | 5.11 | 20.28 | 1.15 | 1.62 | 63.37 | 8.20 | porous |
| Margin of error | 9 % | 10 % | 9–11 % | 6–8 % | 2.5 % | 5.5–12 % | |
Table 3
EDS SEM analysis of litharges found at Campello (weight %; sample L CA, average of 6 analyses).
| Sample | C | O | Al | Si | Pb | Ca |
|--------|------|-------|------|------|------|------|
| L CA | 4.16 | 15.76 | 1.54 | 2.81 | 67.5 | 8.23 |
| Margin of error | 9.18 % | 11.63 % | 13.38 % | 9.26 % | 3.40 % | 9.24 % |
Table 4
EDS SEM analysis of El Oral litharges (weight %; sample L EO, average of 6 analyses).
| Sample | C | O | Al | Si | Pb | Ca |
|--------|------|-------|------|------|-------|-------|
| L EO | 3.08 | 22.71 | 1.21 | 2.03 | 59.63 | 11.22 |
| Margin of error | 10.62 % | 11.22 % | 16.88 % | 12.03 % | 4.52 % | 8.37 % |
process employed. A mixture of lead and silver (bullion lead) is melted in a circular reactor about 25 cm in diameter. The lining of the reactor is porous and rich in calcium. It absorbs the lead by capillarity, while the silver (in metallic form) remains on the surface. The process used corresponds to a classical cupellation process (see, for example, Izquierdo De Montes, 1997; Bayley and Eckstein, 2006; Martinón-Torres et al., 2008; L’Héritier et al., 2015; Moureau and Thomas, 2016), but the striking fact here is that the size and the composition of the litharges found at the three sites studied are almost identical.
Finally, the SEM (EDS) analyses of a small, highly altered lead ingot found at El Oral (Fig. 4) and of a small, shapeless lead drop (approximately 1 cm wide by 3 cm high) found among the litharge fragments at Las Tres Hermanas indicate that both are composed of pure lead. This again emphasises the effective separation of lead and silver during the cupellation process.
The litharges from the three sites investigated are similar in type, texture and composition. They originate from the same type of cupellation process, probably using ceramic vessels of similar shape.
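As a quick arithmetic check on the quoted 55–68 % range, the per-sample Pb contents from Tables 2–4 can be pooled; a minimal Python sketch (values transcribed from the tables):

```python
# Pb weight % from Tables 2-4 (EDS means over 6 analyses per sample)
pb_wt = {
    "L1 TH": 55.41, "L2 TH": 59.56, "L3 TH": 63.37,  # Las Tres Hermanas
    "L CA": 67.5,                                     # Campello
    "L EO": 59.63,                                    # El Oral
}

lo, hi = min(pb_wt.values()), max(pb_wt.values())
mean = sum(pb_wt.values()) / len(pb_wt)
print(f"Pb range: {lo:.1f}-{hi:.1f} wt%, mean {mean:.1f} wt%")
```

The pooled minimum and maximum (about 55.4 and 67.5 wt%) reproduce the 55–68 % range stated in the text.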
#### 4.2. Identification of the origin of the lead
Although the lead-silver ore deposits of Cartagena are the closest to the sites, and the litharges have textures similar to those produced from Cartagena lead one to two centuries earlier (Renzi et al., 2009), it cannot be ruled out that the lead contained in these litharges came from other, much more distant, sources. The lead isotope analyses show a very strong similarity of signatures among the artefacts, supporting the hypothesis of a common origin (Table 5).
Several hypotheses of provenance must be considered:
- The first is a local, i.e. Iberian, origin;
Fig. 3. On the left, texture and structure of a litharge: the bands of whitish minerals visible with the naked eye correspond to calcium carbonates (slightly silicified) organised in large bands, but also to small crystals (dark in EDS) interspersed in the PbO matrix; on the right, presence of pure lead beads (corresponding to the beads visible with the naked eye) in the porosity of the litharge (EDS).
Fig. 4. Lead ingot analysed (weight 8.5 g).
Table 5
Lead isotope analyses carried out as part of this study, results standardised to NIST SRM 981.
| Site | ID | Description | $^{206}\text{Pb}/^{204}\text{Pb}$ | $2\sigma$ | $^{207}\text{Pb}/^{204}\text{Pb}$ | $2\sigma$ | $^{208}\text{Pb}/^{204}\text{Pb}$ | $2\sigma$ |
|-----------------------|------|-----------------|----------------------------------|-----------|----------------------------------|-----------|----------------------------------|-----------|
| Las Tres Hermanas | L1 TH| Litharge | 18.7175 | 0.0004 | 15.6805 | 0.0005 | 39.0188 | 0.0016 |
| Las Tres Hermanas | L2 TH| Litharge | 18.7067 | 0.0012 | 15.6851 | 0.0009 | 39.0320 | 0.0124 |
| Las Tres Hermanas | L3 TH| Litharge | 18.7045 | 0.0012 | 15.6845 | 0.0008 | 39.0283 | 0.0114 |
| Las Tres Hermanas | 18 TH| Pb artefact | 18.7069 | 0.0008 | 15.6824 | 0.0007 | 39.0229 | 0.0112 |
| El Oral | L EO | Litharge | 18.7071 | 0.0014 | 15.6850 | 0.0011 | 39.0325 | 0.0132 |
| El Oral | P EO | Pb ingot | 18.7080 | 0.0008 | 15.6792 | 0.0007 | 39.0156 | 0.0134 |
| Campello | L CA | Litharge | 18.6943 | 0.0015 | 15.6792 | 0.0011 | 39.0026 | 0.0141 |
- The second is from the Aegean region, where numerous lead-silver mines were in operation in the Cyclades, on the island of Thasos or in the Laurion area (Pernicka et al., 1981; Jones, 1982; Weisgerber and Pernicka, 1995; Mussche, 2006; Papadopoulou, 2011);
- The third hypothesis is Sardinian; indeed, lead-silver metallurgy workshops have been discovered in the Montevecchio mining basin in the south of the island (Caro et al., 2013);
- The last is Anatolian (Taurus Mountains), although the lead-silver exploitations are poorly dated (Pitarakis, 1998).
Bivariate plots normalised to the stable isotope ($^{204}\text{Pb}$) clearly show that a Sardinian origin can be rejected (the $^{206}\text{Pb}/^{204}\text{Pb}$ ratios from these deposits are too low); the hypothesis of an Aegean origin must also be rejected because the $^{206}\text{Pb}/^{204}\text{Pb}$ ratios are slightly too high. Only some
signatures from the Taurus Mountains could be close to the signatures of the artefacts in the diagram $^{208}\text{Pb}/^{204}\text{Pb}$ vs $^{206}\text{Pb}/^{204}\text{Pb}$, but they differ in the ratios $^{207}\text{Pb}/^{204}\text{Pb}$ vs $^{206}\text{Pb}/^{204}\text{Pb}$; the Anatolian origin can be rejected. The mined ores would therefore be of Iberian origin (Fig. 5).
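The internal homogeneity of the artefact signatures can be verified directly from Table 5: across all seven samples, the total spread of each normalised ratio is at most about 1.2 per mil. A minimal sketch (values transcribed from Table 5, with decimal commas read as decimal points):

```python
# 206Pb/204Pb, 207Pb/204Pb, 208Pb/204Pb ratios from Table 5
signatures = {
    "L1 TH": (18.7175, 15.6805, 39.0188),
    "L2 TH": (18.7067, 15.6851, 39.0320),
    "L3 TH": (18.7045, 15.6845, 39.0283),
    "18 TH": (18.7069, 15.6824, 39.0229),
    "L EO":  (18.7071, 15.6850, 39.0325),
    "P EO":  (18.7080, 15.6792, 39.0156),
    "L CA":  (18.6943, 15.6792, 39.0026),
}

for k, label in enumerate(("206Pb/204Pb", "207Pb/204Pb", "208Pb/204Pb")):
    vals = [s[k] for s in signatures.values()]
    spread = max(vals) - min(vals)
    per_mil = 1000 * spread / (sum(vals) / len(vals))
    print(f"{label}: total spread {spread:.4f} ({per_mil:.2f} per mil)")
```

The largest spread (about 1.2 per mil on ²⁰⁶Pb/²⁰⁴Pb) is of the same order as the quoted 2σ uncertainties, consistent with a common origin for the artefacts.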
In order to define their origin more precisely, the lead isotopic signatures of the artefacts can also be compared with different mining areas in the Iberian Peninsula. Six potential mining areas located relatively close (within 150 km) can be considered.
Among the available signatures are those from the mines of Cartagena (Graeser and Friedrich, 1970; Baron et al., 2017; Milot et al., 2021; Domergue and Rico, 2024 p. 25), which were extensively exploited in previous centuries (10th–8th centuries BCE to mid-6th century BCE) to produce litharges similar to those studied. In addition, the signatures of neighbouring ore deposits include:
- The closest mining district to Cartagena: Mazarrón (data from the Oxalid database; Graeser and Friedrich, 1970; Milot et al., 2021)
- Sierra Almagrera (data from the Oxalid database; Arribas and Tosdal, 1994; Hunt Ortiz, 2003; Montero Ruiz and Murillo-Barroso, 2010; Murillo-Barroso et al., 2019);
- Sierra de Bedar (data from the Oxalid database; Montero Ruiz and Murillo-Barroso, 2010; Murillo-Barroso et al., 2019);
- Cabo de Gata (data from the Oxalid database; Hunt Ortiz, 2003);
- Sierra de Gador (Arribas and Tosdal, 1994; Montero Ruiz and Murillo-Barroso, 2010)
Furthermore, although located respectively 250 km and over 500 km from the site, the mining areas of Ibiza (data from Hermanns, 2014), Rio Tinto (Oxalid database; Marcoux, 1998; Pomies et al., 1998; Hunt Ortiz, 2003), and Catalonia (Rafel et al., 2019; Montero Ruiz, 2017; Canals and Cardellach, 1997) were also considered. This choice is based on the fact that lead mining activity has been previously confirmed in these areas during earlier periods.
This second projection shows that the most probable origin of all the artefacts is the Cartagena-Mazarrón mining area (Fig. 6). This result is interesting for several reasons. Firstly, it emphasises that lead objects were produced from the litharges in circulation. In addition, the litharges analysed have signatures comparable to those produced at La Fonteta one to two centuries earlier. These results support those of Renzi et al. (2009), who also attribute a Cartagena-Mazarrón origin to the litharges from the La Fonteta workshop and the Mazarrón 2 shipwreck. Those mines probably provided the lead that led to the production of litharges of comparable typology between the 8th and 3rd centuries BCE.
Therefore, although no workshop has yet been discovered, it appears that the production of litharge between the 5th and 3rd centuries BCE followed a production model similar to that of the period before the commercial crisis: the ore source is the same, as are the texture and chemical composition.
### 5. Discussion: Low but continuous production
If it is confirmed that the mines of Cartagena were indeed exploited between the 5th and 3rd centuries BCE, it is very likely that the production processes of the litharges remained identical throughout the period from the 8th to the 3rd century BCE, despite the commercial crisis of the 6th century BCE. It was only during Roman exploitation that practices changed: the litharges then took the form of small conical rolls, one to two centimetres in diameter (Baron et al., 2017).
The litharges would have circulated on a regional scale, where they would have been used to extract metallic lead, which could have been used to shape lead artefacts or incorporated into ternary bronze alloy compositions. Some of this lead may also have been exported along new trade routes. An argument in favour of the emergence of new exchange networks is based on the analysis of silver objects found at Mas Castellar de Pontos (Catalonia, 500–200 BCE): some of these objects are thought to have the isotopic signature of lead from the mines of Cartagena (Montero Ruiz et al., 2008). Thus, Catalonia, which had previously been a producer of lead, could have imported it at the beginning of the 5th century BCE.
The sudden end of silver production in the Iberian coastal regions (from the Río Tinto mines to the island of Ibiza) certainly caused a drastic fall in lead production from the Cartagena mines, to the point where it would have been barely perceptible in the pollution of the bay. However, this study shows that production in Cartagena would have continued on a smaller scale. Some silver and lead would have been extracted from the mines and smelted using the same processes as before the crisis, probably by the same population group.
### 6. Conclusion
The litharges from three sites in the province of Alicante, dated to a period after the Phoenician trade crisis, show strong geochemical similarities with those produced in the same region before this crisis. The diameter, thickness, and texture of the litharges indicate a common manufacturing process involving the absorption of lead through capillarity by a porous and calcium-rich furnace lining. The presence of metallic silver on their surface indicates that they are linked to the same cupellation process.
Lead isotope analysis shows that the lead came from the Cartagena mines, as did that of the earlier litharges. This technological continuity, from the deposits mined to the metallurgical processes used to produce standardised litharges, shows that the same chaîne opératoire persisted in the region from the 8th to the 3rd century BCE.
Estimating the production tonnages from the mines is complex, as the mining areas have been extensively reworked from Roman times to the modern period. However, sedimentary archives indicate a sharp decline in production during the 6th century BCE. Production from the Cartagena mines probably declined after this crisis, but did not cease altogether.
CRediT authorship contribution statement
Céline Tomczyk: Writing – original draft, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Christophe Petit: Writing – review & editing, Funding acquisition. María Berná: Resources. Laurent Costa: Project administration, Funding acquisition. Jessica Legendre: Methodology, Investigation. Jesús Moratalla: Validation, Supervision. Sidonie Revillon: Methodology, Investigation, Formal analysis. Pierre Rouillard: Writing – review & editing, Validation, Supervision, Project administration.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
All data are included in the manuscript.
Acknowledgements
The SEM equipment used in this study belongs to the MAPS platform - Imagerie des patrimoines et spatialisation et microscopie pour les matériaux anciens de la MSH Mondes (UAR 3225). It was funded by the Île-de-France region as part of the major interest Matériaux anciens et patrimoniaux (DIM – MAP), the CNRS, and the LabEx ‘Les passés dans le présent’.
The lead isotope analyses were funded by the LabEx DynamiTe and the ArScAn laboratory. We would like to thank Zoï Tsirtsoni and Nicole Lozouet for their support in obtaining this funding.
The authors also wish to thank Romualdo Seva Román, María Dolores Landete Ruiz, and Cristina Biete Banón for the preliminary analyses conducted on the material from Las Tres Hermanas. |
Peptide-Based Strategies Against SARS-CoV-2 Attack: An Updated In Silico Perspective
G. Moroy, P. Tuffery
To cite this version:
G. Moroy, P. Tuffery. Peptide-Based Strategies Against SARS-CoV-2 Attack: An Updated In Silico Perspective. Frontiers in Drug Discovery, 2022, 2, pp.899477. 10.3389/fddsv.2022.899477. hal-04281646
HAL Id: hal-04281646
https://cnrs.hal.science/hal-04281646v1
Submitted on 13 Nov 2023
Peptide-Based Strategies Against SARS-CoV-2 Attack: An Updated In Silico Perspective
G. Moroy and P. Tuffery*
Université Paris Cité, CNRS, INSERM, Unité de Biologie Fonctionnelle et Adaptative, Paris, France
Because of its scale and suddenness, the SARS-CoV-2 pandemic has created an unprecedented challenge in terms of drug development. Apart from being natural candidates for vaccine design, peptides are a class of compounds well suited to target protein-protein interactions, and peptide drug development benefits from the progress of *in silico* protocols that have emerged within the last decade. Here, we review the different strategies that have been considered for the development of peptide drugs against SARS-CoV-2. Thanks to progress in experimental structure determination, structural information has rapidly become available for most of the proteins encoded by the virus, easing *in silico* analyses to develop drugs or vaccines. The repurposing of antiviral/antibacterial peptide drugs has not been successful so far. The most promising results, but not the only ones, have been obtained targeting the interaction between the SARS-CoV-2 spike protein and the Angiotensin-Converting Enzyme 2, which triggers cellular infection by the virus and its replication. Within months, structure-based peptide design identified candidates that compete for this interaction with picomolar affinity, proving that the development of peptide drugs targeting protein-protein interactions is maturing. Although no drug specifically designed against SARS-CoV-2 has yet reached the market, lessons from peptide drug development against SARS-CoV-2 suggest that peptide development is now a plausible alternative to small compounds.
**Keywords:** SARS-CoV-2, peptide, *in silico*, protein-protein interaction, synthetic vaccine
### 1 INTRODUCTION
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is responsible for COVID-19 (coronavirus disease 2019) and spread rapidly following its emergence in Wuhan in December 2019. While the majority of COVID-19 infections are relatively mild, with most patients recovering in 2–3 weeks, a significant number of patients can develop a severe respiratory illness. Due to the instability of its genome, numerous mutations have appeared in SARS-CoV-2. Some of them have conferred traits considered beneficial for viral adaptation (Pachetti et al., 2020), such as increased transmissibility, an enhanced ability to evade natural immunity, or decreased susceptibility to neutralizing antibodies. Currently, the World Health Organization (WHO) has designated five variants of concern: the Alpha, Beta, Gamma, Delta, and Omicron variants (Karim and Karim, 2021; Starr et al., 2021; Tegally et al., 2021; Xie et al., 2021), but the list keeps growing.
Since the end of 2020, several COVID-19 vaccines have been available, reducing the spread, severity, and deaths caused worldwide. A monoclonal antibody (sotrovimab) and two combinations of two monoclonal antibodies (casirivimab/imdevimab and bamlanivimab/etesevimab) have been approved as drugs. However, their manufacturing costs are high and they are not convenient for patients since they are administered by intravenous injection. Currently, four non-biologic drugs have been approved. In October 2020, Remdesivir was the first small molecule approved by the Food and Drug Administration (FDA). It is a broad-spectrum antiviral medication, which is administered by intravenous injection. Remdesivir can also be administered in combination with baricitinib, a drug approved for the treatment of rheumatoid arthritis. Since July 2021, the FDA has authorized the use of baricitinib without remdesivir for patients requiring supplemental oxygen. Interestingly, in March 2022, the RECOVERY (Randomised Evaluation of COVID-19 Therapy) trial showed that baricitinib alone is able to reduce mortality by about 20 percent. In December 2021, Molnupiravir was approved by the FDA for certain patients for whom other treatments are not possible. Molnupiravir is an antiviral drug, which acts by preventing RNA virus replication. Since the end of December 2021, a fourth drug, Paxlovid, has been available for people who are at high risk of developing severe COVID-19. Paxlovid contains the antiviral medications nirmatrelvir and ritonavir. Baricitinib, Molnupiravir, and Paxlovid are administered orally.
Although vaccines and the four available drugs represent major therapeutic advances against COVID-19, it is still necessary to develop new treatments that are more convenient or less limiting for patients (Fenton and Keam, 2022). Even though small molecules are usually more easily manufactured, more stable, and less costly than biologics (Makurvet, 2021), it is important not to focus only on small molecules when designing new drugs. In particular, peptide-based drugs have significant advantages such as low toxicity and better specificity.
Peptides have in recent years gained increased interest as candidate therapeutics, with presently over 80 peptide drugs on the market, more than 150 peptides in clinical development, and over 400 undergoing preclinical studies (Muttenthaler et al., 2021). Peptides are easy to develop both in terms of time and technology and are cost-effective, which makes them good candidates for the development of hits or probes up to the proof of concept. Although peptides have the advantage of low intrinsic toxicity and better specificity compared to small compounds, their main limitations have long been the mode of delivery, mostly parenteral, and the renal clearance that quickly reduces their bioavailability (Muttenthaler et al., 2021). However, recent developments have addressed those limitations: new modes of formulation and protection, to cite a few, have resulted in improved bioavailability, biostability, and biodelivery. Some peptide drugs on the market, such as semaglutide or plecanatide, can be administered orally (Zhang & Chen, 2021; Lewis et al., 2022). In addition, progress in cell-penetrating peptides now makes it feasible to design tissue-, cell-, or organelle-specific peptides (Xu et al., 2019), enlarging the landscape of possible applications of peptides targeting PPIs to almost any kind of interaction. Apart from synthetic hormones (Vlieghe et al., 2010) and peptides targeting GPCRs (Davenport et al., 2020), peptides are especially well suited to target protein-protein interactions (Nevala and Giralt, 2015; Wójcik and Berlicki, 2016; Bruzzoni-Giovanelli et al., 2018). Compared to small compounds, peptides have higher molecular weights and can bind to a larger surface, which usually results in binding targets with higher specificity and affinity.
The development of peptide drugs also benefits from the progress of structural biology and structural bioinformatics. SARS-CoV-2 illustrates very well the reactivity of structural biology. Thanks to X-ray diffraction and electron microscopy, many SARS-CoV-2 protein structures became available as early as 2020, and most structures were available by mid-2021, with some exceptions, as illustrated by the compendia available at RCSB or EBI (https://www.rcsb.org/news/feature/5e74d55d2d410731e9944f52; https://www.ebi.ac.uk/pdbe/covid-19). Furthermore, the structure of some protein complexes contributing to viral infection has also been solved, particularly that of the SARS-CoV-2 spike protein receptor-binding domain (RBD) in interaction with the Angiotensin-Converting Enzyme 2 (ACE2) [PDB: 6M0J (Wang et al., 2020)]. Although not all molecular mechanisms related to SARS-CoV-2 viral infection are fully understood, this structural information has proved invaluable for proposing peptide candidates to combat the viral attack. Concomitantly, the progress of *in silico* approaches for the prediction of peptide structure and of protein-peptide and protein-protein interactions, as well as progress in conformational sampling, including molecular dynamics (MD) simulations and docking, can in principle make structure-based *in silico* protocols effective for identifying peptide candidates.
Here, we review the various results obtained so far that exploit *in silico* protocols for peptide development in the context of the SARS-CoV-2 pandemic. We briefly describe efforts to identify immunogenic peptides from the available structures, as well as efforts to identify candidate peptide drugs targeting viral infection, and we discuss their expected impact on the road to SARS-CoV-2 drug development.
### 2 THE DESIGN OF VACCINE BASED ON SYNTHETIC PEPTIDES
T cells have a crucial role in the immune response to viral infections like COVID-19. The presentation of short viral peptides by the human leukocyte antigen (HLA) complex is the first step in the development of T-cell immunity. Thus, the viral peptides presented by HLA class I molecules and HLA class II molecules can activate CD8 T cells and CD4 T cells, respectively. Once activated, CD8 T cells can recognize and kill virus-infected cells. Activated CD4 T cells act by the release of cytokines, which stimulate other immune cells to trigger the appropriate immune response.
At the beginning of 2020, understanding how the immune system reacts to SARS-CoV-2 infection was critical to the development of vaccines. To this end, the T cell memory of 42 patients who recovered from SARS-CoV-2 infection and 16 unexposed donors was studied by experimental assays using peptides spanning SARS-CoV-2 (Peng et al., 2020). 41 peptides containing CD4⁺ and/or CD8⁺ epitopes were identified. Among these peptides, three deriving from the spike protein, two from the membrane protein, and one from the nucleocapsid protein were frequently targeted by T cells, suggesting that they can trigger an immune response.
A systematic vaccine-informatics approach has been applied to the spike protein to identify antigenic peptides that could be used to design a novel vaccine candidate (Alam et al., 2021). Based on the spike protein sequence, the authors applied several bioinformatics tools to highlight potential immunogenic peptides and to assess their autoimmune, allergic, and toxic responses. Thus, 12 antigenic peptides were identified. They have 80%–90% identity with experimentally identified epitopes of SARS-CoV and are predicted to be nontoxic, nonallergenic, and highly antigenic. Moreover, the authors performed docking computations of eight peptides on the surface of HLA molecules to understand how the peptides interact with HLA. Although the authors are confident in the ability of these peptides to trigger an effective immune response against SARS-CoV-2, no experimental confirmation supports their conclusions.
Recently, a peptide vaccine candidate, named CoVac-1, completed a phase I clinical trial (Heitmann et al., 2022). CoVac-1 is composed of multiple SARS-CoV-2 peptide epitopes derived from various viral proteins (Figure 1), such as spike, nucleocapsid, membrane, envelope, and open reading frame 8. It contains a synthetic toll-like receptor 1/2 ligand, which acts as an adjuvant to help the vaccine produce a better immune response. From 28 November 2020 to 1 April 2021, 36 healthy adults were enrolled and received one dose of CoVac-1. The participants were followed up until 56 days after the CoVac-1 injection. 56 adverse effects were observed, but they were predominantly mild (headache, fatigue, nausea ...), indicating that CoVac-1 has a favorable safety profile. Moreover, T cell responses persisted 3 months following vaccination and were not affected by the mutations of the Alpha and Beta variants. CoVac-1 is currently in a phase II clinical trial.
### 3 THE SEARCH FOR DRUGS COMBATING SEVERE ACUTE RESPIRATORY SYNDROME CORONAVIRUS 2 ATTACK
#### 3.1 Structure-Based Peptide Design
Figure 2 summarizes the different strategies that have been explored to block SARS-CoV-2 proliferation. The main
| Best peptide(s) | Rationale | Target | Methods | Validation | References |
|-------------------------------------------------------------------------------|------------------------------------------------|-----------------|------------------------------------------------------------------------|------------|-----------------------------|
| FLDKFNHAEAEDLFYQSSL | ACE2 fragment binding RBD | ACE2: RBD | Docking (PyDock, HADDOCK, ZDOCK) MD refinement (50 ns, Gromacs – conditions not detailed) Toxicity prediction (ToxinPred) | None | Baig et al. (2020) |
| VPEQLYCLLQKFNGEAEMLFSRS | ACE2 evolved fragment binding RBD | ACE2: RBD | MD (100 ns, NAMD2.13—CHARMM36 force field) MMGB-SA free energies Adaptative evolution based on MD | None | Chaturvedi et al. (2020) |
| TETQAKTFLDKFNHSAEDLFYQS IFEQAKTFTAQFNHEKEDELFYQS IFEQAKTFTAQFNHEKEDELFYQS EGEERIQQDKRKNEQEEDKRYORYGRGKGHQP | Evolved ACE2 fragment binding RBD | ACE2: RBD | EvoDesign (1000 independent design trajectories) Structural homology Docking (HADDOCK) $K_d$ prediction (PRODIGY) MD: (100 ns, Gromacs 2020.2) | None | Huang et al. (2020b) |
| GSHMGDAQDKLKYLVKQLERALREKKSLDELSLEELEKNPSEDALVENNRLNVENNIIVEVLRIIILEAKASAKLA | Evolved ACE2 fragment binding RBD | ACE2: RBD | Structural homology Docking (HADDOCK) $K_d$ prediction (PRODIGY) MD: (100 ns, Gromacs 2020.2) | None | Jaiswal and Kumar, (2020) |
| EDLFYQ | ACE2 fragment | ACE2: RBD | Structural analysis | IC$_{50}$: 1.9 mM Infection reduction rate: ~70% | Larue et al. (2020) |
| QAKTFLDKFNHAEAEDLFYQSSLA | ACE2 fragment binding RBD | ACE2: RBD | Fragment identification: Rosetta/PeptiDerive Local docking: FlexPepDock/Rosetta (300 models) Single point mutational scan: Rosetta/backrub | Infection reduction rate: 60% | Chatterjee et al. (2020) |
| DKEWILQKYEIMRLDELGHAEASMRVSDDLIEFMKKGD ERILLEAERLLEEVER | Mini-protein binding RBD | ACE2: RBD | Template based design: Rosetta-fragment assembly Docking: Rosetta-RifDock + de novo scaffold library | IC$_{50}$: 23 pM | Cao et al. (2020) |
| EEQAKTFLDKFNHAEAEDLFYQSS EEQAKTFLDKFNHAEAEDLFYQSSLASWNYNNTNITEE EEQAKTFLDKFNHAEAEDLFYQSS-G-LGKGDFR SALEEQYKTFLDKFMHELEDLLYQLAL-nh2 | ACE2 fragment | ACE2: RBD | MD: Gromacs 5.1.4, Structure Based Model/Go-model, WHAM. | None | Freitas et al. (2021) |
| QAKTFLDKFNHAEAEDLFYQ | ACE2 fragment | ACE2: RBD | User expertise based optimization Docking perturbation upon amino acid substitution (Autodock-vina, 108 peptides) MD (Gromacs 4.6.1—conditions not detailed) | IC$_{50}$: 42 nM $K_d$: 0.03 nM IC$_{50}$: 0.7 µM | Karoyan et al. (2021) |
| GARAHANSIVQOLVSEGADLVQTYVALVAALNGLEVNSR VEQNIFRQHFPNMPMHGISADEDKLAFALAGALERATRQ GHIEIHANSIVQOLVSEGADISRTLRLLFALAFLRGIEVRFSR VEQNIFRQHFPNMPMHGISSRDKLALLALLGAELALVN GAEAHANSIVQOLVSEGADLARTYALLLAATNGDRVNFSR VEQNIFRQHFPNMPMHGISADELAIALLGALERADRQ | ACE2 optimized fragment | ACE2: RBD | Scaffold library Scaffold docking (patchdock, 2000 models per scaffold) Interface design: Rosetta-fastdesign | None | Etemadi et al., 2022 |
| Ace-TIEEQ-Z-KTFLDK-X-NHEAEDLFYQ-X-SLA-X-WN-nh2 X,Z: stapling residues | ACE2 optimized fragment | ACE2: RBD | Stapling | $K_D$: 2.22 $\mu$M, $IC_{50}$: 2.8 $\mu$M, Serum stability | Curelli et al. (2020) |
| IEEQAKTFLDFNHEKEDLEYQSSLASWNYNITNIT | ACE2 fragment | ACE2: RBD | Stapling | $K_D$: 2.1 $\mu$M, $IC_{50}$: 3.6 $\mu$M | Maas et al. (2021) |
| NCKUJIANOFNSAIGKIQDSSLSTASALGKLQDVWNQNAQALNTLVKQLVPRGSGGSGGSGGLEVLFQGPGINASVNIQK | Spike fragment | Fusion | Homology modeling (swissmodel) | None | Ling et al. (2020) |
| EIDRLNEVAKNLNESLIDL | linker | | | | |
| VAPGTAVLRQWLPTGTLLVDSDLNDVFVSDADSTLIG | Nsp10 fragments binding nsp16 | nsp10: nsp16 | Stable binding to nsp16 (MD) (150 ns, Gromacs, Charmm36 force field, TIP3P water) | None | Dutta and Iype, (2021) |
| KGIMMNVAKYTQLCCYLNTTLTLAVPYNDKGVAPGTAVLRQWLPGTTLVDSLNDVFVSDADSTLIG | | | | | |
strategies are to prevent virus entry into the cell and to block its replication. We discuss these in the next section. A summary of the identified peptides, the methods used to identify them, and their experimental properties, if measured, is provided in Table 1.
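Several workflows in Table 1 convert a predicted binding free energy into a dissociation constant (e.g., the PRODIGY step); the underlying relation is ΔG = RT ln K_d. A minimal sketch of that conversion (the ΔG value below is illustrative, not taken from any cited study):

```python
import math

def kd_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Convert a binding free energy (kcal/mol, negative = favourable)
    into a dissociation constant Kd (molar) via dG = RT ln(Kd)."""
    r_kcal = 0.0019872  # gas constant in kcal/(mol*K)
    return math.exp(dg_kcal_per_mol / (r_kcal * temp_k))

# Illustrative: a dG of about -12.4 kcal/mol corresponds to a sub-nanomolar Kd
kd = kd_from_dg(-12.4)
print(f"Kd = {kd:.2e} M")
```

This makes explicit why small differences in predicted ΔG translate into order-of-magnitude differences in K_d: every ~1.4 kcal/mol shifts K_d by a factor of ten at room temperature.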
##### 3.1.1 Preventing Virus Internalization by Inhibiting the Spike-Angiotensin-Converting Enzyme 2 Interaction
SARS-CoV-2 enters cells through the interaction of the spike glycoprotein with the ACE2 human receptor (Gheblawi et al., 2020; Papageorgiou and Mohsin, 2020), making the spike/ACE2 interaction a preferential target to prevent cell infection. A structure obtained by electron microscopy at 2.9 Å resolution was reported at the beginning of 2020 (Yan et al., 2020). Figure 3A shows that on the ACE2 side, the interaction involves mostly the N-terminal α₁-helix (residues 24–53, sequence: QAKTFLDKFNHEAEDLFYQSSLASWNYNTN), and to a lesser extent residues 79–83 (sequence: LAQMY) and 353–357 (sequence: KGDFR). On the spike protein side, contacts with residues from the RBD involve mostly the stretch encompassing residues 480–501, and to a lesser extent the stretch encompassing residues 448–454 (Figure 3B). More precisely, Barh et al. (2020) have suggested that effective peptides must bind to key positions of the RBD (G485, F486, N487, Q493, Q498, T500, and N501) and that F486, Q493, and N501 are critical residues.
The dominant strategy has been to design peptides directly inspired by the ACE2 peptidase, binding the RBD and thus competing with the interaction of the RBD with ACE2, preventing downstream cell penetration.
Special attention has predominantly been paid to the 30 amino acid long α₁-helix critical for RBD-ACE2 binding. From the analysis of the structures of SARS-CoV-2 and SARS-CoV in interaction with ACE2, Larue et al. (2020) have identified two ACE2-derived peptides able to bind the Spike RBD in affinity precipitation assays. Those peptides have shown the ability to inhibit Spike-mediated infection with IC₅₀ values in the low millimolar range. Starting from the fragment between residues 21 and 45 of the α₁-helix, Sitthiyotha and Chunsrivirot (2020) have used computational protein design and molecular dynamics (MD) simulations to design peptides with enhanced theoretical affinity for the SARS-CoV-2 RBD. During this iterative process, the design focused on positions not reported to form favorable interactions with the SARS-CoV-2 RBD, thus avoiding perturbing the existing favorable interactions. Finally, Karoyan et al. (2021) have started with a peptide mimicking the α₁-helix of hACE2 and obtained peptide mimics able to block SARS-CoV-2 human pulmonary cell infection, upon binding to the virus spike protein, with an inhibitory concentration (IC₅₀) in the nanomolar range.
Such studies were based on the implicit hypothesis that the ACE2 fragments would adopt a conformation similar to that observed in the structure of ACE2 and that the binding of the peptide alone would result in a pose similar to that observed in the complex structure. Several early docking experiments have supported this hypothesis. Jaiswal and Kumar (2020) have for instance reported that the fragments of the α₁-helix are found to bind at the α₁-helix/RBD interface (Jaiswal and Kumar, 2020). In addition, Baig et al. (2020) have reported that some peptides they designed starting from 23 amino acids of the N-terminal helix, using alanine scanning to identify critical binding positions, can maintain their secondary structure during MD simulations and provide a highly specific and stable binding to SARS-CoV-2 (Baig et al., 2020). However, things might not be so straightforward in terms of structural behavior. Freitas et al. (2021) have also focused on the α₁-helix and explored the binding and folding dynamics of the natural and designed ACE2-based peptides by MD simulations using coarse-grained representations (Freitas et al., 2021). Their results show a difference in the folding mechanisms...
of the modified peptides (a two-state folding mechanism) binding the RBD, as opposed to the naturally occurring α1-helix peptides, suggesting that amino acid substitutions on the alpha-helical sequences can result in subtle changes in dynamic properties compared to the wild-type sequence. Moreover, Kuznetsov et al. (2022) have observed experimentally that a peptide comprising positions 24 to 42 of the ACE2 α1-helix can inhibit the formation of the S1-ACE2 complex in a manner dependent on the peptide concentration. They also observed the formation of a ternary complex, suggesting that the peptide could bind to sites other than that observed in the structure of the complex. The consequences of this observation for design are so far unknown. In summary, although effective, the peptides derived from the α1-helix, validated *in vitro* and *in cellulo* by the different groups, could exert a functional effect through mechanisms that are not necessarily those expected.
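The alanine scanning used by Baig et al. to locate critical binding positions is conceptually simple: each residue is replaced by Ala in turn and the effect on binding is scored, the scoring (by docking or MD) being the expensive step. A minimal sketch of the variant generation, applied to the α1-helix fragment quoted earlier; skipping Ala and Gly positions is a common convention, not necessarily what the cited study did:

```python
def alanine_scan(seq):
    """Yield (position, variant) for each single-Ala substitution.

    Positions already occupied by Ala (or by Gly, often skipped in
    practice) are left out. Positions are 1-based, as in the literature.
    """
    for i, aa in enumerate(seq):
        if aa not in ("A", "G"):
            yield i + 1, seq[:i] + "A" + seq[i + 1:]

# The ACE2 alpha1-helix fragment (residues 24-53) quoted above.
helix = "QAKTFLDKFNHEAEDLFYQSSLASWNYNTN"
variants = dict(alanine_scan(helix))
```

Each variant is then rescored against the RBD; positions whose substitution degrades the score most are the candidate hotspots.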
Attempts to combine different fragments of ACE2 distant in the sequence have also been considered. Barh et al. (2020) have for instance considered the residues of ACE2 interacting with the RBD, searched databases of peptides with known activity against the nCoV RBD, and proposed chimeric peptides combining several candidates. Huang et al. (2020a) have designed peptides by grafting fragments from ACE2. The initial design of a peptide combining two segments of ACE2 (a.a. 22–44 and 351–357) was followed by an iterative redesign to enhance the binding to the RBD, using an in-house effective force field, EvoEF2, to drive the optimization (Huang et al., 2020a). The effectiveness of the designed peptides has however not yet been confirmed experimentally. Chatterjee et al. (2020) have, for their part, extended the strategy of ACE2-derived peptide design to both target the RBD and recruit E3 ubiquitin ligases for subsequent intracellular degradation of SARS-CoV-2 in the proteasome. The design was performed using a protocol relying on the Rosetta program (Rohl et al., 2004), and *in cellulo* tests showed that one peptide is able to reduce the infection rate by ~60%. Jaiswal and Kumar (2020) have extended the peptide up to a two-helix bundle, using a protocol combining docking and MD simulations to identify stabilizing substitutions. The peptides showed predicted $K_D$ values on the order of 1 to a few nM, but experimental confirmation is missing. More recently, however, Zhou et al. (2021) have performed a thorough investigation of a strategy considering the two alpha-helices of hACE2. They concluded that the two helices cannot bind Spike when split from the ACE2 protein, the two peptides showing a propensity for disorder outside of ACE2. They further concluded that stapling could be a relevant way to reduce the entropic cost upon binding of peptides containing one or two alpha-helices of ACE2. Curreli et al.
(2020); Maas et al. (2021) have reported, for lactam-stapled hACE2 peptides, experimental inhibition of the spike protein RBD-hACE2 complex formation, for concentrations on the order of 1–10 μM (Curreli et al., 2020; Maas et al., 2021).
Finally, the *de novo* design of peptides binding the RBD has also been explored. Chaturvedi et al. (2020) performed the *de novo* design of peptides targeting the RBD. Starting from selected ACE2 segments, natural RBD binders, the templates were gradually modified by random mutations, retaining those mutations that improve their RBD-binding free energies. In this adaptive evolution, atomistic molecular dynamics simulations of the template-RBD complexes were iteratively perturbed by the peptide mutations, which were retained under favorable Monte Carlo decisions. The best candidate peptides remain however to be tested experimentally. Cao et al. (2020) have successfully designed alpha-helix bundle miniproteins encompassing the α1-helix, with median inhibitory concentration (IC$_{50}$) values between 24 pM and 35 nM. The most potent exhibits an IC$_{50}$ close to 0.16 ng ml$^{-1}$. The experimentally determined structure of the interaction between the peptides and the RBD is in excellent agreement with the computational models.
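The adaptive evolution described by Chaturvedi et al. couples a costly free-energy estimate with Monte Carlo acceptance of random point mutations. The skeleton of such a loop can be sketched as follows; the scoring function is an arbitrary toy stand-in (real protocols score each candidate by MD or docking), and all names here are illustrative, not from the cited work:

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_score(seq):
    """Stand-in for a binding free-energy estimate (lower is better).
    Arbitrarily rewards aromatic and charged residues for illustration."""
    return -sum(seq.count(a) for a in "FWYKRDE")

def evolve(seq, steps=200, temperature=1.0, seed=0):
    """Metropolis-style adaptive evolution: propose a random point
    mutation, keep it if it improves the score or passes a Boltzmann
    acceptance test, and track the best sequence seen."""
    rng = random.Random(seed)
    cur, cur_e = seq, toy_score(seq)
    best, best_e = cur, cur_e
    for _ in range(steps):
        i = rng.randrange(len(cur))
        cand = cur[:i] + rng.choice(AMINO_ACIDS) + cur[i + 1:]
        cand_e = toy_score(cand)
        if cand_e <= cur_e or rng.random() < math.exp((cur_e - cand_e) / temperature):
            cur, cur_e = cand, cand_e
            if cand_e < best_e:
                best, best_e = cand, cand_e
    return best, best_e

best, energy = evolve("QAKTFLDKFN")
```

The Boltzmann acceptance of occasional score-worsening mutations is what lets the search escape local minima, at the price of many more scoring calls.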
Of note, the interaction between RBD and ACE2 might be more complex than anticipated and involve more partners. Recently, another direction has been proposed by Beddingfield et al. (2021). It does not directly target the RBD-ACE2 interface, but instead the interactions of the spike protein and ACE2 with the α5β1 integrin. Using docking, they identified three candidate binding sites for a candidate peptide active *in vitro* with an IC$_{50}$ value of 3.16 μM. These results however require further exploration.
3.1.2 Preventing Virus Internalization by Preventing the Formation of the Fusion Core
The SARS-CoV-2 spike protein consists of two subunits, S1, which contains the RBD, and S2 (**Figure 4A**). S2 is highly conserved among SARS-like coronaviruses, and the mechanism responsible for virus internalization is expected to be common to those viruses. After interacting with ACE2, the spike protein is cleaved into S1 and S2, and S2 undergoes a conformational rearrangement mediating viral fusion and cell entry. S2 is composed of a fusion peptide (FP), two heptad repeats (HR1 and HR2), a transmembrane domain (TM), and a cytoplasmic domain (CP). After cleavage, the FP is inserted into the target cell membrane, which results in HR1 and HR2 forming a 6-helix bundle (**Figure 4B**). The formation of this bundle brings the cellular and viral lipid bilayers into proximity, which initiates the membrane fusion process (Belouzard et al., 2012). Preventing the formation of the helix bundle has been described as a possible strategy to block membrane fusion and prevent the entry of the virus into cells. Ling et al. (2020) have explored the design of peptides mimicking HR2 and binding HR1, to block the fusion process. After modeling the structure of the 6-helix bundle by homology with other viruses of the family, they investigated the binding energy of HR1 to HR2 and conversely, and concluded that HR2-derived peptides are probably more efficient than HR1-derived peptides at preventing the formation of the 6-helix bundle and viral infection. Efaz et al. (2021) analyzed 17 SARS-CoV-1 HR2-derived fusion inhibitor peptides known to show effective antiviral activity against the HR1 of SARS-CoV-1 and SARS-CoV-2. Using MD simulations and monitoring the free energy landscape of their binding with HR1, they identified the two best candidates. Experimental validation is however not provided at this time.
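Heptad repeats such as HR1 and HR2 follow an (abcdefg)ₙ register in which the a and d positions are predominantly hydrophobic, and it is this pattern that drives helix-bundle packing. A toy check of this register, on a synthetic idealized repeat (this is an illustration of the notion, not a real coiled-coil predictor of the kind used in the cited studies):

```python
HYDROPHOBIC = set("AVLIMFWY")

def heptad_fraction(seq, offset=0):
    """Fraction of the 'a' and 'd' positions of the heptad register
    (abcdefg) occupied by hydrophobic residues, for a given frame."""
    hits = total = 0
    for i, aa in enumerate(seq):
        if (i + offset) % 7 in (0, 3):  # positions 'a' and 'd'
            total += 1
            hits += aa in HYDROPHOBIC
    return hits / total if total else 0.0

def best_register(seq):
    """Try all seven frames and return (offset, fraction) of the best one."""
    return max(((off, heptad_fraction(seq, off)) for off in range(7)),
               key=lambda t: t[1])

# An idealized coiled-coil-like repeat: Leu at 'a' and 'd', Glu elsewhere.
offset, fraction = best_register("LEELLEE" * 4)
```

On real HR1/HR2 sequences this simple fraction is noisy; dedicated tools additionally score charge pairing at the e and g positions.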
3.1.3 Towards Targeting Intracellular Interactions
Targeting cell penetration of the virus is not the only strategy that could prove effective. Once the virus has entered the cell, other protein-protein interactions have been considered to be of potential interest.
One of these is the interaction between nsp10 and nsp16 (Figure 5). The virus replicates in the cytoplasm and cannot access the capping machinery of the host, located in the nucleus. To compensate, the virus encodes its own capping enzymes, several nsps such as nsp14 and nsp16 being involved in viral RNA capping. Nsp16 has a binding pocket for S-adenosyl-L-methionine (SAM), which acts as a methyl group donor for the 2'-O-methylation reaction. This pocket is stabilized by the interaction with another nsp, nsp10, and consequently, the inhibition of the nsp16/nsp10 interaction is a possible strategy to prevent virus replication. Dutta and Iype (2021) have analyzed the binding interface of the nsp10/nsp16 complex [PDB id: 6W4H (Rosas-Lemus et al., 2020)] to identify peptides of nsp16 blocking the interaction between nsp10 and nsp16. Combining docking with MD simulations, they prospectively analyzed the binding of several candidates and concluded that two of them (peptides two and five) were stable and able to bind to the nsp16-interacting region of nsp10, thus potentially preventing the interaction between the two proteins. Again, experimental confirmation is still required.
Another interaction is that of the PDZ-binding motif of the envelope protein (E-protein) of SARS-CoV-2 with PALS1. The presence of PDZ-binding motifs (PBMs) that bind specific cellular PDZ domain proteins is frequent in viruses, leading to pathogenic dysfunctions of these proteins. The E-protein of SARS-CoV-2 has such a PBM, known to interact with PALS1. Although PBM/PDZ interactions are usually weak, Toto et al. (2020) have proposed to design peptides mimicking the SARS-CoV-2 E-protein targeting the PDZ domain of PALS1. PALS1 participates in the maintenance of epithelial polarity, and it has been suggested that the E-protein/PALS1 interaction is involved in the degradation of the integrity of the lung epithelium, resulting in dramatically increased viral dissemination. Analyzing the structure of the E-protein, they identified peptide mimics of the E-protein, assessed their affinity for PALS1 relative to the equivalent peptides of SARS-CoV, and concluded that the E-protein of SARS-CoV-2 has an increased affinity for PALS1. However, no successful inhibitors have been obtained to date.
3.2 Searching for Natural Peptides Active Against Severe Acute Respiratory Syndrome Coronavirus 2 Infection
Setting structural information aside, a direction that has repeatedly proven effective is the search among natural peptides known to have biological activities. The urgency to respond to the SARS-CoV-2 pandemic has unsurprisingly stimulated the search for such candidates, although in some cases no clear rationale underlying the search existed, leading
to mostly conceptual studies. This is for instance the case for the search for active peptides in colostrum and milk. Çakır et al. (2021) have considered peptides from the goat milk whey fraction obtained by enzymatic digestion and assessed their potential by combining *in silico* data-based prediction and docking against the ACE2 and DPP-4 enzymes. Pradeep et al. (2021) have reported a study along the same lines, starting from peptides previously identified from buffalo colostrum and milk, and targeting entry points such as ACE2, Spike, TMPRSS, Cathepsin-L, or Furin, endosomal maturation components such as AAK1, GAK, PIKfyve, or TPC2, the replication-transcription complex (PLpro, 3CLpro, nsp12, nsp13), and virion assembly (N protein), combining docking with MD simulations to assess the stability of the binding of the peptide with the target. Although both studies identified some candidates of interest, these remain to be further assessed *in vitro* and *in vivo*. Yu et al. (2021) have prospectively analyzed the potential of peptides resulting from the *in silico* digestion of tuna myosin to block ACE2.
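The *in silico* digestions used in such studies apply protease cleavage rules to the parent protein sequence. As an illustration, the classical trypsin rule (cleave after Lys or Arg, except when the next residue is Pro) can be written as a single zero-width regex split; the substrate sequence below is an arbitrary example, and the cited studies used various enzymes, not necessarily trypsin:

```python
import re

def tryptic_peptides(protein):
    """In silico trypsin digest: cut after K or R unless the next residue is P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

peptides = tryptic_peptides("MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFK")
```

The resulting peptide list is then filtered (length, predicted bioactivity) before docking against the chosen targets.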
Several studies have considered targeting side effects of the virus infection, instead of addressing it directly. For instance, the antioxidant and anti-inflammatory effects of ghrelin have been considered as a way to reduce the complications of SARS-CoV-2 infection (Jafari et al., 2021). Likewise, since cardiovascular diseases are strong negative prognostic factors that exacerbate the effects of the viral infection and lead to worse outcomes, it has been suggested that natriuretic peptides (NPs) could exert a key protective role against the virus infection, whereas an impairment of NP release contributes to the virus's deleterious effects (Rubattu et al., 2021).
Anti-microbial peptides (AMPs) are another class of peptides of potential interest. Indeed, among the close to 3,200 AMPs discovered, close to 200 have also been reported to have antiviral activities (Wang, 2020; Mousavi Maleki et al., 2021). However, these peptides seem to have varied mechanisms of action in varied contexts. The interaction of nisin, a food-grade antimicrobial peptide produced by lactic acid bacteria, with ACE2 has been assessed using modeling and docking. The results suggest that nisin could act as a competitor of the RBD for binding ACE2 (Bhattacharya et al., 2021).
Finally, suppressing the activity of the PLpro enzyme using potential plant-derived protease inhibitor peptides has also been considered. Moradi et al. (2022) have tested 11 plant-derived peptides selected from the literature that could potentially inhibit protease activity. Docking experiments suggest that VcTI from *Veronica hederifolia* provides effective molecular interactions at both the labile Zn site and the classic active site of PLpro. These results remain to be confirmed experimentally.
Overall, however, the exploration of natural peptides has so far yielded few promising leads, if any.
4 Discussion
In the context of the SARS-CoV-2 pandemic, the urgent need to identify means to fight the virus has prompted an unprecedented effort, based on a wide panel of strategies. These encompass drug and vaccine development, and drug repurposing. Among these, peptide-based development was a possible direction to consider. Here, we have reviewed how *in silico* protocols have contributed to such structure-based development. Obviously, peptide drug development does not require, *per se*, *in silico* approaches. It could proceed, for instance, by developing and screening peptide libraries experimentally. For instance, Rathod et al. (2020) have searched for peptides in the AntiViral Peptide Database (AVPdb), with a repurposing perspective. Likewise, cyclic peptides targeting the RBD have been designed experimentally using mRNA display (Norman et al., 2021), and both linear and cyclic peptides targeting the M\textsuperscript{pro} protease have been identified using *in vitro* screening (Pisarchik, 2021). As summarized in Table 1, it is however striking that the vast majority of studies have consisted of structure-based *in silico* design.
Indeed, the knowledge of the structure of the RBD in interaction with ACE2 has proven extremely valuable for the design of candidate peptide drugs. *In silico* protocols, and particularly MD simulations, have made it possible to analyze in detail the interaction between the RBD and ACE2 and to identify the key residues of the interaction. Their use has in turn led to the development of peptides validated *in vitro* as binders of the RBD, whether by relying on researchers' expertise to focus substitutions on sites not essential for the interaction (Karoyan et al., 2021), or by using stapling to optimize the binding (Curreli et al., 2020; Maas et al., 2021). More sophisticated studies have combined docking and MD simulations to explore the stability of the binding of evolved peptides to the RBD. For docking, peptides traditionally pose specific problems compared to small compounds due to their larger flexibility. To address this issue, it is striking that various, mostly flexible, docking approaches have been considered, often complemented by MD/refinement protocols to sample the conformational flexibility of the poses (Table 1). The helical conformation of the ACE2 fragment probably helps make docking easier here. Varied MD protocols have also been employed, using explicit or implicit solvent models, and sometimes sophisticated protocols such as umbrella sampling (Ling et al., 2020) or WHAM (Freitas et al., 2021). Finally, it is noticeable that the use of the Rosetta software (Leaver-Fay et al., 2011) has led to the effective design of several peptides binding the RBD with very low IC\textsubscript{50} values, on the order of a few tens of pM (Cao et al., 2020), which is remarkable. The size of the peptides matters, however: longer peptides tend to have lower IC\textsubscript{50} values. Larue et al.
(2020) reported that a 6 amino acid ACE2 fragment with an IC\textsubscript{50} on the order of mM is able to reduce cell infection by approximately 70%, while Chatterjee et al. (2020) reported 23 amino acid peptides able to reduce the infection rate by 60%, Kuznetsov et al. (2022) reported that a 19 amino acid peptide has an IC\textsubscript{50} close to the μM range, and Karoyan et al. (2021) described a 27 amino acid peptide with an IC\textsubscript{50} on the order of nM. The miniproteins designed by Cao et al. (2020) are much longer, with over 55 amino
acids for the best ones (Cao et al., 2020). Finally, strategies to stabilize the peptides using stapling yield peptides with IC$_{50}$ values on the order of a few $\mu$M.
The identification of peptides able to compete with the RBD/ACE2 interaction, and even to reduce viral infection in cellular assays, does not mean however that peptide drugs are close to reaching the market. The more the sequences diverge from the natural sequence, and the longer they are, the more likely they are to become associated with adverse effects, particularly in terms of immune response. Moreover, the longer they are, the more costly they become, which could be an obstacle to their development. To a lesser extent, studies are now going beyond the RBD-ACE2 interaction to tackle other interactions that occur inside the cells. These studies also benefit from the available 3D information, although not all the structures of the interacting proteins are known. These developments have started later and will face harder challenges for cell internalization.
As for synthetic vaccine development, many studies have proposed candidate immunogenic peptides that, for the vast majority, lack experimental validation. It is nevertheless striking that *in silico* approaches now address a wide range of considerations, including MHC I and MHC II presentation as well as CD4 or CD8 immune responses. Nevertheless, the use of peptides as a vaccine for SARS-CoV-2 treatment seems promising. Thus, CoVac-1, a combination of viral peptides, is in phase II clinical trials. Of note, EpiVacCorona, a peptide-based vaccine, has been approved as a medicine in Russia since December 2020. Because questions arose about how its peptides were selected and what its real immunogenicity is, EpiVacCorona is used almost exclusively in Russia.
Another point to consider is the emergence of variants. Drugs and vaccines developed for wild-type SARS-CoV-2 may become less effective. For example, all the variants of concern carry mutations in the spike protein that enable partial escape from the immune response and/or increase the interaction with ACE2. The efficacy of drugs that target the spike protein could therefore be drastically affected by the mutations. In this case, the use of therapeutic peptides is particularly relevant to face these issues, since the peptides of interest can be adapted to a given variant through appropriate changes in the peptide sequence.
To summarize, the SARS-CoV-2 pandemic has highlighted the reactivity of the actors of drug development. Vaccine development has proven very effective and fast, whereas synthetic peptide vaccines are still under development. For chemical drug development, drug repurposing has so far been the most effective strategy. Peptide drug development assisted by *in silico* analyses has proven very reactive, able to identify promising candidates within a few months, and it keeps progressing on new targets (Chan et al., 2021). The same responsiveness has been observed in the search for small compounds, although through much higher investment. So far, none has been able to propose sufficiently convincing drugs to reach the market. It will be interesting to reconsider, hopefully in a few months, the lessons from drug development against SARS-CoV-2 in terms of drug development strategy.
**AUTHOR CONTRIBUTIONS**
GM and PT contributed to conception of this review. GM and PT wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. GM and PT contributed equally to this work.
**FUNDING**
This work was supported, in part, by IdEX Université Paris Cité ANR-18-IDEX-0001 (HS-PS project, No. IdEx-2021-I-053).
**ACKNOWLEDGMENTS**
The authors gratefully acknowledge the financial support of the Université Paris Cité, the CNRS institute, and the INSERM institute.
**REFERENCES**
Alam, A., Khan, A., Imam, N., Siddiqui, M. F., Waseem, M., Malik, M. Z., et al. (2021). Design of an Epitope-Based Peptide Vaccine against the SARS-CoV-2: a Vaccine-Informatics Approach. *Briefings Bioinforma.* 22 (2), 1309–1323. doi:10.1093/bib/bbaa340
Baig, M. S., Rajpoot, S., Saqib, U., and Saqib, U. (2020). Identification of a Potential Peptide Inhibitor of SARS-CoV-2 Targeting its Entry into the Host Cells. *Drugs R. D.* 20 (3), 161–169. doi:10.1007/s40268-020-00312-5
Barh, D., Tiwari, S., Silva Andrade, B., Giovannetti, M., Almeida Costa, E., Kumavath, R., et al. (2020). Potential Chimeric Peptides to Block the SARS-CoV-2 Spike Receptor-Binding Domain. *F1000Res* 9, 576. doi:10.12688/f1000research.24074.1
Beddingfield, B. J., Iwanaga, N., Chapagain, P. P., Zheng, W., Roy, C. J., Hu, T. Y., et al. (2021). The Integrin Binding Peptide, ATN-161, as a Novel Therapy for SARS-CoV-2 Infection. *JACC Basic Transl. Sci.* 6 (1), 1–8. doi:10.1016/j.jacbts.2020.10.003
Belouzard, S., Millet, J. K., Licitra, B. N., and Whittaker, G. R. (2012). Mechanisms of Coronavirus Cell Entry Mediated by the Viral Spike Protein. *Viruses* 4 (6), 1011–1033. doi:10.3390/v4061011
Bhattacharya, R., Gupta, A. M., Mitra, S., Mandal, S., and Biswas, S. R. (2021). A Natural Food Preservative Peptide Nisin Can Interact with the SARS-CoV-2 Spike Protein Receptor Human ACE2. *Virology* 552, 107–111. doi:10.1016/j.virol.2020.10.002
Bruzzoni-Giovanelli, H., Alezra, V., Wolff, N., Dong, C.-Z., Tuffery, P., and Rebollo, A. (2018). Interfering Peptides Targeting Protein-Protein Interactions: the Next Generation of Drugs? *Drug Discov. Today* 23 (2), 272–285. doi:10.1016/j.drudis.2017.10.016
Çakır, B., Okuyan, B., Şener, G., and Tunali-Akbay, T. (2021). Investigation of Beta-Lactoglobulin Derived Bioactive Peptides against SARS-CoV-2 (COVID-19): In Silico Analysis. *Eur. J. Pharmacol.* 891, 173781. doi:10.1016/j.ejphar.2020.173781
Cao, L., Goreshnik, I., Coventry, B., Case, J. B., Miller, L., Kozodoy, L., et al. (2020). De Novo Design of Picomolar SARS-CoV-2 Miniprotein Inhibitors. *Science* 370 (6515), 426–431. doi:10.1126/science.abd9909
Chan, H. H., Moesser, M. A., Walters, R. K., Malla, T. R., Twidale, R. M., John, T., et al. (2021). Discovery of SARS-CoV-2 M Pro Peptide Inhibitors From
Starr, T. N., Greaney, A. J., Dingens, A. S., and Bloom, J. D. (2021). Complete Map of SARS-CoV-2 RBD Mutations that Escape the Monoclonal Antibody LY-CoV555 and its Cocktail with LY-CoV016. *Cell Rep. Med.* 2 (4), 100255. doi:10.1016/j.xcrm.2021.100255
Tegally, H., Wilkinson, E., Giovanetti, M., Iranzadeh, A., Fonseca, V., Giandhari, J., et al. (2021). Detection of a SARS-CoV-2 Variant of Concern in South Africa. *Nature* 592 (7854), 438–443. doi:10.1038/s41586-021-03402-9
Toto, A., Ma, S., Malagrinò, F., Visconti, L., Pagano, L., Stromgaard, K., et al. (2020). Comparing the Binding Properties of Peptides Mimicking the Envelope Protein of SARS-CoV and SARS-CoV-2 to the PDZ Domain of the Tight Junction-associated PALS1 Protein. *Protein Sci.* 29 (10), 2038–2042. doi:10.1002/pro.3936
Vlieghe, P., Lisovskiy, V., Martinez, J., and Khrestchatisky, M. (2010). Synthetic Therapeutic Peptides: Science and Market. *Drug Discov. Today* 15 (1–2), 40–56. doi:10.1016/j.drudis.2009.10.009
Wang, G. (2020). The Antimicrobial Peptide Database Provides a Platform for Decoding the Design Principles of Naturally Occurring Antimicrobial Peptides. *Protein Sci.* 29 (1), 8–18. doi:10.1002/pro.3702
Wang, X., Lan, J., Ge, J., Yu, J., and Shan, S. (2020). Crystal Structure of SARS-CoV-2 Spike Receptor-Binding Domain Bound with ACE2 Receptor. *Nature* 581 (7807), 215–220.
Wójcik, P., and Berlicki, Ł. (2016). Peptide-based Inhibitors of Protein–Protein Interactions. *Bioorg. Med. Chem. Lett.* 26 (3), 707–713.
Xie, X., Liu, Y., Liu, J., Zhang, X., Zou, J., Fontes-Garfias, C. R., et al. (2021). Neutralization of SARS-CoV-2 Spike 69/70 Deletion, E484K and N501Y Variants by BNT162b2 Vaccine-Elicited Sera. *Nat. Med.* 27 (4), 620–621. doi:10.1038/s41591-021-01270-4
Xu, J., Khan, A. R., Fu, M., Wang, R., Ji, J., and Zhai, G. (2019). Cell-penetrating Peptide: a Means of Breaking through the Physiological Barriers of Different Tissues and Organs. *J. Control. Release* 309, 106–124. doi:10.1016/j.jconrel.2019.07.020
Yan, R., Zhang, Y., Li, Y., Xia, L., Guo, Y., and Zhou, Q. (2020). Structural Basis for the Recognition of SARS-CoV-2 by Full-Length Human ACE2. *Science* 367 (6485), 1444–1448. doi:10.1126/science.abb2762
Yu, Z., Kan, R., Ji, H., Wu, S., Zhao, W., Shuiun, D., et al. (2021). Identification of Tuna Protein-Derived Peptides as Potent SARS-CoV-2 Inhibitors via Molecular Docking and Molecular Dynamic Simulation. *Food Chem.* 342, 128366. doi:10.1016/j.foodchem.2020.128366
Zhang, H., and Chen, S. (2022). Cyclic Peptide Drugs Approved in the Last Two Decades (2001–2021). *RSC Chem. Biol.* 3 (1), 18–31. doi:10.1039/d1cb00154j
Zhou, P., Wang, H., Chen, Z., and Liu, Q. (2021). Context Contribution to the Intermolecular Recognition of Human ACE2-Derived Peptides by SARS-CoV-2 Spike Protein: Implications for Improving the Peptide Affinity but Not Altering the Peptide Specificity by Optimizing Indirect Readout. *Mol. Omics* 17 (1), 86–94. doi:10.1039/d0mo00103a
Zhou, T., Tsybovsky, Y., Gorman, J., Rapp, M., Cerutti, G., Chuang, G.-Y., et al. (2020). Cryo-EM Structures of SARS-CoV-2 Spike without and with ACE2 Reveal a pH-dependent Switch to Mediate Endosomal Positioning of Receptor-Binding Domains. *Cell host microbe* 28 (6), 867–879. doi:10.1016/j.chom.2020.11.004
**Conflict of Interest:** The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
**Publisher’s Note:** All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
*Copyright © 2022 Moroy and Tuffery. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.* |
TET3 inhibits TGF-β1-induced epithelial-mesenchymal transition by demethylating miR-30d precursor gene in ovarian cancer cells
Zhongxue Ye¹,², Jie Li¹,², Xi Han¹,², Huilian Hou³, He Chen¹,², Xia Zheng¹,², Jiaojiao Lu¹,², Lijie Wang¹,², Wei Chen⁴, Xu Li¹,²* and Le Zhao¹,²*
Abstract
Background: Abnormal DNA methylation/demethylation is recognized as a hallmark of cancer. TET (ten-eleven translocation) family members are novel DNA demethylation-related proteins that are dysregulated in multiple malignancies. However, their effects on ovarian cancer remain to be elucidated.
Methods: The changes of TET family members during TGF-β1-induced epithelial-mesenchymal transition (EMT) in SKOV3 and 3AO ovarian cancer cells were detected. TET3 was ectopically expressed in TGF-β1-treated ovarian cancer cells to examine its effect on the TGF-β1-induced EMT phenotype. The downstream target of TET3 was further identified. Finally, the relationships of TET3 expression to clinicopathological parameters of ovarian cancer were investigated with a tissue microarray using immunohistochemistry.
Results: TET3 was downregulated during TGF-β1-initiated epithelial-mesenchymal transition (EMT) in SKOV3 and 3AO ovarian cancer cells. Overexpression of TET3 reversed TGF-β1-induced EMT phenotypes, including the expression pattern of molecular markers (E-cadherin, Vimentin, N-cadherin, Snail) and the migratory and invasive capabilities of ovarian cancer cells. miR-30d was identified as a downstream target of TET3, and TET3 overexpression restored the demethylated status of the promoter region of the miR-30d precursor gene, resulting in restoration of the miR-30d level (miR-30d being an EMT suppressor in ovarian cancer cells, as proven in our previous study) in TGF-β1-induced EMT. We further found that TET3 expression was decreased in ovarian cancer tissues, especially in serous ovarian cancers. The overall positivity of TET3 was inversely correlated with the differentiation grade of ovarian cancer.
Conclusion: Our results revealed that TET3 acted as a suppressor of ovarian cancer by demethylating miR-30d precursor gene promoter to block TGF-β1-induced EMT.
Keywords: Ovarian cancer, Methylation, Epithelial-mesenchymal transition, TET3, TGF-β1, miR-30d
Background
Ovarian cancer is the most lethal gynecological tumor and ranks fifth among causes of cancer death in women. An estimated 21,290 new ovarian cancer cases and 14,180 deaths occurred in the United States in 2015 [1]. The poor prognosis of ovarian cancer patients is mainly attributed to cancer metastasis and recurrence. Epithelial-mesenchymal transition (EMT) is a dynamic process that mediates ovarian cancer metastasis. Exploration of the signaling pathways involved in the EMT process will shed light on the molecular mechanisms of metastasis.
EMT refers to the transformation of epithelial cells into fibroblast-like cells in physiological and pathological processes, characterized by loss of epithelial markers, acquisition of mesenchymal molecules and enhancement of cell mobility [2]. Various cytokines and growth factors, including transforming growth factor β (TGF-β),
are key agents for EMT initiation and maintenance. Three isoforms of TGF-β have been identified, and TGF-β1 is the most classical and frequently used EMT inducer [3, 4].
Increasing evidence shows that aberrations in DNA methylation status are associated with tumor progression and patient prognosis [5]. DNA methyltransferases (DNMTs) are the major molecules controlling DNA methylation [6, 7]. More recently, ten-eleven translocation (TET) family members (TET1-3), which can modify 5-methylcytosine (5-mC) by oxidation to 5-hydroxymethylcytosine (5-hmC) and further to 5-formylcytosine (5-fC) and 5-carboxylcytosine (5-caC), have been identified, expanding the understanding of DNA demethylation mechanisms [8–10]. TETs are dysregulated in multiple malignancies, including breast cancer [11], hepatocellular carcinoma [11], melanoma [12] and glioma [13]. For example, a decreased TET1 mRNA level is correlated with poor survival of breast cancer patients [14], as is a decreased TET2 level in colorectal cancer [15].
Aberrant DNA methylation/demethylation is implicated in TGF-β1-induced EMT [16–18]. TGF-β1 triggers *TIP30* (the gene coding HIV-1 Tat interactive protein 2) hypermethylation by upregulating DNMT1 and DNMT3A to induce EMT and metastasis in esophageal carcinoma [19]. However, few studies have examined the role of TETs in TGF-β1-induced EMT. Here we report the epigenetic regulation of miR-30d by TET3 in TGF-β1-induced EMT in ovarian cancer cells, highlighting the potential of TET3 as a prognostic biomarker or a therapeutic target for ovarian cancer.
**Methods**
**Cell culture and TGF-β1 treatment**
The human ovarian cancer cell line SKOV3 was obtained from the Shanghai Cell Bank of the Chinese Academy of Sciences (Shanghai, China), and 3AO was from the Shandong Academy of Medical Sciences (Jinan, China). Cells were incubated in RPMI 1640 (GIBCO, Grand Island, NY, USA) supplemented with 10 % newborn bovine serum (GIBCO, Grand Island, NY, USA) at 37 °C in 5 % CO₂. When treated with 10 ng/ml TGF-β1 (PeproTech, Rocky Hill, USA), cells were maintained in media containing 1 % newborn bovine serum for the indicated times before harvest.
**Quantitative real-time PCR (qRT-PCR)**
Total RNA was extracted from cells using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. The concentration and quality of total RNA were assessed by absorbance at 260 nm and the 260/280 ratio, respectively, on a UV spectrophotometer (Bio-Rad Inc., Hercules, CA, USA). For mRNA detection, first-strand cDNA was synthesized using a PrimeScript™ RT reagent Kit with gDNA Eraser (Takara, Dalian, China). Quantitative real-time PCR was performed using a SYBR Premix Ex Taq™ II kit (Takara, Dalian, China) on a CFX96 real-time PCR system (Bio-Rad, Hercules, CA, USA). TET1, TET2 and TET3 were normalized to β-actin, while miR-30s were normalized to small nuclear U6. Relative gene expression was calculated using the $2^{-\Delta \Delta Ct}$ method. PCR primers for TET1, TET2, TET3 and β-actin were synthesized by the Beijing Genomics Institute (Beijing, China), and the primer sequences are listed in Table 1. Primers for miR-30a, 30b, 30c, 30d, 30e, and U6 reverse transcription and amplification were designed and synthesized by RiboBio Co., Ltd. (Guangzhou, China).
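The $2^{-\Delta \Delta Ct}$ relative quantification above can be illustrated with a short sketch; the Ct values below are hypothetical, not data from this study:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference), computed per sample;
    ddCt = dCt(treated) - dCt(control);
    fold change = 2 ** -ddCt.
    """
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical Ct values: TET3 normalized to beta-actin,
# TGF-b1-treated vs. untreated control cells
fold = relative_expression(27.0, 18.0, 25.0, 18.0)
print(fold)  # 0.25, i.e. a 4-fold downregulation
```

Note that because amplification is roughly exponential with base 2, a ΔΔCt of +2 corresponds to a 4-fold decrease in relative expression.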
**Western blot**
Total protein was collected from cells on ice in RIPA lysis buffer containing protease inhibitors (Roche, Indianapolis, IN, USA) and 1 mM PMSF. Protein concentration was measured using the BCA-200 Protein Assay kit (Pierce, Rockford, IL, USA). After heat denaturation at 100 °C for 5 min, proteins were separated by electrophoresis on 10 % SDS–PAGE gels and then transferred onto nitrocellulose membranes (Pall Life Science, Port Washington, NY, USA). The membranes were blocked with 5 % non-fat milk at room temperature for 1 h and then incubated overnight at 4 °C with rabbit anti-human TET3 (Abcam, 1:1000), E-cadherin (Cell Signaling Technology (CST), 1:1000), Vimentin (CST, 1:500), N-cadherin (CST, 1:1000), Snail (CST, 1:300) and mouse anti-human β-actin (CST, 1:1000) antibodies. After washing with TBST, the blots were incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit or anti-mouse IgG. Blots were visualized using ECL reagents (Pierce, Rockford, IL, USA) on a chemiluminescence imaging system (Bio-Rad, Richmond, CA, USA). The results were quantified using ImageJ software.
**Plasmid transient transfection**
The human TET3 expression vector FH-TET3-pEF was obtained from Addgene. SKOV3 and 3AO cells were
**Table 1** Primer sequences for quantitative real-time PCR

| Genes | Primer sequences (5'-3') | Length of PCR product (bp) |
|-------|--------------------------|----------------------------|
| TET1 | F: CCCGAATCAAGCGGAAGAATA | 101 |
| | R: TACTTCAGTTTGCAACGGT | |
| TET2 | F: TCTTCTCCCTGGAGAACAGCTC| 146 |
| | R: TGCCTGGGACTGCTGATGACT | |
| TET3 | F: GTTCTGTGAGCATGTACTTC | 93 |
| | R: CTTCCTCTTTGGGATTGTGCC | |
| β-actin | F: TCCCTGGAGAAGAGCTACGA | 194 |
| | R: AGCACTGTGTTTGGCGTACAG | |
grown in 6-well plates to 70%–80% confluence and transiently transfected with FH-TET3-pEF or the empty vector using X-tremeGENE HP DNA Transfection Reagent (Roche, Indianapolis, IN, USA).
**miR transient transfection**
The miR-30d mimic and negative control were purchased from RiboBio Co., Ltd. (Guangzhou, China). SKOV3 and 3AO cells were seeded into 6-well plates to reach 40%–50% confluence after 24 h and then transiently transfected with 100 nM miR-30d mimic or negative control using X-tremeGENE siRNA Transfection Reagent (Roche, Indianapolis, IN, USA). After 24 h of transfection, the cells were treated with 10 ng/ml TGF-β1 for another 48 h.
**Cell migration and invasion assay**
After transient transfection with FH-TET3-pEF or the empty vector and treatment with TGF-β1 for 48 h, cells were trypsinized and counted. A total of $1 \times 10^5$ cells (migration assay) or $4 \times 10^5$ cells (invasion assay) in 100 μl serum-free medium were added into Millicell chambers (Millipore Co., Bedford, MA, USA), either uncoated (migration assay) or coated with Matrigel (Becton Dickinson Labware, Bedford, MA, USA) (invasion assay). 500 μl of medium containing 20% newborn bovine serum was added to the bottom chambers as the chemoattractant. After incubation for 24 h (migration assay) or 48 h (invasion assay) at 37 °C in 5% CO₂, cells remaining on the upper surface of the filter were removed using cotton swabs. The migrated (or invaded) cells were then fixed with methanol and stained with 0.1% crystal violet. Migratory (or invasive) cells were counted and averaged from images of five random fields (original magnification × 200) captured using an inverted light microscope. The mean values of three duplicate assays were used for statistical analysis.
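The quantification step above reduces to averaging counts over five fields per membrane and then over three replicate assays. A minimal sketch with hypothetical counts:

```python
from statistics import mean

# Hypothetical migrated-cell counts for five random fields per membrane,
# one inner list per replicate assay (three replicates)
replicates = [
    [112, 98, 105, 120, 101],
    [108, 95, 110, 117, 99],
    [115, 102, 99, 121, 104],
]

# Mean over the five fields for each replicate, then the grand mean
# across replicates that enters the statistical comparison
per_replicate = [mean(fields) for fields in replicates]
grand_mean = mean(per_replicate)
print(per_replicate, round(grand_mean, 1))
```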
**DNA bisulfite modification and methylation-specific PCR (MSP)**
Cells treated with 10 ng/ml TGF-β1 for 48 h in 24-well plates were resuspended in cold PBS at ~6 × 10⁶ cells/ml. DNA bisulfite modification and purification were performed using an EZ DNA Methylation-Direct kit (Zymo
---
**Fig. 1** TET3 was downregulated in TGF-β1-treated ovarian cancer cells. SKOV3 and 3AO cells were maintained in 1640 medium containing 1% newborn bovine serum with or without 10 ng/ml TGF-β1 for 48 h. **a** Quantitative real-time PCR showed that TET3 mRNA was significantly decreased in cells treated with TGF-β1, and TET1 was also downregulated in TGF-β1-treated SKOV3 cells. **b** Western blot results and (**c**) the quantitative analysis revealed that TET3 protein was decreased in cells stimulated with TGF-β1. All experiments were carried out in triplicate and the results are presented as means ± SD. *P < 0.05, **P < 0.01, t-test
Research Corporation, Irvine, California, USA) according to the manufacturer's instructions. The DNA concentration was evaluated by absorbance at 260 nm on a UV spectrophotometer (Bio-Rad Inc., Hercules, CA, USA). The primer set for the miR-30d gene flanked the 3 kb region upstream of the start of the pre-miR-30d sequence. The primers for methylation-specific PCR were designed with MethPrimer, and the sequences were as follows: methylated (M)-forward (F): 5'-TTGAGATAGGGTTTTATTTTGTCGT-3'; methylated (M)-reverse (R): 5'-TAATAACATACGATCCCAACTATTTCG-3'; unmethylated (U)-forward (F): 5'-TGAGATAGGGTTTTATTTTGTGTGT-3'; unmethylated (U)-reverse (R): 5'-ATACATACAATCCCAACTATTCCAAA-3'. DNA amplification was performed with Epi Taq HS (Takara Biotechnology Co. Ltd., Dalian, China) under the following conditions: 94 °C for 5 min; 30 cycles of 94 °C for 30 s, 50 °C for 30 s, and 72 °C for 30 s; and 72 °C for 10 min. The PCR products were separated by 2.0 % agarose gel electrophoresis and visualized on a chemiluminescence imaging system (Bio-Rad, Richmond, CA, USA).
**Immunohistochemistry**
A human ovarian cancer tissue microarray was purchased from Shanghai SuperChip Biotech Co. Ltd. (Shanghai, China), and the rabbit antibody to TET3 used for immunohistochemistry was purchased from GeneTex (Irvine, CA, USA). The tissue array was dewaxed in
---
**Fig. 2** Overexpression of TET3 reversed TGF-β1-induced EMT in ovarian cancer cells. Cells were exposed to negative control, negative control plus TGF-β1, or transient FH-TET3-pEF transfection plus TGF-β1. **a** Quantitative real-time PCR showed that transfection of FH-TET3-pEF rescued the TET3 mRNA level in TGF-β1-treated cells. **b** Western blot results and **c** the quantitative analysis indicated that transfection of FH-TET3-pEF rescued the TET3 protein level in TGF-β1-treated cells. Meanwhile, the E-cadherin decrease and the N-cadherin, Vimentin and Snail increases caused by TGF-β1 were reversed by TET3 overexpression. All experiments were carried out in triplicate and the values are shown as means ± SD. *P < 0.05, **P < 0.01, t-test
xylene and rehydrated in a descending alcohol series. Antigen retrieval was performed by heating the tissue section in 0.01 M citrate buffer (pH 6.0) in a steamer for 90 s. Antigen detection was carried out by incubation with the anti-TET3 antibody (1:250) for 2 h at room temperature, followed by incubation with an HRP-labeled secondary antibody at room temperature for 30 min. Signal was generated by incubation with DAB. Slides were counterstained with hematoxylin, dehydrated in an ascending alcohol series, and mounted for analysis. Digital images were acquired using a slide scanner (Leica MP SCN400, Germany). Membrane, cytoplasmic or nuclear staining was considered positive for TET3. For statistical analysis, the extent (the percentage of positive cells) and the intensity of staining were scored by two pathologists. Intensity was semiquantitatively scored as weak (1 point), moderate (2 points), or strong (3 points). For an individual case, the immunohistochemical composite score was calculated as the extent multiplied by the intensity score.
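The composite scoring rule above (extent of positive cells multiplied by a 1-3 intensity score) can be sketched as follows; the example values are hypothetical:

```python
def composite_score(extent_fraction, intensity_points):
    """Immunohistochemical composite score: the fraction of positive
    cells (0-1) multiplied by the staining intensity score
    (1 = weak, 2 = moderate, 3 = strong)."""
    if not 0.0 <= extent_fraction <= 1.0:
        raise ValueError("extent must lie in [0, 1]")
    if intensity_points not in (1, 2, 3):
        raise ValueError("intensity must be 1, 2 or 3")
    return extent_fraction * intensity_points

# Hypothetical case: 40% positive cells with moderate staining
print(composite_score(0.40, 2))  # 0.8
```

Under this rule the score ranges from 0 (no staining) to 3 (all cells strongly stained), matching the per-case values reported in Table 2.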
**Statistical analysis**
Graphical presentations were produced using GraphPad Prism 5.0. Data are presented as means ± SD and were analyzed using SPSS 22.0 software (Chicago, IL, USA). Statistical differences were tested by the Chi-square test, two-tailed t-test, one-way ANOVA or Fisher's Exact test. Differences were considered significant at $P < 0.05$ (*) or highly significant at $P < 0.01$ (**).
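As an illustration of the two-tailed t-test used above for pairwise comparisons of triplicate measurements, here is a stdlib-only sketch; the data and the df = 4 critical value are illustrative, not taken from this study:

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (Student's t-test)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 +
                  (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var * (1 / na + 1 / nb))

control = [1.00, 1.05, 0.95]   # hypothetical triplicate fold changes
treated = [0.45, 0.50, 0.40]

t = two_sample_t(control, treated)
# Two-tailed critical value at alpha = 0.05 with df = na + nb - 2 = 4
print(abs(t) > 2.776)  # True -> difference significant at P < 0.05
```

In practice a statistics package (here SPSS) reports the exact P value rather than comparing against a tabulated critical value, but the underlying statistic is the same.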
**Results**
**TET3 was reduced in TGF-β1-treated ovarian cancer cells**
We first examined the expression of TET family members in TGF-β1-treated ovarian cancer cells. As shown by the qRT-PCR results (Fig. 1a), TET1 and TET3 were downregulated while TET2 remained unchanged in SKOV3 and 3AO cells treated with TGF-β1. The most pronounced TGF-β1-induced reduction was seen in TET3 mRNA in both cell lines. The TET3 protein level was then detected and, consistent with the mRNA reduction, TET3 protein was decreased in TGF-β1-stimulated ovarian cancer cells
**Fig. 3** Overexpression of TET3 antagonized TGF-β1-enhanced motility and invasion of ovarian cancer cells. **a** The in vitro migration assay showed that cell motility was promoted by exposure to TGF-β1, and this was inhibited by TET3 overexpression (original magnification × 200). **b** The in vitro invasion assay showed that TET3 overexpression abolished the stimulatory effect of TGF-β1 on cell invasion (original magnification × 200). All experiments were performed in triplicate and data are shown as means ± SD. *$P < 0.05$, **$P < 0.01$, t-test
(Fig. 1b and c), indicating the potential involvement of TET3 in TGF-β1 signaling.
**TET3 overexpression reversed TGF-β1-triggered EMT in ovarian cancer cells**
Since TGF-β1 induced EMT in ovarian cancer cells in our previous study [20], we speculated that TET3 might participate in TGF-β1-triggered EMT. To test this possibility, FH-TET3-pEF was transfected into TGF-β1-treated cells to restore the expression of TET3 (Fig. 2a). Of note, because the recombinant FH-TET3-pEF plasmid carried no fluorescent tag, the transfection efficiency was evaluated by the TET3 level in TET3-transfected cells relative to negative control cells using
---
**Fig. 4** TET3 was an upstream regulator of miR-30d. **a** Quantitative real-time PCR showed that the TGF-β1-induced decrease in miR-30s was reversed by restoration of TET3 in both SKOV3 and 3AO cells. **b** Western blot results and (**c**) the quantitative analysis indicated that restoration of miR-30d could not reverse the downregulation of TET3 induced by TGF-β1 in ovarian cancer cells. **d** The MSP assay proved that methylation of the miR-30d precursor gene was increased in TGF-β1-treated cells. **e** Quantitative analysis of the MSP results showed that the methylated proportion of the miR-30d precursor gene in FH-TET3-pEF-transfected, TGF-β1-co-treated cells was lower than that in TGF-β1-treated cells. *P < 0.05, **P < 0.01, t-test
real-time PCR and western blot (Additional file 1: Figure S1A and S1B). Restoration of TET3 antagonized TGF-β1-triggered EMT, as illustrated by reversal of the E-cadherin downregulation and the N-cadherin, Vimentin, and Snail upregulation (Fig. 2b and c). In parallel, TET3 recovery greatly counteracted the TGF-β1-stimulated enhancement of migration (Fig. 3a) and invasion (Fig. 3b) in both SKOV3 and 3AO cells. Taken together, these results showed that TET3 is a negative regulator of TGF-β1-induced EMT in ovarian cancer cells.
**TET3 upregulated miR-30d to inhibit TGF-β1-induced EMT in ovarian cancer cells**
We previously found that miR-30 family members were downregulated in TGF-β1-treated ovarian cancer cells and that restoration of miR-30d blocked TGF-β1-induced EMT [20]. We therefore hypothesized that TET3 might oppose TGF-β1-induced EMT by increasing miR-30d. Consistent with this hypothesis, overexpression of TET3 restored the TGF-β1-downregulated miR-30s (Fig. 4a). In addition, co-treatment with the miR-30d mimic could not reverse the TGF-β1-induced TET3 decrease (Fig. 4b and c), indicating that TET3 is an upstream regulator of miR-30d, not vice versa. The MSP assay further verified that methylation of the miR-30d precursor gene was increased in TGF-β1-treated cells (Fig. 4d), and this increase was abrogated by overexpression of TET3 (Fig. 4e). Of note, TET3 overexpression alone increased the expression level of miR-30d and decreased the methylation level of the miR-30d precursor gene (Additional file 2: Figure S2A and S2B). These data demonstrate that the TGF-β1-induced decrease in TET3 contributed to the higher methylation level of the miR-30d precursor gene, which subsequently caused miR-30d downregulation in ovarian cancer cells.
**TET3 was decreased in ovarian cancer tissues and negatively correlated with pathological grade**
We further assessed TET3 expression with a human ovarian cancer tissue microarray and obtained TET3 protein status in 67 ovarian cancer samples and 14 normal ovarian samples. TET3 was mainly located in the cytoplasm of both normal and tumor cells (Fig. 5). To quantify the differential TET3 expression among subgroups, a semiquantitative scoring system was introduced by multiplying the positive extent by the intensity score. No significant difference in the overall positivity of TET3 was found between normal and cancerous tissues \((P = 0.724)\). However, the immunohistochemical composite scores of cancer samples were lower than those of normal tissues \((P = 0.0269)\) (Table 2). The TET3 expression level in various histopathological subtypes of ovarian cancer was further analyzed. As shown in Table 2, the average score for serous ovarian cancer was significantly lower than that for normal tissues \((P = 0.0401)\), while no statistically significant differences were found between normal tissues and other subtypes of ovarian cancer. Clinicopathological correlation analysis of the TET3 level in ovarian cancer showed that the overall positivity of TET3 was inversely associated with the grade of differentiation of the malignant cells \((P = 0.024)\). Although no significant differences in immunohistochemical composite score were found among the different differentiation statuses, the most poorly differentiated tissues presented the lowest immunohistochemical score \((P = 0.2409)\) (Table 3).
**Discussion**
With the deepening of studies on epigenetics and tumorigenesis, it is now widely accepted that abnormal DNA
Table 2 Positivity and composite scores of TET3 in ovarian cancer tissues
| Subtypes of ovarian cancer | Overall positivity<sup>a</sup> | P value<sup>b</sup> | Composite score, mean | Composite score, SD | P value<sup>c</sup> |
|----------------------------|--------------------------------|---------------------|-----------------------|---------------------|---------------------|
| Normal (n = 14) | 12 (85.7 %) | | 1.21 | 0.90 | |
| Ca (n = 68) | 52 (76.5 %) | 0.724 | 0.72 | 0.72 | 0.0269* |
| EOC | | | | | |
| Serous cancer (n = 37) | 29 (78.4 %) | 0.707 | 0.71 | 0.70 | 0.0401* |
| Mucinous cancer (n = 8) | 8 (100 %) | 0.515 | 1.16 | 0.78 | 0.8932 |
| Endometrioid cancer (n = 5)| 4 (80 %) | 1.000 | 1.10 | 1.06 | 0.8182 |
| Clear cell cancer (n = 5) | 5 (100 %) | 1.000 | 0.86 | 0.67 | 0.4345 |
| Germ cell tumor | | | | | |
| Dysgerminomas (n = 3) | 1 (33.3 %) | 0.121 | 0.10 | 0.17 | 0.0541 |
| Endodermal sinus tumor (n = 1) | 0 | | | | |
| Immature teratomas (n = 2) | 1 (50.0 %) | 0.350 | 0.40 | 0.56 | 0.2402 |
| Granulosa cell tumor (n = 3) | 2 (66.7 %) | 0.456 | 0.40 | 0.35 | 0.1508 |
<sup>a</sup>Percentage of cases with more than 5% positive cells. <sup>b</sup>Fisher’s Exact Test (two-tailed), compared with the positive percentage of normal tissues. <sup>c</sup>t-test (two-tailed), compared with the composite scores of normal cases. * P < 0.05
methylation/demethylation is a hallmark of cancer [21]. In addition to DNMTs, TETs are novel regulators of DNA methylation/demethylation status. Growing evidence suggests that deregulation of TETs and of TET-mediated DNA demethylation takes part in tumor development and progression [14, 22–25].
In our study, we found that TET3 was decreased in ovarian cancer tissues as well as in TGF-β1-treated ovarian cancer cells. Loss of TET3 might be associated with a poorer histopathological grade in ovarian cancer patients. It has been reported that TET3 is reduced in TGF-β1-activated human hepatic stellate cells (LX-2 cells), which play a critical role in liver fibrosis. Silencing of TET3 inhibited apoptosis, promoted proliferation and induced cell fibrosis in LX-2 cells by downregulating the long non-coding RNA (lncRNA) HIF1A-AS1 [26]. In our experiments, TGF-β1 reduced TET3 in human ovarian cancer cells, and TET3 overexpression blocked TGF-β1-induced EMT by restoring the demethylated status of the pre-miR-30d promoter region. As fibrosis is also closely connected to EMT, we speculate that TET3 could be a suppressor of EMT functioning in different tissues and EMT-associated events. In both studies, TET1 and TET2 remained almost unchanged during TGF-β1 stimulation, which might be attributable to tissue or cell-type specificity. Previous studies indicated that TET1 and TET2 act mainly in embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs) and primordial germ cells (PGCs) [27–29], while TET3 was the only member identified in mouse oocytes and one-cell zygotes [30]. Although the expression pattern of TETs changes during development, differences still exist among diverse tissues and cells.
Our findings indicate that the reduction of TET3 could be a result of TGF-β1 stimulation. To date, it remains unclear how TGF-β1 decreases TET3. Recent studies showed that TETs are direct targets of multiple microRNAs (miRs), suggesting that these proteins are post-transcriptionally regulated by miRs [31]. miR-26, implicated in various cancers as an oncogene or tumor suppressor [32, 33], can decrease the expression of all members of the TET family in vertebrates [34]. Another example is miR-29, which directly targets TET1 in lung cancer cells [35] and all TET family members in human dermal fibroblasts and vascular smooth muscle cells [36]. Interestingly, miR-29 is a critical mediator in TGF-β/Smad signaling [37]. Thus, we presume that the TET3 reduction in our model could be a result of miR dysregulation. Nevertheless, TET3 could also be controlled by DNA methylation/demethylation, as found in clinical samples [15]. Elucidation of the
molecular underpinnings of TGF-β-induced TET3 reduction would contribute to understanding the regulatory network in TGF-β-stimulated EMT.
Conclusions
Our results indicate that TET3 declined upon TGF-β1 stimulation and that TET3 overexpression inhibited TGF-β1-induced EMT and EMT-mediated metastasis of SKOV3 and 3AO cells by demethylating the miR-30d precursor gene, revealing a novel mechanism of epigenetic regulation in ovarian cancer. Targeting the TGF-β1-TET3-miR-30d signaling axis might be a promising therapeutic strategy for ovarian cancer treatment.
Additional files
Additional file 1: Figure S1. Transfection efficiency of recombinant expression plasmid for TET3. **a** Quantitative real-time PCR showed that TET3 was increased by about 100 times in TET3-transfected SKOV3 cells relative to negative control cells. **b** Western blot results and the quantitative analysis revealed that TET3 protein was increased by about 3 times in TET3-transfected SKOV3 cells relative to negative control cells. All experiments were carried out in triplicate and the results were presented as means ± SD. *P < 0.05, t-test. (JPG 132 kb)
Additional file 2: Figure S2. The effect of TET3 overexpression on miR-30d. **a** Quantitative real-time PCR showed that miR-30d was increased by ectopic expression of TET3 in SKOV3 cells. **b** MSP assay found that methylation of miR-30d precursor gene was decreased in TET3-overexpressed SKOV3 cells. (JPG 123 kb)
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
ZY and JL performed cell culture and western blot; XH and LW did methylation-specific PCR; HH and HC performed immunohistochemistry. XZ and JL performed qRT-PCR; XL, WC and LZ were involved in the experimental design and data analysis. ZY and LZ wrote the manuscript. All authors read and approved the final manuscript.
Acknowledgments
This work was financially supported by the National Natural Science Foundation of China (No.30973429).
Author details
1Center for Translational Medicine, the First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi 710061, China. 2Key Laboratory for Tumor Precision Medicine of Shaanxi Province, the First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi 710061, China. 3Department of Pathology, the First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi 710061, China. 4Center of Laboratory Medicine, the First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi 710061, China.
Received: 1 February 2016 Accepted: 28 April 2016 Published online: 04 May 2016
References
1. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2015. CA: A Cancer Journal for Clinicians. 2015;65:5–29.
2. Kalluri R, Weinberg RA. The basics of epithelial-mesenchymal transition. J Clin Invest. 2009;119:1420–8.
3. Katz LH, Li Y, Chen JS, Munoz NM, Majumdar A, Chen J, Mishra L. Targeting TGF-beta signaling in cancer. Expert Opin Ther Targets. 2013;17:743–60.
4. Kaufhold S, Bonavida B. Central role of Snail1 in the regulation of EMT and resistance in cancer: a target for therapeutic intervention. J Exp Clin Cancer Res. 2014;33:62.
5. Dawson MA, Kouzarides T. Cancer epigenetics: from mechanism to therapy. Cell. 2012;150:12–27.
6. Holliday R, Pugh JE. DNA modification mechanisms and gene activity during development. Science. 1975;187:226–32.
7. Riggs AD. X inactivation, differentiation, and DNA methylation. Cytogenet Cell Genet. 1975;14:9–25.
8. Tahiliani M, Koh KP, Shen Y, Pastor WA, Bandukwala H, Brudno Y, Agarwal S, Iyer LM, Liu DR, Aravind L, Rao A. Conversion of 5-methylcytosine to 5-hydroxymethylcytosine in mammalian DNA by MLL partner TET1. Science. 2009;324:930–5.
9. Ito S, D'Alessio AC, Taranova OV, Hong K, Sowers LC, Zhang Y. Role of Tet proteins in 5mC to 5hmC conversion, ES-cell self-renewal and inner cell mass specification. Nature. 2010;466:1129–33.
10. Ito S, Shen L, Dai Q, Wu SC, Collins LB, Swenberg JA, He C, Zhang Y. Tet proteins can convert 5-methylcytosine to 5-formylcytosine and 5-carboxylcytosine. Science. 2011;333:1300–3.
11. Yang H, Liu Y, Bai F, Zhang JY, Ma SH, Liu J, Xu ZD, Zhu HG, Ling ZQ, Ye D, Guan KL, Xiong Y. Tumor development is associated with decrease of TET gene expression and 5-methylcytosine hydroxylation. Oncogene. 2013;32:663–9.
12. Lian CG, Xu Y, Ceol C, Wu F, Larson A, Dresser K, Xu W, Tan L, Hu Y, Zhan Q, Lee CW, Hu D, Lian BQ, Kleeff S, Yang Y, Neiswender J, Khorasani AJ, Fang R, Lezzano C, Duncan LM, Scolyer RA, Thompson JF, Kakavand H, Houvras Y, Zon LI, Milhim MC, Jr., Kaiser UB, Schattton T, Woda BA, Murphy GF, Shi YG. Loss of 5-hydroxymethylcytosine is an epigenetic hallmark of melanoma. Cell. 2012;150:1135–46.
13. Muller T, Gesii M, Waha A, Isselstein LJ, Luxen D, Freihoff D, Freihoff J, Becker A, Simon M, Hammes J, Denkhaus D, zur Muhlen A, Pietsch T. Nuclear exclusion of TET1 is associated with loss of 5-hydroxymethylcytosine in IDH1 wild-type gliomas. Am J Pathol. 2012;181:675–83.
14. Hsu CH, Peng KL, Kang ML, Chen YR, Yang YC, Tsai CH, Chu CS, Jeng YM, Chen YT, Lin FM, Huang HD, Liu YY, Teng YC, Lin ST, Lin RK, Tang FM, Lee SB, Hsu HM, Yu JC, Hsiao PW, Juan LJ. TET1 suppresses cancer invasion by activating the tissue inhibitors of metalloproteinases. Cell Rep. 2012;2:568–79.
15. Rawluszko-Wieczerok AA, Siera A, Horbacka K, Horst N, Krokowicz P, Jagodziński PP. Clinical significance of DNA methylation mRNA levels of TET family members in colorectal cancer. J Cancer Res Clin Oncol. 2015;141:1379–92.
16. Zhang Q, Chen L, Helfand BT, Jang TL, Sharma V, Kozlowski J, Kuzel TM, Zhu JJ, Yang XJ, Javonovic B, Guo Y, Lonnning S, Harper J, Teicher BA, Brendler C, Yu N, Catalona WJ, Lee C. TGF-beta regulates DNA methyltransferase expression in prostate cancer, correlates with aggressive capabilities, and predicts disease recurrence. PLoS One. 2011;6:e25168.
17. Cardenas H, Vieth E, Lee J, Segar M, Liu Y, Nephew KP, Matei D. TGF-beta induces global changes in DNA methylation during the epithelial-to-mesenchymal transition in ovarian cancer cells. Epigenetics. 2014;9:1461–72.
18. Kogure T, Kondo Y, Kakazu E, Ninomiya M, Kimura O, Shimosegawa T. Involvement of miRNA-25b in epigenetic regulation of transforming growth factor-beta-induced epithelial-mesenchymal transition in hepatocellular carcinoma. Hepatol Res. 2014;44:907–19.
19. Bu F, Liu X, Li J, Chen S, Tong X, Ma C, Mao H, Pan F, Li X, Chen B, Xu L, Li E, Kou G, Han J, Guo S, Zhao J, Guo Y. TGF-beta1 induces epigenetic silence of TIP30 to promote tumor metastasis in esophageal carcinoma. Oncotarget. 2015;6:2120–33.
20. Ye Z, Zhao L, Li J, Chen W, Li X. miR-30d Blocked Transforming Growth Factor beta1-Induced Epithelia-Mesenchymal Transition by Targeting Snail in Ovarian Cancer Cells. Int J Gynecol Cancer. 2015;25:1574–81.
21. Rengucci C, De Maio G, Casadei Gardini A, Zucca M, Scarpi E, Zingaretti C, Foschi G, Turneidi MM, Molinari C, Saragoni L, Puccetti M, Amadori D, Zoli W, Calistri D. Promoter methylation of tumor suppressor genes in pre-neoplastic lesions; potential marker of disease recurrence. J Exp Clin Cancer Res. 2014;33:65.
22. Ko M, Huang Y, Jankowska AM, Pape UJ, Tahiliani M, Bandukwala HS, An J, Lamperti ED, Koh KP, Ganetzky R, Liu XS, Aravind L, Agarwal S, Maciejewski JP, Rao A. Impaired hydroxylation of 5-methylcytosine in myeloid cancers with mutant TET2. Nature. 2010;468:839–43.
23. Abdel-Wahab O, Mullally A, Hedvat C, Garcia-Manero G, Patel J, Wadleigh M, Malinger S, Yao J, Kilpivaara O, Bhat R, Huberman K, Thomas S, Dolgalev I, Heguy A, Paietta E, Le Beau MM, Beran M, Tallman MS, Ebert BL, Kantarjian HM, Stone RM, Gilliland DG, Crispino JD, Levine RL. Genetic characterization of TET1, TET2, and TET3 alterations in myeloid malignancies. Blood. 2009;114:144–7.
Voices United
Friday, July 25, 2014
Market Square Presbyterian Church, Harrisburg
Voices United is a showcase for the various LGBTQ and Allied choruses and performers of the Harrisburg area. It is an evening of music that expresses the deepest sentiments of the individual groups that comprise the Gay, Lesbian, Bisexual, Transgender and Questioning communities and explores our common experiences with our Straight allies.
Pride Festival of Central PA - Visit the CPWC booth
Saturday, July 26, 2014 at Riverfront Park, Harrisburg
CPWC Spaghetti Dinner
Saturday, November 15, 2014; 4:30 to 8:30
Colonial Park UCC
CPWC Pancake Breakfast
Saturday, April 25, 2015; 7:30 AM
Colonial Park UCC
The Central Pennsylvania Womyn’s Chorus rehearses on Monday nights (with holiday breaks) and the board meets monthly at Colonial Park United Church of Christ, 5000 Devonshire Road, Harrisburg.
CPWC is a proud member of:
Supporting GLBT choruses as we change our world through song.
Sister Singers Network
Pennsylvania Association of Nonprofit Organizations
The Central Pennsylvania Womyn’s Chorus brings together a diverse group of women, united by the joy of singing, to celebrate and empower women and to affirm a positive image of lesbians and feminists.
CENTRAL PA WOMYN’S CHORUS PRESENTS...
SHE-BOP,
the Beat Goes On!
SATURDAY
MAY 31, 2014
UNITY CHURCH
927 Wertzville Road
Enola, PA
Show Time: 7:30 pm
SUNDAY
JUNE 1, 2014
UNITARIAN UNIVERSALIST CONGREGATION OF YORK
925 S. George St.
York, PA
Show Time: 3:00 pm
www.cpwchorus.org
The Central Pennsylvania Womyn’s Chorus is proud to present our sponsor!
Central PA’s Best Storage Value!
CAPITAL SELF STORAGE
Clean ■ Dry ■ Fenced ■ Economical ■ Convenient ■
Monthly & Long Term Leases ■ Autobilling ■ Business Deliveries
24 hr Access ■ Resident Manager
Call or stop by for special rates & discounts
Office hours: Mon-Fri 9-6 Sat 9-2
3861 Derry Street
Harrisburg, PA 17111
(717) 564-9707
In Loving Memory
Barbara A. Nissley
1946-2014
Daniel Snyder
DRS Printing Services Inc.
6 North Grantham Road
Dillsburg, PA 17019
717.502.1117
email@example.com
Lori Baker Pizzarro
The Design Department
PO Box 480
Mechanicsburg, PA 17055
(717) 580-8070
(717) 620-3489 Fax
Graphic Design from Concept Through Completion
firstname.lastname@example.org
www.DesignDepartment.biz
I’m never too busy for your referrals!
Pamela Johnson
REALTOR®
email@example.com
530 N. Lockwillow Avenue
Harrisburg, PA 17112
Office: (717)657-4700
CELL: (717)395-8574
The Harrisburg Gay Men’s Chorus celebrates the GLBT experience in song and performance, fosters a greater appreciation of the male choral musical tradition, nurtures members in their artistic and personal growth, and enriches the region through entertainment and positive interaction.
harrisburgmenschorus.org
JOIN THE HARRISBURG GAY MEN’S CHORUS, MCC, AND THE CENTRAL PA WOMYN’S CHORUS AT VOICES UNITED ON JULY 25, 2014.
CPWC COOKS
and you’re invited
to the annual Spaghetti Dinner
Saturday, November 15, 2014
4:30 - 8:30
Colonial Park UCC
The Womyn’s Chorus could not continue to sing without the support of those individuals, companies, and organizations who helped this season.
**Donors and Supporters**
- Capital Storage, Derry St.
- Battlefield Bed & Breakfast
- Dennis Foreman, Home Improvement
- J.A. Sharp, jeweler
- The Jigger Shop, Mt. Gretna
- Theatre Harrisburg
- Harrisburg Gay Men’s Chorus
- Pride Festival of Central Pennsylvania
- Lucy Twitchell
- Arleen Shulman
- Cynthia Swanson
- Mary Nancarrow
- Marlene Kanuck
- Joanne Semones
- John Folby
- Carol Nodgaard
- Peg Welch
**Volunteers**
- Jane Brickley
- Julie Metzger
- Fern Gaffey
- Ginny DeChristopher
- Donna Stewart
- Ruth Nancarrow
- Linda Mussoline
- Sarah Dewey
- Donna Gomboc
- Matt Sykes
- Deb Wasileski
- Nick Wasileski
- Maya Wasileski
- Dave Johnson
We thank Colonial Park United Church of Christ, Unity Church of Harrisburg and Unitarian Universalists of York for hosting our rehearsals and events, as well as The Patriot News, pennlive.com, the York Daily Record, The Paxton Herald, WITF, and WHTM for publicizing our events. Special thanks to Paul Foltz for his help with our production.
---
**Colonial Coffee Shop**
*John and Helen Tsoukalos*
*Owners*
**Hours:**
- M-F: 6 am - 8 pm
- Sat, Sun: 7 am - 2 pm
938 S. George St.
York
www.yorkcolonialcoffeeshop.com
717-854-0956
---
**We can't wait to do the TIME WARP with the Central PA Womyn's Chorus!**
*Kelly Jean McEntee & Angela Dicks*
*Congratulations!*
Welcome to “She-Bop, The Beat Goes On”!
We are delighted that you are here!
Tonight we enjoy a soundtrack of music from the 50’s, 60’s, and 70’s that many of us grew up with, or grew to appreciate. After all, how can you not relate to Lesley Gore’s pre-feminist anthem, *You Don’t Own Me*? Or cheer on young lovers at the *Chapel of Love*? Or feel like dancing to the *Time Warp*? We invite you to relax, remember and have fun!
Please join us at Voices United on Friday, July 25, when the Womyn’s Chorus will sing with the Harrisburg Gay Men’s Chorus, MCC and others at Market Square Presbyterian Church at 8 p.m. to help kick off the Central PA PRIDE Festival. It’s always an inspiring experience.
Thank you for being the reason why we sing!
Cynthia Swanson
CPWC President
---
**AEDS SAVE LIVES!**
These devices have a proven track record of saving lives in public places as well as in the workplace. They can do the same for you and your employees. Please consider installing AEDs in your workplace.
*Occupational Safety and Health Administration (OSHA)*
Call today for AED pricing and training packages. Learn how you and your staff can help save a life during a cardiac emergency.
**CPR-NOW,**
The First Response Team™
3540 Pebble Ridge Drive
York, PA 17402
Phone (717) 577-0418
Fax (717) 757-4255
firstname.lastname@example.org
---
**Vito’s**
Pizza & Beer
1734 S. Queen St.
York, Pa
717-843-1143
Monday - Thursday:
11:00 AM to 12:00 Midnight
Friday & Saturday:
11:00 AM to 1:00 AM
Closed Sunday
www.vitospizzaandbeer.com
---
Happy 60th Birthday, Cynthia!!!
Time Warp (1974) .................................................. Richard O’Brien
from Rocky Horror Picture Show
Your hosts: Deb Glorius and Cathy Nelson
We Love the 50’s .......................................................... Arr. Jay Althouse
Let the Good Times Roll; Sixteen Candles; Lipstick On Your Collar; Hold Me, Thrill Me, Kiss Me; Shake, Rattle and Roll
A Teenager In Love (1959) ........................................... Doc Pomus and Mort Shuman
One Fine Day (1963) .................................................... Gerry Goffin and Carole King
Girl From Ipanema (1963) .......................... Vinicius de Moraes and Antonio Carlos Jobim
Chain of Fools (1967) .................................................. Don Covay
Soloist: Cathy Nelson
You Don’t Own Me (1963) ........................................... John Madara and Dave White
The Shadow Of Your Smile (1965) .................. Paul Francis Webster and Johnny Mandel
from The Sandpiper
Soloists: Cheryl Huber and Laura Dalton
I Say a Little Prayer (1966) ................................. Burt Bacharach and Hal David
Intermission
Join Us in Worship
Sundays at 10 am
Midweek Service—Wed., 7 pm
First Sunday Breakfast,
June 1
Serving starts 8 am
No Charge
All Are Welcome!
Sharing Our Caring
An evening of support, dinner & community for folks affected by HIV/AIDS, friends, and family.
2nd Mondays, 6:30 pm
***Hosted by CPWC on June 9***
Who joins the Central PA Womyn’s Chorus?
People who are . . .
GAY
petite
straight
young or old
plus short tall
volunteers
SINGERS
who share a love of women’s choral music
and a vision of women’s diversity and empowerment.
Find your voice with us!
this ad sponsored by Cheryl and Carl Huber
The Central Pennsylvania Womyn’s Chorus brings together a diverse group of women, united by the joy of singing, to celebrate and empower women and to affirm a positive image of lesbians and feminists.
Program Cover and Poster Art: Lori Baker Pizzarro, Design Department
Concert Program: Arleen Shulman
Narrator: Marlene Kanuck
Supporting Cast: Cynthia Swanson, Mary Nancarrow, Donna Stewart, Pamela Johnson, Kelly McEntee
1. What African language is used in *The Lion Sleeps Tonight*?
2. Where is Ipanema?
3. Who sang *One Fine Day*?
4. Hold Me, Thrill Me, Kiss Me didn’t make it to the top 100 until what year?
5. When did Sonny and Cher break up?
6. What political events inspired the Beatles’ song *Blackbird*?
7. Which of the concert’s songs is the most recorded in pop history?
8. Which of the concert’s songs is based on a single minor (Aeolian) chord?
9. Where is Penny Lane?
10. How tall is Paul Simon?
11. How many boys in the original Beach Boys? How many were related to each other?
12. What was Dionne Warwick’s first hit?
**Answers**
1. Zulu
2. Brazil
3. The Chiffons
4. 1965
5. 1975
6. Civil rights movement
7. Yesterday
8. Chain of Fools
9. Liverpool, England
10. 5’3”
11. Five. Four were relatives (three brothers and a cousin)
12. Don’t Make Me Over in 1962
---
**Central Pennsylvania Womyn’s Chorus**
Victor Fields, **Artistic Director**, is a versatile choral conductor, organist, pianist and teacher. He is currently the Associate Conductor and Staff Accompanist for the Harrisburg Choral Society. He is also the Director of Music and Organist at St. Paul’s Lutheran Church, York, Organist and Pianist at Temple Beth Israel, York, as well as an Adjunct Professor of Music at Mount St. Mary’s University in Emmitsburg, Maryland. He has performed solo keyboard recitals in the Northeast, the South and California, and was a featured organ soloist with the York Symphony Orchestra in 2013. He has traveled and performed with various choral organizations in the US and in six European countries. He earned his music degrees and training from Mansfield University (PA), The Peabody Conservatory of Music and the University of Cincinnati, College-Conservatory of Music. He has been a prize winner of several organ competitions from the Harrisburg, Baltimore and Richmond Chapters of American Guild of Organists, and the recipient of several competition scholarships and music awards from the Peabody and Cincinnati Conservatories.
Jordan R Markham, **Accompanist**, is an alumnus of *The Peabody Conservatory of The Johns Hopkins University*. He is a classically trained singer. He was a chorister at The National Cathedral and a chorister and soloist in The Handel Choir of Baltimore for two seasons. Throughout the past decade, Mr. Markham has performed with The Baltimore Symphony Orchestra, at Carnegie Hall, The Boston Symphony Hall, and The Jackie Gleason Theatre with auditioned choirs led by some of the most esteemed directors. In more recent years he has performed on-stage with the Peabody Opera department in Mozart’s *Die Zauberflöte* and *Così fan tutte*, Verdi’s *La Traviata*, and Leoš Janáček’s *The Cunning Little Vixen*. He currently serves as organist and choir-master at Gloria Dei Lutheran church in Arnold, Maryland.
Renee Bartholomew, percussion and Patricia George, percussion and bass, have played with many area bands. They often join CPWC for its concerts. |
Low carbon city: Obstacles and solutions identified by the city partners
Summary
An analysis based on a set of identified obstacles
This set of 20 obstacles aims to provide a framework for the low-carbon activities of the local authorities. The ways local authorities address these obstacles, and the specific local answers given to these challenges, are a good indicator of the strategy in place in each city.
This set of obstacles was defined by the project partners during a series of workshops and was finally validated on 28 September 2017 during the project workshop in Suceava.
Among this set, each city chose 10 obstacles to tackle.
Depending on the local context, this analysis has been done in-house or outsourced to a subcontractor. Many cities have carried out this work in close collaboration with their local stakeholder groups.
These obstacles are therefore treated as thematic fields, examined in order to understand the activities in place to tackle them. The focus is not on the problems themselves but on the solutions proposed and how the problems can be overcome. The obstacles are simply a means of classifying these solutions and, where possible, of identifying common responses and strategies among the MOLOC partners.
1.1. Silo thinking
Lack of coordination / lack of leadership capacity and know-how for complex, cross-sectoral processes
1. Make sure that a common (understandable) language is used
2. Remind the local authority and local stakeholders of their responsibilities
3. Ensure internal communication and cooperation between services
4. Ensure cooperation and coordination between public bodies (and their competences)
5. Refer to a shared transversal vision within all activities
1.2. Changing behaviours / mobilisation
Lack of understanding towards stakeholders and citizens – tools/methods for dialogue and co-construction
6. Overcome lack of stakeholders’ participation in the low-carbon strategy (including changing stakeholder behaviours)
7. Ensure involvement of citizens and users in the low-carbon strategy (including ownership and empowerment of citizens)
8. Use analysis methods from sociology and social sciences
9. Deal with conflicts of interest and lobbies
10. Address concretely the question of unsustainable behaviour and lifestyles (NIMBY, etc.)
1.3. Political vision
Strong political commitment - Low carbon actions are part of all strategic plans
11. Follow a long-term approach
12. Cope with political changes
13. Ensure commitment and motivation to the low-carbon strategy
1.4. Implementation
Socio-economic and technical arguments – lack of allocated budget – demonstration
14. Use proper indicators (including indicators for evaluation and monitoring)
15. Build a sustainable financial strategy (including human resources)
16. Evaluate the economic and social aspects (including a global approach)
17. Cope with / go beyond national legislation
18. Select actions that have highest “low carbon” potential (justify choosing specific actions)
19. Test replicable pilot projects
20. Make action attractive (identifying the right incentives, communicating)
| 1.1. Silo thinking | Hamburg | Katowice | Lille | Suceava | Torino |
|-------------------|---------|----------|-------|---------|--------|
| 1. Make sure that a common language is used | | | | | |
| 2. Remind the local authority and local stakeholders of their responsibilities | | | ✓ | ✓ | |
| 3. Ensure internal communication and cooperation between services | ✓ | ✓ | ✓ | ✓ | ✓ |
| 4. Ensure cooperation and coordination between public bodies | | | | | |
| 5. Refer to a shared transversal vision within all activities | ✓ | | | ✓ | ✓ |
| 1.2. Changing behaviours / mobilisation | Hamburg | Katowice | Lille | Suceava | Torino |
|----------------------------------------|---------|----------|-------|---------|--------|
| 6. Overcome lack of stakeholders’ participation in the low-carbon strategy | ✓ | ✓ | ✓ | ✓ | ✓ |
| 7. Ensure involvement of citizens and users in the low-carbon strategy | ✓ | ✓ | ✓ | ✓ | ✓ |
| 8. Use methods of analysis from sociology and social sciences | | | | | |
| 9. Deal with conflicts of interest and lobbies | ✓ | | | ✓ | |
| 10. Address concretely the question of unsustainable behaviour and lifestyles | ✓ | | | ✓ | |
| 1.3. Political vision | Hamburg | Katowice | Lille | Suceava | Torino |
|-----------------------|---------|----------|-------|---------|--------|
| 11. Follow a long-term approach | | | | ✓ | |
| 12. Cope with political changes | | | | ✓ | |
| 13. Ensure commitment and motivation to the low-carbon strategy | ✓ | ✓ | ✓ | ✓ | |
| 1.4. Implementation | Hamburg | Katowice | Lille | Suceava | Torino |
|---------------------|---------|----------|-------|---------|--------|
| 14. Use proper indicators | ✓ | ✓ | | ✓ | |
| 15. Build a sustainable financial strategy | ✓ | ✓ | ✓ | ✓ | |
| 16. Evaluate the economic and social aspects | | | | | |
| 17. Cope with / go beyond national legislation | | | | | |
| 18. Select actions that have highest “low carbon” potential | ✓ | ✓ | ✓ | ✓ | |
| 19. Test replicable pilot projects | | | | ✓ | |
| 20. Make action attractive | ✓ | ✓ | | | |
Remind the local authority and local stakeholders of their responsibilities (Obstacle 2)
OBSTACLE: The local authority and local stakeholders “forget” the responsibility they bear when leading a low-carbon strategy. Sectoral policies seem to continue following their own goals and habits inherited from the past.
Addressed by the cities of Lille and Suceava
Definition of the obstacle - context
In Lille, local authorities are engaging in actions to develop a low-carbon strategy. Awareness and a sense of responsibility exist in the implementation of a low-carbon city, and policies are being developed. These policies have become a priority for the municipality since the City engaged its staff in the application to become European Green Capital 2021.
The main challenge is to make the low-carbon strategy more visible and to establish it as a clear priority. This applies both to institutional communication, which is very limited, and to the poor visibility of key initiatives involving inhabitants. Initiatives exist, but they are not communicated within a global sustainable-city approach.
In Suceava, a series of strategies aimed at reducing carbon emissions have been developed, through which the authorities pursue their emission-reduction policies. A Local Sustainable Energy Action Plan (SEAP) for the Municipality of Suceava has been drawn up; however, the implementation of the SEAP proposals has not been monitored.
The main concern is to integrate the different sector-specific strategies, each with its own objectives, into a single global strategy at the local level to reduce CO₂ emissions, and to raise visibility among potential stakeholders. Only a small number of stakeholders are involved, and other levels of governance are not involved in the SEAP.
Horizons & solutions
In Lille, the application to the title of European Green Capital has given a frame to many actions and initiatives.
Within the Suceava Municipality administration, an interdepartmental working group has been set up to produce the reference data and the list of possible actions aimed at reducing carbon emissions. This working group is made up of the Heads of the Environment Office and of the European Communication / Projects Bureau, and coordinates between the different sectors and stakeholders involved in the development of individual projects. Nevertheless, dialogue with other stakeholders remains weak.
The local authority has developed a low-carbon policy and strategy, and practical measures to reduce carbon emissions are already being implemented.
Ensure internal communication and cooperation between services
(Obstacle 3)
OBSTACLE: Inherent silo mentality, and consequently a lack of cross-sectoral processes, within the local authority
Addressed by the cities of Hamburg, Katowice, Lille, Torino
Definition of the obstacle - context
In Hamburg, leadership on ecological questions lies with the Department for Environment and Energy. Communication between the different actors takes place at several levels: at the working level, in most cases directly between the services and actors involved, and more formally at the management level of the different departments and within the Senate of Hamburg.
In Katowice, the city is facing difficulties in coordinating and monitoring the progress of implementation of the LCEP. The main problem is the lack of procedures requiring the departments responsible for implementing tasks to consult on those tasks and to report their implementation status and the results achieved.
In Lille, difficulties are encountered in engaging all the services concerned systematically, and not only opportunistically. Due to budget cuts and reduced human resources, services have fewer opportunities to get involved in cross-sectoral working groups, and some key services are less active.
In Torino, the local Public Administration has organized an inter-departmental roundtable consisting of the Environment, Private Households, Infrastructure, Mobility, Social Policies, Information Systems, Urban Planning, Green, Public Construction, Civil Protection, and Energy Management departments.
The administrative apparatus is currently complex, and making cooperation efficient remains very difficult.
Horizons & solutions
In Hamburg, through the climate plan, the local authorities are focusing primarily on key areas, independently of the Federal government. To that end, strategic clusters were formed, integrating and supporting cross-sectoral collaboration in these key areas: transformation of urban spaces (city/neighbourhood development), the green economy, Hamburg as a role model, and climate communication.
The Coordination Centre for Climate Issues creates the necessary working structures with the participation of the relevant sectoral ministries, public enterprise and the affected target groups from the private sector and will report on joint routes to achieve the targets in the next update of the Hamburg Climate Plan.
Set up **key partnerships**, e.g. collaboration with ZEBAU GmbH, a company that establishes and promotes the use of renewable energy sources as well as the construction of energy-efficient buildings, working with multiple stakeholders in the fields of politics and administration, science and research, and planning and construction.
In Katowice, the Mayor created in 2014 the **Working Team** responsible for the implementation and realization of the Low Carbon Economy Plan (LCEP). Many actions have been taken to improve internal communication:
- introduction of an electronic document flow system,
- participation in conferences and trainings,
- organization of meetings between the City Hall and external entities,
- organization of meetings of managers and heads of departments,
- submission of proposals at the Mayor of Katowice’s meetings for broader discussion and possible acceptance,
- establishment of reporting-table templates and deadlines for their completion in a given year, defining the institutions and units responsible for each area of activity.
In the future, communication procedures still need to be improved.
In Lille, real efforts have been made to promote internal dialogue and collaboration. However, the trigger for such collaboration is usually an immediate subject, such as the candidacy for Green Capital, often driven by the DGA, or the renewal of the EEA label (Cit'ergie in the French context).
In Torino, the coordination among services to reach a local strategy for climate change is led by the **environmental sector of the City**, which promoted the **TAPE Plan**. Within the environmental sector, the other services involved are the offices for Environmental Education, Sustainable Mobility, the Energy Manager, Economic Development, and Education.
The current administration represents the **first systematic and formalized attempt to use an inter-sectoral approach** among policy-makers. The assessment of indicators, the creation of a common database and a common approach to the policy, the definition of key indicators, and a shared vision of how to reach the target **are perceived as fundamental by the administration**.
There are **no specific indicators to measure and evaluate** the cooperation between different departments of the public administration.
Refer to a shared transversal vision within all activities (Obstacle 5)
OBSTACLE: The transversal vision about low carbon is not sufficiently shared within the local authority
Addressed by the cities of Suceava, Torino, Hamburg
Definition of the obstacle - context
In Torino, since coordination among services to deliver the national strategy on climate change is still at an early stage, operational activities among services are limited to occasional joint meetings, based on an agreement to share data, approaches, and goals in order to develop specific guidelines for a low-carbon strategy. Currently, the Urban Planning services are working to define practical binding rules for environmentally sustainable land-use planning regulations.
In Suceava, many planning documents and strategies elaborated by other institutions are not aligned with carbon-reduction policies, and so far there is no cooperation between departments to develop such a shared vision.
Hamburg follows a clear target and has a transversal vision regarding the reduction of its CO₂ emissions: optimising its infrastructure, working with universities and research centres, and increasing the use of public transport. This vision has been established and emphasised by the State of Hamburg as a foundational policy, and is mainly set out in the Hamburg Climate Plan of 2013. The Plan was developed across departments, incorporating the districts, and in cooperation with the city’s stakeholders.
Horizons & solutions
In Torino, an inter-departmental roundtable focused on climate change has been created. The departments involved are Environment, Private Households, Infrastructure, Mobility, Social Policies, Information Systems, Urban Planning, Green, Public Construction, Civil Protection, and Energy Management.
The city administration of Suceava is working to improve how widely the cross-cutting vision on low carbon emissions is shared. The MOLOC project itself has already contributed to sharing this vision, both between the local authority’s own departments (public transport service, urban planning, energy department, sanitation service, public lighting service) and with other stakeholders participating in working groups (regional development agency, environmental protection agency, utility suppliers for gas and heat, other cities in the region, etc.). The actions carried out in the working groups have helped strengthen the participants’ communication skills on reducing carbon emissions.
In Hamburg, through the various projects in which it takes part, the city has consistently learned from other partners how to deal with systemic problems. One of the main focuses is to improve communication between the partners, local stakeholders, and authorities by giving them the tools they need.
Overcome lack of stakeholders’ participation (Obstacle 6)
OBSTACLE: Local stakeholders do not participate in the low-carbon strategy of the territory and consequently do not adapt their habits and behaviours accordingly
Addressed by all the cities
Definition of the obstacle - context
In Torino, there is a **lack of an organized shared platform** for making stakeholders participate in the low-carbon strategies. Moreover, participative approaches that effectively involve stakeholders are normally **time-consuming**. The difficulty of including **conflicting points of view**, and then aggregating stakeholders’ preferences in a participative decision-making context, is another issue to be tackled.
In Suceava, there is a **lack of interest from local actors** and no “culture” of reducing carbon emissions. For most citizens, issues related to CO₂ reduction are purely formal. There is no local authority policy or strategy that aims to attract and motivate stakeholders to reduce carbon emissions.
In Lille, despite very good cooperation on certain projects or initiatives, there are no real territorial dynamics that could encourage local actors to become proactive and committed. Civil society involvement still relies too heavily on **consultative, top-down processes**.
In Hamburg, a wide field of different types of stakeholders has been assembled. The District of Altona can be cited as one such stakeholder (the district within Hamburg where the MOLOC action plan is to be implemented).
In Katowice, stakeholders do not engage in low-carbon policies. Despite the public disclosure of the LCEP in 2014 and of its update in 2017, no comments or proposals for changes were received from non-governmental and public-benefit organizations. NGOs’ interest in low-carbon policy is weak. Nevertheless, in accordance with the Statute of the City, **consultations with residents can be held**. During the updates of the "Low-carbon economy plan for the city of Katowice" and the "Assumptions for the plan of supply of heat, electricity and gas fuel for the city of Katowice", **residents and stakeholders can participate in the planning and energy management process**. Information about investment tasks planned for implementation in the city is provided to them: the documents are publicly available and subject to social consultation. **Cooperation with non-governmental organizations** includes: a) financial cooperation, in the form of commissioning non-governmental organizations to carry out public tasks selected through competitions, and b) non-financial cooperation: informational, organizational, and other. Such cooperation is formally regulated.
Horizons & solutions
In Torino, defining the list of local stakeholders involved in the low-carbon strategy takes a long time, spent building involvement and relationships. Some stakeholders are well established and collaborate regularly with the environment department. Several projects developed by the city have created a network of local stakeholders composed of private companies, public administration departments, cultural institutions, the university, and local citizens’ associations.
In Suceava, dialogue efforts are being developed via media channels and various European projects, with working groups created at the level of each project involving stakeholders from the above-mentioned categories of interest.
In Lille, many projects and initiatives have successfully brought local actors together, but they lack visibility on the low-carbon side, which hinders a real dynamic.
In Hamburg, direct one-on-one dialogues with the different stakeholders have been implemented.
The city of Katowice participates in ongoing events to connect with stakeholders, for example the New Economy Forum, conferences for local governments, and the cyclical meetings of the Committee on Local Energy Policy of the Silesian Union of Municipalities and Districts. City Hall staff take part in meetings to which they are invited, such as sessions of the Auxiliary Councils or debates organized by non-governmental organizations. In order to broaden cooperation with stakeholders, the city has joined several international projects.
Ensure involvement of citizens and users in the low-carbon strategy (Obstacle 7)
OBSTACLE: Lack of expertise and practice when it comes to working with the civil society in a low carbon strategy instead of just providing top-down information
Addressed by all the cities
Definition of the obstacle – context
Hamburg has a long tradition of participatory processes and initiatives that include citizens in decision-making. Today, the city organises workshops, a learning process in which civil society can interact directly with the stakeholders and obtain information.
In Katowice, communication with citizens and organizations is one of the strategic objectives of the LCEP.
The city has a mining tradition and its inhabitants are accustomed to coal as a (cheap and commonly available) source of energy. The barriers are mainly financial, along with insufficient knowledge about the health and environmental effects of burning coal in inefficient heating installations, about new RES-based technologies, about the possibilities of co-financing activities related to increasing energy efficiency, replacing heat sources and using RES, and about the technical conditions for modernizing existing heating installations.
Lille is a pioneer city in citizen participation, with a charter of participatory democracy and several successful initiatives, notably a participatory budget, but all of these initiatives would require more financial resources.
In Suceava, there is fairly limited experience of working with civil society, and no large base of organizations and citizens involved in such actions. The challenge lies in acquiring expertise and experience in involving citizens and users.
In Torino, citizens are informed about low-carbon policies through formal channels such as:
- Web page of environment department
- Front office for distribution of informative material and free consulting, for example the energy office (Energy Counter)
- Direct communication between users and the administration, for example firstname.lastname@example.org, an operational email address for slow mobility
- In some cases, the public administration meets individual citizens about petitions/complaints (not only on environmental issues). Meeting individual citizens is an occasion to solve specific problems, but also to inform and give advice on environmental aspects. Many initiatives implemented through the tape Plan...
Horizons & solutions
In Hamburg, civil society has been involved in decision making from the beginning. The participatory process was expanded by the Coordination Centre for Climate Issues in 2014 for Hamburg's climate masterplan, increasing the range of stakeholders and topics and including them throughout the whole process.
In Katowice, consultations with residents are organized to increase social participation and involve the society in the decision-making process (urban internet platform – open meetings, working teams). A system of subsidies has been implemented to help residents to change their heating system and to finance energy retrofitting. Many tools exist to improve the dialogue/participation:
- The Residents Service Office in Katowice, where residents can call employees of the Office, who answer questions, provide explanations or refer them to those responsible for the matter.
- The civic budget (5th edition in 2018) with the largest funds per capita among provincial cities in Poland, with one of the highest voter turnout in the country.
- Drafts of resolutions for which residents can express their opinions.
- The service Naprawmyto.pl (which can be translated into 'let's fix it'), for mapping defects in public space identified and reported by citizens.
- The local initiative.
- The portal "Energy and Environment in Katowice" for educational activities, which should become a source of information on "municipal energy".
- The application 'wCOP drzewo' ('dig a tree') was launched, which allows citizens to indicate places for planting trees.
- The Municipal Energy Center was opened (in Sept. 2018) to serve residents through energy consulting.
In Lille, consultation is still too rarely bottom-up. The selection of projects via the participatory budget model makes it possible to initiate a change on this subject. This method should be extended to other subjects such as the redevelopment of public spaces or temporary urban management.
In Suceava, dialogues have been implemented via the following interfaces: open days; deliberative opinion polls; consultative committees; citizens' juries; referendums.
The City of Turin and the Metropolitan City are now promoting new tools to communicate with citizens, integrating ICT and interactive consultation models. Furthermore, it is necessary to reinforce information activities and public debates, starting from the questions specifically raised by participants. There are some experimental projects related to citizen involvement, but they usually have no lasting impact on ordinary activities. Ecological Sundays, organized by the City of Turin, is one such initiative.
Deal with conflict of interest and lobbies (Obstacle 9)
OBSTACLE: Conflict of interest and lobbies (internal and external) prevent the development of a strong low carbon policy
Addressed by the cities of Suceava and Hamburg
Definition of the obstacle - context
In Suceava, the stake lies in convincing the carriers and industry to support carbon reduction policies by presenting the benefits of implementing these policies.
The Federal State of Hamburg has a long history of green involvement, even more so since the Green party became part of the coalition agreement with the socialists. In addition, long-established environmental associations such as NABU (Nature and Biodiversity Conservation Union) and BUND (Union for Environment and Nature Conservation Germany) are important stakeholders and the primary influencers in the city, protecting above all the climate and the people and focusing on specific actions to prevent harm to the environment.
Horizons & solutions
In Suceava, in all completed and ongoing projects, the administration is working to reduce the impact of the actions of interest groups whose interests conflict with reducing carbon emissions.
In Hamburg, the local authority has focused on how best to communicate with stakeholders and respond to their demands: organizing debates and implementing plans.
To promote a low-carbon strategy, the city issued an air pollution control plan for 2018, strongly demanded by BUND and overseen by the BUE (the environmental authority). In that regard, the plan also includes traffic regulation measures to reduce emissions, a project supported by the environmental associations and other local stakeholders.
Address concretely the question of unsustainable behaviour and lifestyles (Obstacle 10)
OBSTACLE: The difficulty of addressing the question of unsustainable behaviour and lifestyles (NIMBY, etc.) and of giving incentives to change
Addressed by the cities of Hamburg, Lille, Torino
Definition of the obstacle - context
Hamburg tries to involve every citizen in changing their habits, in particular through recycling projects: teaching them about recycling issues and how to increase recycling behaviours, and implementing educational programmes to incorporate such behaviours into citizens' everyday life. The city has a lot of freedom, having the possibility to enact legislation to overcome these systemic problems through enforcement (procedures, construction permits...) and especially to penalize wrongful behaviours.
Examples of actions: direct communication with companies, and informing people through awareness activities in schools and education.
In Torino, the residential sector is responsible for 40% of total CO₂ emissions. Bonuses, incentives and deductions for the residential sector are provided by the 2018 Budget Law.
Lille seeks to understand what would accelerate change, encourage new eco-responsible behaviors, compatible with a low-carbon approach, on a massive scale.
Horizons & solutions
Hamburg has made efforts to promote its green strategy by leading important projects on emissions reduction. The districts have pushed forward their recycling strategies and responsibilities, increasing waste sorting to ease the process and installing refundable-bottle return points, which improve plastic recycling.
Furthermore, companies are also cooperating to reduce their emissions, setting concrete targets for the future and including them in their strategies.
In Torino, beyond financial incentives, the environment department developed «Adotta comportamenti sostenibili» ('adopt sustainable behaviours'), a web page with guides on what everyone can do in daily life. This project provides brief guidelines for specific occasions in daily life, built around four main actions, 'Lower – Turn Off – Recycle – Walk': how to adopt sustainable behaviour at work, at home or in the city.
Examples of actions:
- Ecological Sundays is an initiative dedicated to health, wellbeing and relation between health and environment.
- Punti Acqua SMAT (box for distribution of drinkable water) where citizens can use Km0 water reducing the impact of plastic bottles. The box offers information about quality of water and initiative of water saving;
- The metropolitan network of Green Public Procurement started in 2003 and now has more than 40 partners. Citizens are also involved in planting new trees, an occasion to generate new sink areas for pollutants.
In Lille, l’Agenda des solutions sets out the overall framework for action to change the behavior of citizens and municipal officials and to develop low-carbon actions. Several initiatives exist, such as the Positive Energy Families challenge, Carbon Conversations, and regular information and events on energy, mobility, waste and urban agriculture.
Follow a long term approach (Obstacle 11)
OBSTACLE: Decisions are taken while taking into account only short-term outputs/consequences. How can local authorities be encouraged to take a long-term approach when making decisions that have an impact on low-carbon objectives?
Addressed by the cities of Katowice, Suceava
Definition of the obstacle - context
In Suceava, the planning documents produced at the local level aimed at the short- and medium-term development of policies targeted at reducing carbon emissions, with action plans generally guided by the sources of financing available in the pre-accession and post-accession phases of Romania's integration into the European Union.
This is an obstacle because no overall long-term vision for cutting carbon dioxide emissions at the local level has been implemented and monitored.
In Katowice, the strategy is specified in long-term documents such as the LCEP and the Plan for the supply of heat, electricity and gas fuels. Such documents are developed taking into account the development plans of energy companies and energy systems development strategies.
Long-term goals are primarily related to:
- elimination of coal-based heat sources or replacement of old inefficient coal-burning devices with more ecological but still coal-based boilers;
- installation of renewable energy sources;
- reduction of energy consumption by thermal insulation of buildings,
- reduction in the consumption of energy and water carriers through the use of monitoring and operating systems in public buildings.
The instruments used are mainly financial incentives and information; educational activities are also carried out.
Long-term activities are associated with risks, such as obtaining funds for investments or problems with finding contractors. The potential risk to health is a strong motive for action, so low-carbon policy is prioritized together with anti-smog policy. However, once good air quality (in terms of particulate matter) is achieved, continuing the low-carbon policy may become difficult.
There is a lack of involvement of individual energy users – individuals and companies – in low-carbon policy. This is influenced by high costs and by residents' lack of conviction about renewable energy sources. Despite the possibility of obtaining subsidies, the majority of people are not interested.
Horizons & solutions
In Suceava, within the Municipality administration, a working group has been set up to produce the reference data and the list of possible actions aimed at reducing carbon emissions. This working group includes the heads of the Environment Bureau as well as the Communication Office / European Projects Office, having the role of coordination between the different sectors and stakeholders involved in the development of individual projects. This working group should consider in the future that the next generation of strategies (Sustainable Urban Mobility Plan, Integrated Urban Development Strategy, Sustainable Energy Action Plan etc.) also include the long-term analysis (by 2050). Cooperation between different departments works very well.
In Katowice, barriers within the city administration are associated with gaining budgetary and non-budgetary funds for the implementation of the LCEP tasks. The city applies for funds to realize the LCEP. The possibility of using private funds in the form of public-private partnerships is being considered and analyzed.
An important barrier is the city authorities' lack of means to influence external energy companies operating on the market to participate in creating and implementing a long-term low-carbon strategy. The activities planned in the LCEP involve the cooperation of gas and heat supply companies. In practice, a user's change of fuel from coal to gas or network heating depends on the supplier's acceptance of the new connection to the grid and on the cost-effectiveness of the new connection point. The implementation of planned activities related to transforming the coal-based economy towards other fuels, including natural gas, is ineffective when the gas company considers a new connection not economically viable. Representatives of the city participate in conversations and meetings between residents and energy enterprises to facilitate agreement.
Cope with political changes (Obstacle 12)
OBSTACLE: Political discontinuity (following changes of majority in local authorities) as well as short term political time horizons and objectives are often not in line with the long-term, continuous effort that an energy transition towards a low-carbon future demands
Addressed by the cities of Lille, Torino
Definition of the obstacle - context
In Lille, housing and social mix policy followed for several years has sometimes led to the secondary importance of sustainable development and the construction of a low-carbon city. It is therefore necessary to ensure the municipality's commitment on these issues in the medium and long term.
In Torino, over the years the city administration has shown a strong commitment to deploying an environmental strategy and promoting more sustainable development. In 2016, after a long period of political stability, there was a drastic change in leadership at the level of the local administration. Even if attention to environmental problems is not entirely new, in order to face current challenges and issues (e.g., climate change, air pollution) the new government has decided to invest more attention and resources in deploying a more comprehensive low-carbon strategy.
Horizons & solutions
In Lille, this policy now seems to be evolving to take more account of sustainable development (see obstacle 13).
In Torino, the recent political changes have not significantly influenced the implementation of low-carbon policies. However, the focus of the political strategies has shifted slightly to address key issues such as mobility and energy.
An inter-departmental roundtable focused on climate change has been created. Different departments and services have been involved to cooperate, to investigate current policies and good practices, to update and prepare a more effective plan with mitigation and adaptation measures, and to coordinate the actions of each actor involved. This working group has recently been instituted.
Ensure commitment and motivation to the low-carbon strategy (Obstacle 13)
OBSTACLE: The low-carbon strategy exists officially but is not a priority. Neither the local authority, nor the local stakeholders are committed to it.
Quite often, local authorities decide on climate change strategies without real motivation. In some cases, the law obliges local authorities to elaborate plans; in others, plans exist merely to follow a trend. The same is true for other organisations or businesses. Greenwashing is an issue to be dealt with. How can it be ensured that a low-carbon strategy meets the commitment of local actors and that stakeholders stay motivated to achieve it?
Addressed by the cities of Hamburg, Katowice, Lille, Suceava
Definition of the obstacle – context
In Hamburg, if the commitment and motivation of the stakeholders are not high enough, then the local administration is to blame. The Senate and of course the city are the ones who implement such strategies, and it is their duty to make sure everyone is on the right path. They therefore have to work together, combine strengths and reach these common targets. In that regard, the administration and each district have to increase the possibilities and capacity for communication between stakeholders; otherwise the picture becomes too broad.
In Suceava, there are specific preoccupations of the local authority to reduce carbon emissions, but these are found in different plans and strategies, not in a single strategy dedicated solely to the goal of reducing carbon emissions. A local integrated strategy that includes all measures aimed at reducing carbon emissions in various fields (transport, residential buildings, public buildings, industry, other areas) has not yet been achieved.
There are several measures / projects with objectives targeted to carbon emission reducing that have been proposed in action strategies / action plans and are at an advanced stage of maturity. This demonstrates a strong commitment and a strong motivation of local authority for carbon reduction strategies.
In the current situation, the only entity involved in developing carbon reduction strategies and implementing them is the local authority. Stakeholders are not interested in this global issue. The main causes include lack of communication, lack of information about the measures that can be applied and the negative consequences of not applying them. There are no issues of conflict or lack of trust.
In Lille, the housing and social mix policies followed for several years have sometimes relegated sustainable development and the construction of a low-carbon city to secondary importance. It is therefore necessary to ensure the municipality's commitment on these issues in the medium and long term.
The city of Katowice was one of the first cities in Poland to develop a LCEP, a voluntary plan. However, the local authority has limited organizational and financial capacity to implement the low-carbon policy. The first important barrier is a lack of human resources. In the case of air protection, emissions of substances harmful to health, such as sulphur oxides, particulate matter and heavy metals, are controlled and limited (for example under the Clean Air for Europe programme, CAFE). The city authorities are obliged to comply with limit values for pollutant concentrations, but no such limit values exist for carbon dioxide emissions. Greenhouse gas emissions are neither part of the environmental impact assessment of projects nor a basis for decisions and permits. Neither entrepreneurs nor local governments treat the low-emission strategy as a priority; it is crucial for them to meet numerous legal requirements and to bear the costs required by law, e.g. health and safety and environmental charges. In practice, the city administration has limited staff and is responsible for many tasks from different areas, which reduces the possibilities for effective monitoring of results.
Horizons & solutions
In Hamburg, the local authorities are also involved in making progress, working together with different departments such as those for environment and energy, economy, education, etc. To lead a green policy, the city has placed a lot of confidence in its industries, based on voluntary commitment. Voluntary commitments enable companies to decide for themselves which measures will achieve climate change mitigation targets most successfully and to make an active contribution to climate change mitigation. They thus acknowledge their responsibility for protecting the natural environment and, at the same time, the future of Hamburg as a centre of business and employment. Letting industries set their own goals is a way for the city to show flexibility on how each stakeholder implements a green policy. The highest reductions in CO₂ emissions were achieved through voluntary commitments by industry (88,000 tonnes) and by the expansion of large-scale bioenergy plants (81,000 tonnes).
In Suceava, the lack of local involvement does not have a decisive influence on the city administration, which is developing a project to implement an integrated action plan covering all measures aimed at reducing carbon emissions. This plan will also include an energy audit to determine the areas of intervention and the implementation needs for the actions to be proposed. Several departments are involved in providing the data needed to analyse the current situation and will be consulted when drawing up the plan. Cooperation between the departments involved works well, without communication bottlenecks. In all projects targeting carbon reduction, the local authority has undertaken campaigns for the population and other stakeholders: information sessions were organized, and the information was disseminated on the authority's website and in the local press. Similar actions will be organized for the integrated strategy, which will be subject to public consultation and debate. As a cross-cutting effect, a local culture of involving citizens and other stakeholders in carbon reduction issues will be developed.
In Lille, the urban planning document of the metropolitan area, “PLU2”, includes new or stricter environmental recommendations. Commitments have been made, notably under the Covenant of Mayors, to achieve long-term objectives. Replicating the public building renovation scheme in other sectors would ensure the continuity of a low-carbon strategy.
In Katowice, involvement in low-carbon policy requires multidirectional activities: economic, legal and social. In order to increase motivation and involvement in the low-carbon strategy, the city implements activities in five areas related to air protection: 1. Support for the poorest inhabitants; 2. Introduction of new legal regulations; 3. Increased investments in municipal facilities; 4. Subsidies for residents; 5. Pro-ecological education. Activities related to sustainable transport include: replacing 100 buses with new ecological or electric vehicles for over PLN 120 million, constructing 4 transport hubs for over PLN 200 million, a new tram line, and a city bike network.
Use of proper indicators (Obstacle 14)
OBSTACLE: Relevant indicators for sustainability or a low-carbon approach are difficult to find. Often indicators are not comparable, or data is missing, and most of the existing methodologies are debated. However, it is important to measure and evaluate progress, and local authorities are looking for methods to support this work.
Addressed by the cities of Hamburg, Katowice, Suceava, Torino
Definition of the obstacle - context
Hamburg has been creating its own software to collect data and information and to increase data and indicator sharing. The Senate has commissioned the Coordination Centre for Climate Issues with monitoring the measures, financial controlling and CO₂ monitoring. The indicators are shared between the different departments. The administration wanted to gather all the data in order to monitor progress and enhance the controlling measures.
Katowice has identified these main problems: data availability, the lack of procedures of reporting and gathering necessary data as well as the lack of devices enabling measuring the real effects instead of calculated, estimated effects. Various indicators are used for different tasks and areas of activities but there is no common methodology for converting, for example, kWh of electricity into CO₂ emissions. The lack of such methodology leads to discrepancies and incomparable results. There are no procedures imposing the obligation to report emissions of greenhouse gases to the city. Such procedure was implemented by the Silesian Voivodeship in case of air monitoring in the Air Protection Program, but it doesn’t cover carbon dioxide. There is an obligation to report data by business entities, which is a valuable source of data and methodology.
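The missing methodology described above is, at its core, a conversion by emission factors: energy consumption per carrier multiplied by a carrier-specific factor. The sketch below illustrates the idea only; the factor values and carrier names are hypothetical placeholders chosen for illustration, not official Polish or EU figures.

```python
# Illustrative sketch of an emission-factor conversion (kWh -> kg CO2).
# The factors below are hypothetical placeholders, not official values;
# a real inventory would use nationally published factors per carrier.

EMISSION_FACTORS_KG_PER_KWH = {
    "electricity": 0.8,    # hypothetical grid-electricity factor
    "natural_gas": 0.2,    # hypothetical factor for gas heating
    "district_heat": 0.3,  # hypothetical factor for network heat
}

def co2_emissions_kg(energy_kwh: float, carrier: str) -> float:
    """Convert energy consumption of a given carrier to kg of CO2."""
    return energy_kwh * EMISSION_FACTORS_KG_PER_KWH[carrier]

# Example: a household using 3000 kWh of electricity per year
print(co2_emissions_kg(3000, "electricity"))  # roughly 2400 kg CO2
```

Agreeing on one such factor table across departments and energy companies is precisely what makes results comparable; without it, the same kWh figure yields discrepant CO₂ totals.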
In Suceava, there are difficulties in identifying the most suitable indicators for describing actions aimed at reducing carbon emissions. Given the multiple fields in which carbon emission reductions are pursued, it is difficult to find indicators that are relevant in all areas at the same time. Other local actors are confronted with the same difficulties.
In Torino, the difficult process of choosing the relevant indicators from among many, and setting an accurate benchmark for each of them, can be a major obstacle. In the environment department, the benchmarks currently derive from national norms and standards. Integrating citizen-led and expert-led approaches to developing sustainability indicators is another challenge to be considered within the assessment framework.
Horizons & solutions
Hamburg has to upgrade its transparency policy by supplying citizens with readable facts that give meaning to these numbers, and to overcome the problem of improper indicators by setting achievable goals and increasing communication between stakeholders so they can share their results.
In Suceava, the choice of data and the calculation of the indicators are done according to the methods set out in the monitoring plan of each strategy. There is a close cross-cutting collaboration between the different departments that own and manage the data on which the indicators are calculated.
Torino is trying to select the most relevant and significant indicators and is defining the corresponding benchmarks. The first step is to analyze the baseline scenario and identify the relevant obstacles. The city is organizing various stakeholder roundtables involving experts and the public administration to improve the use of sustainability indicators, and it is addressing the above-mentioned obstacles by setting up focus groups involving stakeholders from different services. Many initiatives and projects take into account the issue of using proper indicators: CESBA MED and the Action Plan for Sustainable Energy of the City of Turin.
Katowice has started to prepare a functional model for monitoring activities defined in the LCEP as well as updating and evaluating the document. One of the objectives is also to establish cooperation with energy companies in order to establish rules for obtaining data on fuel and energy consumption.
A special Task Force was established in the city with employees of the following departments: Environmental Development Office, Department of Buildings and Roads, City Development Department, Faculty of Transport and European Funds Department. The Environmental Development Office is responsible for the monitoring and reporting on the implementation of tasks in the budget year. Most of the indicators are collected and calculated by the Environmental Development Office.
Build a sustainable financial strategy (Obstacle 15)
OBSTACLE: The needed investments for thermal retrofitting of buildings, local renewables, public mobility infrastructures, etc. are considerable and available business models are not appropriate
Addressed by the cities of Hamburg, Katowice, Lille, Torino
Definition of the obstacle - context
Hamburg has to increase its cooperation with the different departments responsible for funding activities. The city actively collaborates with the IFB, the investment bank of the city of Hamburg, as well as with the fiscal authority and the authorities for economy and environment. The Federal State of Hamburg benefits from its own funding programmes, using federal funds.
In Katowice, a decreasing population, variable national policies and the limited financial framework of European programmes do not provide a sustainable financial strategy for a low-carbon economy. The climate and energy budget is related to the low-carbon economy plan. The tasks planned for implementation in the LCEP are included in the current yearly city budget as well as in the long-term financial forecast. **The stability of funding sources is not ensured**: some tasks are financed from the city budget and others from external funds, so obtaining these funds determines whether the tasks are implemented.
In Lille, today, there is no clearly identifiable cost accounting for actions to tackle climate change and energy transition. It is difficult for the citizen or local partners to assess the City’s investments in the field. Similarly, as there is no major political programme, the financial resources are dispersed over a large number of projects which, although relevant, make it difficult to measure the financial resources implemented.
In Torino, the main obstacle is that **the return on investment (ROI) period is long**. Bigger projects sometimes get funded rather than small ones, since many decentralized small projects struggle to find financing models because they are too small to be financed separately. For this reason, retrofitting actions need to be centralized across eligible buildings in the same district. **There is no specific budget** for the low-carbon strategies, even though it is acknowledged that one is needed in order to realize these strategies and to increase staff skills. The lack of budget leaves fewer human resources to deal with low-carbon strategies.
Horizons & solutions
In Hamburg, the financing of sustainability projects and **strategy has been transferred mainly to the IFB**, the investment bank in Hamburg, which also manages the funds for the low-carbon strategy. In recent years, companies, locals and authorities have focused especially on **replacement investments and guaranteed financing approaches for targeted projects**. In that regard, stakeholders have concentrated on improving housing by making buildings more energy-efficient through refurbishment works, putting in place green-roof strategies that combine the urban development aims of a growing, densely populated city with environmentally friendly building, and working with companies such as Aurubis AG to fund programmes for renewable heating systems.
In Katowice, the implementation of the low-carbon strategy is monitored by the Environmental Development Office and the Working Team. Financial indicators of tasks are included in the LCEP. There is another possible mechanism to finance public tasks from private capital within a public-private partnership. However, the use of this instrument is not popular in Poland yet.
Lille is beginning to implement new innovative financing models such as *intracting*, an internal fund for the energy renovation of municipal buildings. This could be replicated for other projects.
Torino is preparing an energy annex to its planning documents, named the “Energetic Annex”. This document will provide guidelines to citizens to support them in requalifying their own apartments in terms of energy retrofitting.
Evaluate the economic and social aspects (Obstacle 16)
OBSTACLE: The evaluation of actions and projects is often based purely on financial aspects and takes into consideration neither externalities (negative or positive) affecting the local territory nor a long-term approach.
Addressed by the city of Torino
Definition of the obstacle - context
In Torino, one of the main obstacles is the lack of effective communication with citizens and stakeholders. It would be important to properly collect their views, willingness and expectations regarding a post-carbon city. Moreover, stakeholders and the public administration find it difficult to adopt participatory methods, such as the qualitative and quantitative methodologies well known in the research field, that could help to overcome this obstacle. All of this translates into a scarcity of available quantitative and qualitative data.
Another obstacle for the public administration is the lack of proper indicators able to measure social and economic externalities. Producing and finding new indicators requires new research, which in turn requires substantial funding.
Horizons & solutions
In Torino there is a local reflection about innovative ways to approach economy and sustainability shared among public administration, researchers and third sector. Action Plan Torino 2030 (a sustainable and resilient vision of the future), supported by the metropolitan urban centre, aims at realizing a city focused on the citizen's well-being. To do so, the plan provides coordination and communication tables to reach economic and environmental sustainability introducing circular economy.
In the urban planning field, a powerful instrument is the SEA (Strategic Environmental Assessment; in Italian VAS, Valutazione Ambientale Strategica), whose objective is to assess the environmental effects of plans while improving public participation.
Sometimes the new well-being indicators (in addition to traditional demographic indicators) provided by the National Institute of Statistics are used to evaluate changes in citizens' attitudes. However, they have proved insufficient to capture overall sustainable behaviour.
Another interesting indicator is the so-called “Time budget”, also produced by ISTAT: the objective of the analysis is to understand citizens' habits in their personal lives, every day for a year. Unfortunately, such analyses are very expensive for the public administration and are therefore rarely used. Currently, due to the recent change in the Turin administration, a shared strategy on monetary and non-monetary indicators has not yet been defined.
Select actions that have highest “low carbon” potential (Obstacle 18)
OBSTACLE: Most often, action targets the so-called low-hanging fruit, while the most necessary measures are ignored because they are politically hard to carry, difficult to implement, or require deep societal and economic changes.
Addressed by the cities of Hamburg, Katowice, Lille, Suceava
Definition of the obstacle - context
Hamburg has prioritised individual action areas to enable measures to be taken. The different stakeholders implementing the climate plan are highly interested in statistics for monitoring the city's ongoing progress. Working with research institutes, the city has drawn up a comprehensive strategy and defined targets, which have been implemented with the assistance of national funding programmes and European ones such as the ERDF Programme 2014-2020. Progress is now visible in growing green mobility (e.g. car-sharing, bike stations and battery stations) and in incorporating the action plan into the local economy.
In Katowice, the actions planned for implementation at the LCEP were selected based on the inventory results. The activities with the largest possible ecological effect were selected – so that the most beneficial ratio of ecological efficiency to the costs incurred is achieved. On the other hand, tasks planned under the LCEP are limited as a result of the limited competences of the local authority. In addition to the LCEP, an important document is the Air Protection Program for the area of the Śląskie Voivodeship aimed at achieving the levels of permissible substances in the air. The program does not cover greenhouse gas emissions. Nevertheless, activities undertaken under this program are beneficial to a low-carbon economy.
In Lille, several exemplary low-carbon projects (rehabilitation of the Fives Cail brownfield, Operation Concorde, heritage renovation scheme, development of the green belt) have been developed, but communication on the “low-carbon” aspect of these projects needs to be improved.
In Suceava, measures that have a major impact on the reduction of carbon emissions require considerable effort in financial, social and political terms. Difficulties stem mainly from limited economic resources. Investments in environmentally friendly public transport and infrastructure have already been identified as having great potential for reducing carbon emissions. Projects are prioritized through multi-sector analyses, though not globally, but only in the areas for which strategies exist.
Hamburg is already engaged in plenty of projects to promote low-carbon strategies for the near future. The city has been working with local authorities within the Hanseatic city to develop a “Climate smart city”. Projects such as the INTERREG IVC project CLUE (Climate Neutral Urban Districts in Europe), which finished at the end of 2014, developed a good-practice guide with recommendations on integrating climate factors in urban development.
But the city also faces issues when measures are ignored or contested by some stakeholders, often environmental associations, who are torn between measures that act for the environment in one respect but against it in another. For example, in 2008 the airport was finally linked to the city by metro: on one side this increased the use of public transport to get to the airport, but it also boosted air traffic in Hamburg, raising CO₂ emissions.
In Katowice, the actions with the highest “low carbon” potential require the activation of citizens. The competences of the City Office mostly concern the public sector: the activities planned in the LCEP included energy management and the thermo-modernization of public buildings. However, the share of public buildings in the city's total emissions is very low, so such action should rather be seen as good practice for owners of private buildings to follow. In the housing sector, replacing heating sources and thermo-modernization have the highest carbon-reduction potential. To overcome the barrier of reaching the housing sector, the city offers subsidies, but also conducts numerous promotional and educational campaigns aimed at residents: various events, workshops, Energy Days and the Eco-responsible Picnic increase their involvement, raise awareness of possible activities and build knowledge about energy efficiency. To address administrative barriers related to residents' insufficient knowledge of energy technologies and financing possibilities, the Municipal Energy Center (MCE) was opened in September 2018. The facility is a place where residents can obtain the necessary information regarding, for example, available subsidies, heat sources and pro-ecological practices.
In Lille, the selection criteria for low-carbon lighthouse projects have yet to be determined. It is important to systematically state the energy and environmental ambitions of urban projects.
In Suceava, local stakeholders are involved in prioritising actions through consultation and public debates. Examples of successfully prioritized projects that have already been implemented:
- Electromobility – electric vehicles for a "green" municipality;
- Modern and efficient management of public lighting in the Suceava Municipality;
- Capitalization of the historical monument Suceava Royal Court for the local, regional and national tourist circuit using alternative energy sources.
Test replicable pilot projects (Obstacle 19)
OBSTACLE: Implementing a low-carbon strategy at local level requires a step-by-step approach. Conducting a pilot low-carbon project could help the local authority and stakeholders to experiment together and demonstrate the relevance of action.
Addressed by the cities of Katowice, Lille, Suceava
Definition of the obstacle - context
Katowice identified 3 main barriers to successfully conducting a pilot low-carbon project:
- lack of strategy for the dissemination and implementation of the results of pilot projects,
- lack of effective cooperation between public and private sectors (local government - business-science) and information flow on the results of pilot projects,
- lack of appropriate financial and legal incentives for implementation, including public-private partnership.
There is no specific strategy for implementing pilot projects or for using their results in subsequent projects. Pilot activities were included in the LCEP with the aim not only of implementing them, but also of using their implementation to test the basic elements and develop a concept for carrying out similar projects on a larger scale.
There is no special pilot project team. Pilot projects included in the LCEP, depending on the area they concern, are implemented by various departments of the city.
In Lille, many innovative low-carbon projects are being developed: the multifunctional district of Fives Cail, the Maison de l'Habitat durable, a functional prototype of the semi-detached house, and Live Tree, a living laboratory for the energy and societal transition in the Vauban district. Exchanges between project teams help avoid repeating certain mistakes. The difficulty lies in the post-construction phase, in monitoring and evaluating these innovative projects.
In Suceava, two pilot projects, "Modern and efficient public lighting management in Suceava Municipality" and "Suceava Electromobility / electric vehicles for a green Municipality (e-Vehicles)", have been implemented successfully. The results show significant decreases in costs, energy consumption and greenhouse gas emissions, confirming the viability and success of the projects and taking a step towards becoming a "smart city".
Katowice
There is no procedure/instruction to assess the replication potential of pilot projects or follow-up activities. The replication potential of pilot projects related to energy management in buildings is assessed on the basis of the results obtained, described in contractor reports, taking into account:
- effectiveness of the actions undertaken
- usefulness
- efficiency in terms of energy (energy savings), economic (costs and savings) and environmental (reduction of pollution and greenhouse gas emissions).
In addition, opinions of users and building administrators regarding the thermal comfort of rooms and the ease of use of the monitoring system are taken into account.
The city:
- disseminates information and knowledge related to energy management in buildings addressed to local stakeholders (these activities could be included in the operation of the newly opened Municipal Energy Center),
- takes part in current events enabling contact with representatives of science, business, administration and social organizations,
- participates in international projects related to the thematic area of pilot activities, such as MOLOC, AWAIR and AdaptCity.
In Lille, more quantified indicators are needed to assess these innovative projects and promote their replicability.
In Suceava, the pilot projects are managed by the European Integration and Development Strategies Department. There is no transversal team to handle only these pilot projects. The management is usually provided by people from this department.
There are laws, strategies and programmes at higher governance levels that support and encourage local authorities to implement pilot projects with carbon-reduction targets.
Make action attractive (Obstacle 20)
OBSTACLE: Unsustainable lifestyles and decisions are the majority. How to communicate so as to provoke change of habits/attitudes?
Addressed by the cities of Hamburg, Katowice, Torino
Definition of the obstacle - context
In Hamburg, helped by innovation and new services, notably within its duty of care, the administration could amplify the use of public transport and investment in electric cars, and further intensify its green message through education and within companies and industries, encouraging them to make larger efforts.
In Katowice there are three most important reasons that significantly hinder the achievement of the LCEP:
- A low level of ecological awareness of the local community, inappropriate attitudes and behaviours, low knowledge about emission sources, threats and health effects
- Financial issues: high costs of energy carriers, high-efficiency devices and RES, energy poverty of a large group of residents
- Technical issues: technical condition of houses, public buildings and heating systems, lack of a widely developed energy management system, and the poor condition of heat and gas supply infrastructure and networks, which makes it difficult to shut down local coal-fired boilers.
These three obstacles refer to all stakeholders.
In Torino, one of the main obstacles for the public administration is the lack of proper social and economic indicators able to capture the effects of planning policies. Another obstacle is the lack of citizens’ participation in the communication tables.
The most effective activities are supported by local associations, neighborhoods and citizens’ unions as bottom-up activities.
There is no specific public office dedicated to communication about low-carbon strategies and initiatives. However, there is a participatory association related to the public administration, named “Urban Center Metropolitano”, whose goal is to communicate projects and initiatives of urban regeneration and to promote citizen involvement. The association reports a lack of interest by the public administration in communicating low-carbon projects.
Historically, the public administration used to involve citizens in urban regeneration projects. Periodically it organizes communication tables and/or online participation sessions, but actual participation in those initiatives is quite low.
In Hamburg, within ten years the city has gained in attractiveness, in Germany but also in Europe. The city has shown real results towards ambitious targets, improving its citizens' lives in both the short and long term. In implementing green strategies on a local scale, the city is able to have direct contact with multiple stakeholders, creating trustful partnerships and implementing one policy for the whole by reaching everyone.
In Katowice, awareness-raising actions and financing incentives include:
- Expansion of the information and education portal
- Organization of education and information campaigns on effective use of energy, reduction of pollutant emissions, and renewable energy sources
- Conducting social campaigns related to effective and environmentally-friendly transport
- Campaign for children
- Continuation of activities related to co-financing the replacement of heat sources in single and multi-family residential buildings
- Fighting energy poverty: financial help for the least affluent inhabitants
- Continuation of thermomodernization of public buildings and implementation of energy management system
- Investments in the sector of sustainable transport
- Establishment of the Municipal Energy Center in September 2018
In Torino, inter-departmental round tables are the main instrument used to share and define policies among departments. A new, effective strategy needs to be defined, also considering the size of the public administration. The city managed the BSinno (Boosting Social Innovation) project, whose purpose is to create a network allowing public administrations to promote social innovation, through and outside the public sector, and to build an urban social innovation ecosystem that can effectively help public authorities to become European hubs and propose models of public and private social innovation.
IRTE
Vehicle Rollover
IRTE (Institute of Road Transport Engineers), is one of the most respected names in UK transport, and has always been recognised as an impartial voice of industry.
IRTE publishes an industry-leading technical journal, *Transport Engineer*, every month. Its web version [www.transportengineer.org.uk](http://www.transportengineer.org.uk) contains a searchable editorial archive, daily online news updates, a supplier directory, an e-zine newsletter, jobs, events, whitepapers and more.
IRTE hosts regular technical seminars and forums and works alongside DfT to promote efficiency and best practice. Recent events have covered biofuels, trips and falls from vehicles, truck operation, fuel efficiency and the Road Safety Act.
IRTE’s technical committee produces regular industry guidance on key topics. See page 16 for IRTE’s range of guidance documents, all of which are free to download from [www.soe.org.uk](http://www.soe.org.uk).
IRTE members come from a wide variety of transport-related roles. These include workshop managers, fleet engineers, transport managers, company directors, apprentices and technicians in the light and heavy goods vehicle, and bus and coach sectors.
IRTE is one of four professional sectors within the professional engineering institution the Society of Operations Engineers. The Society supports and encourages its 16,000 members throughout their careers and is committed to their ongoing growth and professional development. It is licensed by the Engineering Council for registration at EngTech, IEng and CEng levels and by the Society for the Environment at CEnv level. It has recently been granted a licence at REnvP level.
We’re at the frontline of the construction and infrastructure industries, producing and supplying an array of construction materials. With over 200 sites and around 3,700 dedicated employees, we’re home to everything from aggregates, asphalt, ready-mixed concrete and precast concrete products. On top of that, we produce, import and supply construction materials, export aggregates and offer national road surfacing and contracting services. A full range of products which will help you work sustainably, safely, professionally and profitably.
We’re also a proud member of LafargeHolcim, which is the leading global building materials and solutions company with around 70,000 employees in over 80 countries. It holds leading positions in all regions with a balanced portfolio of developing and mature markets.
CEMEX is a global building materials company that provides high-quality products and reliable services. CEMEX has a rich history of improving the wellbeing of those it serves through innovative building solutions, efficiency advancements, and efforts to promote a sustainable future.
In the UK, CEMEX UK operates a comprehensive national supply network to ensure that quality materials and services are available to customers locally. Our reputation for reliability and unrivalled technical expertise has been built up over 80 years serving the UK construction industry. CEMEX employs around 2200 people in the UK who work to our core values that underpin our commitment to corporate and social responsibility and sustainability.
Hanson UK is a leading supplier of heavy building materials to the construction industry. It is split into four business lines – aggregates (crushed rock, sand and gravel), concrete, asphalt and contracting and cement – which together operate over 300 manufacturing sites and employ more than 3,500 people. For more information, visit: www.hanson.co.uk
Hanson UK is part of the HeidelbergCement Group, one of the world’s largest integrated manufacturers of building materials and solutions, with leading market positions in aggregates, cement and ready-mixed concrete. Around 54,000 people at more than 3,000 locations in over 50 countries deliver long-term financial performance through operational excellence and openness for change. At the centre of its actions lies responsibility for the environment and HeidelbergCement leads the cement sector in the area of climate protection, having been awarded a place on CDP’s 2019 and 2020 Climate Change A-list. It has committed to reduce net CO2 emissions per tonne of cement by 30 per cent by 2025 (based on 1990 figures) and will realise its vision of carbon neutral concrete by 2050 at the latest. For more information visit: www.heidelbergcement.com
TJ Transport is the bulk haulage arm of TJ, serving the construction, building materials and waste industries with external transport solutions for their products and waste. Over 20 years of working closely with these industries has established TJ Transport as the leading bulk haulage provider in the Southern Region. For more information visit www.tj-waste.co.uk/tj-transport-company
Preface
There are several thousand Large Goods Vehicle (LGV) and Public Service Vehicle (PSV) road traffic accidents reported in the UK every year. Studies indicate that around 5% of these involve a vehicle rollover. Although the proportion of rollover incidents is low, the outcomes are profound: the mass and forces involved can result in significant harm to people and property, and in accompanying vehicle and asset damage. The combined costs (injury, damage, recovery, consequential loss and reputational damage) place the issue in a category demanding preventive action.
Additionally there are significant detrimental effects on other road users with delays and associated expense as well as clean-up and road repair costs.
Vehicles with a particularly high centre of gravity, for example concrete mixers, and those with reduced rigidity such as articulated vehicles are more susceptible in certain conditions to rollover because of their design configuration, shape and load position.
Rollovers typically occur during cornering, rapid lane or road-position changes, and in low or adverse road-surface grip conditions. In a turn, centrifugal force acting through the vehicle’s centre of gravity causes it to lean. The magnitude of this force increases with speed and with the sharpness of the turn, and if it grows large enough it results in a rollover.
‘Rollover threshold’ is the term for a truck’s ability to resist rollover. The value is the lowest lateral (centrifugal) acceleration that causes the truck to tip over when travelling steadily along a curved path. A vehicle’s rollover threshold is directly affected by the way the vehicle is set up (load, tyre pressures, suspension etc.).
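As a rough illustration of how these quantities interact, the sketch below compares the lateral acceleration demanded by a steady curve with a simplified static rollover threshold. The formulas (v²/r for lateral acceleration, track width divided by twice the CG height for the threshold) are standard textbook approximations, not taken from this guide, and the vehicle figures are purely illustrative; real thresholds are lower once suspension and tyre compliance are accounted for.

```python
G = 9.81  # gravitational acceleration, m/s^2


def lateral_acceleration(speed_ms: float, curve_radius_m: float) -> float:
    """Centripetal acceleration (felt as centrifugal force) on a steady curve, m/s^2."""
    return speed_ms ** 2 / curve_radius_m


def static_rollover_threshold(track_width_m: float, cg_height_m: float) -> float:
    """Simplified static rollover threshold in g: the lateral acceleration at which
    the resultant force passes outside the tyre contact line. Assumes a rigid
    vehicle with no suspension or tyre compliance, so it overestimates reality."""
    return track_width_m / (2.0 * cg_height_m)


# Illustrative figures only: a laden truck with a 2.0 m track and CG at 1.8 m,
# taking a 40 m radius curve (roughly a roundabout) at 50 km/h.
threshold_g = static_rollover_threshold(2.0, 1.8)      # ~0.56 g
demand_g = lateral_acceleration(50 / 3.6, 40.0) / G    # ~0.49 g

print(f"threshold: {threshold_g:.2f} g, demand: {demand_g:.2f} g")
print("rollover risk" if demand_g >= threshold_g else "within threshold")
```

Because demand grows with the square of speed, a modest increase from 50 to 60 km/h raises the required lateral acceleration by 44%, which is why approach speed dominates the risk factors discussed below.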
Driver behaviour and error are the main causes of rollovers, often due to speed, distraction, fatigue, load condition, misinterpretation of road layout, weather and mechanical condition. It is therefore key that drivers are supported in and educated about the risks. They must be trained and actively managed to reduce and eliminate rollover events.
| Section | Page |
|-------------------------------|------|
| Why do rollovers occur? | 6 |
| Rollover prevention | 10 |
| Future developments | 14 |
| Legislation | 15 |
| References | 16 |
| Sponsor | 16 |
Why do rollovers occur?
Design, structure, load and centre of gravity place LGVs more at risk of rolling over, particularly when cornering or encountering soft verges. Drivers need training in the fundamentals of the causes of rollovers. They must understand the risks and limitations of their vehicle and how speed, load, mechanical condition, distractions, road and weather conditions affect vehicle stability. Appreciating the very serious outcomes of a rollover, and the importance of wearing seat belts, is crucial when encouraging driver best practice.
> The Physics
- The taller and more top-heavy an object, with a higher centre of gravity, the more likely it is to tip over
- If the centre of gravity falls outside the base of the structure, it will topple over
- A truck will roll over if its centre of gravity moves outside the base of the vehicle
Risk factors affecting rollover events:
> **Driver error**
There are several factors that can be attributed to driver error. The most common is insufficient or ineffective training. Misjudging a corner can result in the vehicle entering it too fast.
Lack of attention can also contribute to vehicle rollover. Drowsiness, distractions or simply not assessing the path ahead can result in sudden awareness of danger, leading to a rapid steering input to avoid the danger, destabilising the vehicle. There are also situations where the driver either runs the vehicle onto a soft verge or gets pulled into a rollover condition by the run-off. Impacting a kerb or a sudden load shift can also undermine stability.
> **Sudden direction change**
When drivers are faced with an unexpected event they can react instinctively and often take rapid evasive action. This might destabilise a vehicle and can create a rollover situation. Drivers must remain aware of the road conditions including other road users and adopt a dynamic risk assessment approach to driving. Being aware of the surroundings and the impact they can have on vehicle condition is key to providing opportunities to manage and eliminate a possible rollover.
> **Excess speed**
Excessive speed for the conditions is a primary cause of rollover. Safe speeds at roundabouts, corners and bends are relatively low; if approach speeds are too high, the likelihood of rollover increases.
> **Cornering**
A high proportion of rollovers occur during cornering. Due to the higher centre of gravity, and low rollover threshold, entering a corner at excessive speeds encourages the vehicle to lean and increases the risk of a rollover.
> **Oversteering**
A variety of factors can lead to oversteering. As well as entering a corner at excessive speed, or reacting to a sudden awareness of danger, it can also happen as a result of changing lanes too abruptly, or of over-correcting, where the driver turns too much and then applies corrective steering that exceeds the stability characteristics of the vehicle.
> **Soft verges**
If the driver accidentally runs off the road onto soft or uneven ground, it is instinct to quickly turn the steering wheel to bring the truck back onto the road. However, this is one of the worst things to do at highway speeds: the driver needs to allow the truck to slow to a safe speed and then make a gradual, controlled return to the highway if possible.
> **Jack-knifing**
Jack-knifing is the folding of an articulated vehicle so that it resembles the acute angle of a folding pocket knife. The primary reasons for jack-knifing are the level of road surface grip or equipment failure. Wheels lock due to braking and poor grip from adverse driving conditions. Depending on the speed the vehicle is travelling, jack-knifing can result in vehicle rollover.
> **Load**
The height of the centre of gravity of the load directly affects the vehicle’s centre of gravity – therefore altering the rollover threshold. This can be because the load is inadequately secured or loaded incorrectly. Certain vehicles are at greater risk; for example concrete mixer trucks with a high centre of gravity and moving load, tippers which are loaded excessively to one side and double-deck trailers particularly if a larger percentage of the load, or a heavy load, is incorrectly placed on the top deck of the trailer. Tall pallet loads with poor load security can move and alter the centre of gravity creating instability. Failure to accommodate and correct load distribution on multi-drop work can result from poor delivery planning and inadequate training.
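The effect of load placement on the overall centre of gravity can be sketched with a mass-weighted average of the individual CG heights. This is a generic physics relation rather than a formula from this guide, and the masses and heights below are hypothetical, chosen only to show the contrast between a low-stacked and a high-stacked load.

```python
def combined_cg_height(masses_and_heights: list[tuple[float, float]]) -> float:
    """Height of the combined centre of gravity of chassis plus load(s):
    the mass-weighted mean of the individual CG heights.
    Each entry is (mass_kg, cg_height_m)."""
    total_mass = sum(m for m, _ in masses_and_heights)
    return sum(m * h for m, h in masses_and_heights) / total_mass


# Illustrative figures only: a 9 t chassis with its CG at 1.0 m,
# carrying the same 8 t load stacked low or stacked high (e.g. top deck).
chassis = (9000.0, 1.0)
low_load = (8000.0, 1.2)
high_load = (8000.0, 2.4)

print(combined_cg_height([chassis, low_load]))   # ~1.09 m
print(combined_cg_height([chassis, high_load]))  # ~1.66 m
```

In the simple static picture, rollover threshold scales inversely with CG height, so raising the combined CG from about 1.1 m to 1.7 m cuts the threshold by roughly a third, which is why a heavy load placed on the top deck of a double-deck trailer is singled out as a risk.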
> **Road design and conditions**
Road design and conditions can also contribute significantly to vehicle rollovers. Roundabouts, adverse cambers, on- and off-slips, dual-carriageway contraflow lane changes, single and multiple bends, and soft or damaged verges on narrow roads all create conditions that can contribute to rollover. Such conditions are often not conducive to LGV stability; road designs do not always have LGVs in mind and are not always effectively signposted.
Off road conditions vary from ideal paved surfaces right through to those that are unable to support the weight of large vehicles. Rain or surface water can destabilise these access paths or roads and drivers must remain vigilant to the conditions.
> **Adverse weather**
The most obvious weather associated with vehicle rollover is strong wind. The probability of a vehicle rolling over in windy conditions is increased where there is a high centre of gravity.
Other weather conditions affecting the road surface (snow, rain, ice) contribute to vehicle rollovers. If the contact between the tyre and road surface is inhibited, skid conditions may result. Some high-sided vehicles and trailers, especially when not carrying a full load, are more susceptible to rollover.
> **Mechanical condition**
It is extremely important to have the appropriate suspension settings aligned to different situations. Incorrectly set ride height, incorrect condition and pressures for air suspension units, and failure to reset the ride height control valve after loading/unloading all increase the likelihood of vehicle rollover.
Tyres may also be a factor; several cases of vehicle rollover have been traced back to under-inflated tyres. Cornering with under-inflated tyres results in the vehicle leaning more. Worn tyres also pose a problem. The cornering ability of a vehicle can be affected by the limited grip a worn tyre offers especially on low friction surfaces.
Brakes can contribute to rollover risk; for a driver to have maximum control over a vehicle, it is important that the braking system is in correct working order. Anti-lock braking systems (ABS), electronic braking systems (EBS) and electronic stability programs (ESP) all help in preventing vehicle rollover, as they can automatically adjust the braking pattern for each wheel, giving the driver greater control.
It should be noted that the combined effects of ABS, EBS, ESP, yaw rate sensors and steering angle sensors can apply corrective action to assume control from the driver and reduce the chance of rollover.
Rollover prevention
There are many ways in which vehicle rollover can be prevented; however, the most important is improving driver behaviour. Competent and proficient operators recognise their responsibility and statutory obligation to ensure drivers are adequately trained. Educating drivers about the risk of vehicle rollover, and the ways in which they can prevent or limit the chances of it happening will help reduce the number of accidents each year.
Vehicle design is another key part of the solution; developing and specifying equipment with the aim of lowering the centre of gravity would help to reduce incidents.
Vehicle maintenance forms an integral part of preventing vehicle rollovers. The roadworthiness of vehicles depends on them being mechanically sound and fit for purpose. Regular inspections, both daily pre-use checks and scheduled periodic maintenance inspections are essential. They differ in scope and depth but each type provides a means to verify the mechanical condition of the vehicle.
> Daily pre-use vehicle checks
Drivers must be trained and competent in visual inspection of the vehicle. Pre-use visual checks cover many vehicle elements, but from a rollover-prevention point of view they must include:
- steering
- suspension
- body and load security
- brakes
- tyres
- wheels and fixings
Varying weather conditions in winter, particularly ice forming on top of vehicles, can increase instability. Drivers have an obligation to report concerns, and operators must ensure compliance with effective reporting and rectification processes.
> Periodic maintenance inspections
A structured and scheduled maintenance plan must be established for every vehicle. Periodic maintenance inspections (PMIs) are vital and are a statutory obligation. Operators, along with their transport managers, maintenance providers (workshop technicians) and drivers, are responsible for providing and operating systems that ensure roadworthiness.
Facilities for conducting periodic inspections must be appropriate and provide access to inspect the underside of the vehicle. Correct tools and equipment to maintain and repair vehicles are also vitally important in producing a roadworthy vehicle. The IRTE Workshop Accreditation scheme provides assurance that inspection, maintenance and repair facilities are adequate and fit for purpose in both respects.
Vehicle repair and maintenance technicians must be trained in, familiar with and competent in inspection processes, reporting and repair practices. All irtec technicians are extensively trained, competent and committed to ongoing professional development, and they are independently and periodically assessed to ensure their skills remain valid and appropriate.
Inspection frequency is established and forms the basis for scheduled inspection and maintenance programmes. The time between inspections is determined by vehicle use, distance and type of work. The programme must be developed and documented, and the condition of the vehicle both before and after each inspection recorded by the operator.
There are a number of practical steps operators and drivers can take to reduce the risk of a vehicle rollover. These include basic vehicle design – is it capable of coping with the physical forces of normal operation? How will the vehicle be driven and operated? Will the mechanical condition of the vehicle along with any imposed load affect stability?
The vehicle must be fit for purpose – low-centre-of-gravity designs, such as step-frame semi-trailers, help when the proposed load has a high centre of gravity. Operators can improve vehicle safety through design and should consider specifying or adopting systems and equipment that go above and beyond the minimum laid down by legislation.
Vehicle rollover can be prevented by educating and improving the skills of drivers, to alter their behaviour on the road. Training and education are fundamental to drivers understanding the risks and, crucially, what they can do to minimise them.
Ensuring the mechanical condition of the vehicle is to a level where safety is not compromised is key and can be achieved with both scheduled and driver checks.
Finally, the load itself often has a considerable part to play in rollover prevention. Car transporters and double deck trailers with lower decks empty and the top deck still loaded illustrate examples of extremely high centre of gravity and heightened risk of instability. Loads with high or varying fluidity (concrete mixers), petro-chemicals or hanging foodstuffs, are all examples where additional safeguards are required in both vehicle design and operation.
> Points to consider for reducing the chances of vehicle rollover
Driver awareness is paramount: drivers must remain vigilant to changing conditions – road layout, traffic density, speed. Distractions from in-cab technology must also be considered and steps taken to remove, manage or minimise them.
Drivers must adopt cautious approaches towards vehicle direction changes, corners, roundabouts and lane changes. It may seem obvious, but when direction changes are made suddenly or rapidly and combined with a high centre of gravity, the risk of rollover increases considerably. Most vehicles used on the roads today have several in-built protection systems, which operate behind the scenes, carefully adjusting the vehicle, often unknown to the driver. A rollover is more likely if a driver comes to rely on them, producing a false sense of security. That is, until something extraordinary occurs, the protection systems become overwhelmed and the laws of physics take over.
It is vital that loads are secured properly and positioned on the vehicle in a manner that provides the lowest possible centre of gravity. The latter helps to lower the vehicle’s overall centre of gravity, crucial to reducing the chances of rollover; the former ensures the load cannot move relative to the vehicle and render it unstable. The DfT has produced a code of practice, *Load Securing: Vehicle Operator Guidance* (see References for details).
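The guide’s repeated point about centre-of-gravity height can be made concrete with the static stability factor (SSF), a standard rough measure of rollover resistance: track width divided by twice the centre-of-gravity height. The sketch below is illustrative only; the masses, heights and track width are invented figures, not data from this guide.

```python
def static_stability_factor(track_width_m: float, cg_height_m: float) -> float:
    """Static stability factor: the steady lateral acceleration (in g)
    at which a rigid vehicle would begin to tip. SSF = t / (2h)."""
    return track_width_m / (2.0 * cg_height_m)

def combined_cg_height(vehicle_mass_kg: float, vehicle_cg_m: float,
                       load_mass_kg: float, load_cg_m: float) -> float:
    """Mass-weighted centre-of-gravity height of vehicle plus load."""
    total = vehicle_mass_kg + load_mass_kg
    return (vehicle_mass_kg * vehicle_cg_m + load_mass_kg * load_cg_m) / total

# Illustrative unladen vehicle: 2.0 m track width, CG at 1.1 m
print(round(static_stability_factor(2.0, 1.1), 2))  # → 0.91

# The same 10-tonne load carried high (CG at 2.6 m) vs low (CG at 1.4 m):
cg_high = combined_cg_height(9000, 1.1, 10000, 2.6)
cg_low = combined_cg_height(9000, 1.1, 10000, 1.4)
print(round(static_stability_factor(2.0, cg_high), 2))  # → 0.53
print(round(static_stability_factor(2.0, cg_low), 2))   # → 0.79
```

Lowering the load’s carrying height markedly raises the tipping threshold, which is why low load placement and low-centre-of-gravity vehicle designs matter so much.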
> **Tyres – Tyre Pressure Monitoring**
Under-inflated tyres produce a number of challenges in terms of vehicle performance and environmental impact – increased emissions and fuel consumption, for example. Key for this guide, however, is the increased instability created by excess leaning to one side. Worn or under-/over-inflated tyres in adverse weather conditions inhibit the ability of the steering, braking, suspension and vehicle electronic systems to correct a potential rollover condition. In service, tyre pressure monitoring systems advise drivers and maintenance teams of low pressure and potentially imminent failure. Tyre overheating due to under-inflation is a common cause of rapid deflation and possible fire; if this occurs while driving at speed or negotiating a bend or corner, the risk of a rollover increases dramatically.
> **Vehicle Stability Systems**
If a condition arises where a vehicle is at risk of rollover, the driver needs to recognise the danger rapidly and take immediate action to bring the vehicle back under control. Electronic stability programmes (ESP) constantly monitor the ride dynamics of a vehicle and intervene automatically, using the engine management and brake systems, if the vehicle is in danger of rollover. ESP is able to assess situations even more rapidly than a very experienced driver and, provided the vehicle has not exceeded the physical limits of stability, drivers can often regain and maintain control.
> The effect of having ESP switched on and off
**Operation**
ESP often comprises two main systems: a dynamic stability programme (DSP) and rollover prevention (ROP).
DSP helps drivers keep a vehicle stable, for example on wet roads, ice and snow. It takes effect in low-grip situations when there is a difference between the direction a driver intends to take and the actual direction of the vehicle.
ROP reduces the risk of the vehicle overturning in high-grip conditions, such as on dry roads.
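As a toy illustration of the DSP idea, the sketch below compares the yaw rate implied by the driver’s steering input (via a simple single-track model) with the yaw rate the sensors actually measure, and brakes one side of the vehicle when they disagree. All values, thresholds and braking choices here are invented for illustration; production ESP logic is proprietary and far more sophisticated.

```python
import math

def esp_intervention(speed_mps: float, steer_angle_rad: float,
                     measured_yaw_rate: float,
                     wheelbase_m: float = 3.8, threshold: float = 0.08) -> str:
    """Toy DSP logic: if the measured yaw rate differs from what the
    driver's steering input implies, brake one side to correct."""
    # Yaw rate (rad/s) implied by the steering input, single-track model
    intended = speed_mps * math.tan(steer_angle_rad) / wheelbase_m
    error = measured_yaw_rate - intended
    if abs(error) <= threshold:
        return "no intervention"
    if error > 0:
        # Rotating faster than the driver intends: oversteer
        return "brake outer front wheel"
    # Rotating more slowly than intended: understeer
    return "brake inner rear wheel"

print(esp_intervention(20.0, 0.05, 0.26))  # → no intervention
print(esp_intervention(20.0, 0.05, 0.40))  # → brake outer front wheel
```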
Vehicle manufacturers are continuously developing systems to enhance and support safe operation, including rollover protection. Methods and operating systems may differ from vehicle to vehicle, as can the descriptive acronym label, but a carefully chosen system over and above the minimum legislative requirement can have a marked and positive effect on vehicle stability.
**Route planning**
Proactive route planning can be an effective means to reduce rollover risk. Assessing the route for rollover risks and encouraging drivers to follow chosen routes will help. Providing instruction for speed and approach or simply avoiding areas of particularly high risk greatly assists in incident reduction.
**Telematics**
Most, if not all, LGV, bus and coach manufacturers employ sophisticated electronic vehicle monitoring systems – telematics. On-board information provides a useful insight into how vehicles are driven – fuel consumption, acceleration, deceleration, harsh braking and harsh steering, to name a few. If vehicles are not fitted with OEM telematics, aftermarket systems are available and can be retrofitted and employed successfully in this respect.
If using a system that enables the operator to download information, it is important to analyse the data, establish trends, and create change and training programmes to support and alter driver behaviour.
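A minimal sketch of that analysis step, assuming hypothetical downloaded records containing a driver ID, distance driven and a harsh-braking event count (all field names and figures invented):

```python
from collections import defaultdict

# Hypothetical telematics download: (driver, km driven, harsh-braking events)
records = [
    ("driver_a", 420.0, 2),
    ("driver_a", 380.0, 1),
    ("driver_b", 410.0, 9),
    ("driver_b", 350.0, 7),
]

# Accumulate total distance and total events per driver
totals = defaultdict(lambda: [0.0, 0])
for driver, km, events in records:
    totals[driver][0] += km
    totals[driver][1] += events

# Normalise to harsh-braking events per 100 km, riskiest first
rates = {d: 100.0 * ev / km for d, (km, ev) in totals.items()}
for driver, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{driver}: {rate:.2f} harsh-braking events per 100 km")
```

Normalising by distance matters: a driver with more raw events may simply cover more miles, so a per-100-km rate is the fairer basis for targeting training.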
Modern vehicles are designed with a high degree of safety as a key requirement. The systems employed can easily protect a driver from a potentially harmful outcome when the physics determine a rollover is imminent. If a driver regularly exceeds vehicle design limits and on-board systems are correcting the outcome, then scrutiny of the data will show the frequency and extent to which the stability control programme is being triggered.
It is important that vehicle data is reviewed and shared with drivers. It provides opportunities for focused training to avoid the risks and costs in terms of injury, possible fatalities, and vehicle and infrastructure damage. Understanding that driver input lies behind the vast majority of vehicle rollover incidents, outlining the causes and explaining prevention techniques is key. Increasing awareness will undoubtedly reduce the number of rollover incidents.
Future developments
> ADAS
Advanced Driver Assistance Systems (ADAS) is truck safety technology that seeks to ensure safer vehicles are operating on the roads. In most road incidents or accidents, driver error, driver behaviour or impairment is a contributing factor (UK Government, 2020). ADAS creates safer vehicles and drivers and ultimately aims to remove accidents from the roads.
ADAS is an electronic system that interacts between vehicle and driver. The design aim is to assist in use of the vehicle with a wide range of technologies that alert the driver to potential hazards and/or take temporary control of the vehicle if calculated reaction times are not prompt enough to avert an incident.
Safety and efficiency are major elements of the supply chain industry and the vehicles and equipment it uses. The expectation is that the scope of ADAS will increase and that it will come to be seen as a mandatory feature rather than an optional one.
Legislation
This document outlines and illustrates key factors and shows best practice for the prevention of vehicle rollovers. Manufacturers are continuously developing new safety systems to improve and enhance their products. They work closely with legislators who ultimately produce regulations for adoption on various categories of vehicles. The legislation regarding vehicle type approval (a requirement to register a new vehicle) is too extensive to list in this document.
For further information contact relevant vehicle manufacturers and:
Department for Transport
www.gov.uk/government/organisations/department-for-transport
Vehicle Certification Agency
www.gov.uk/government/organisations/vehicle-certification-agency
Driver and Vehicle Standards Agency
www.gov.uk/government/organisations/driver-and-vehicle-standards-agency
> Current requirements:
Anti-lock braking systems (ABS) allow brakes to work at their maximum without locking up. It is a requirement for all M and N category vehicles (and their trailers) to have ABS fitted.
All **new vehicles** require electronic stability control (ESC) (also called ESP), together with a yaw sensor that compares the driver’s intended movement with the vehicle’s actual reaction (i.e., it detects when the vehicle is skidding). The system applies an appropriate level of braking to allow the vehicle to recover from, or prevent, a skid.
In addition to ABS and ESC, with their control of individual braking modules, there is a range of other safety systems.
- Trailer roll stability (TRS) can predict the rollover threshold and slow the vehicle when cornering.
- Electronic brake distribution (EBD) systems monitor the load on the wheels and balance the braking as appropriate.
- The load-proportioning brake valve (LPBV) system is similar to EBD but is not electronic. It monitors the pressure in the air suspension (it can be retrofitted to mechanical suspension) and adjusts the braking pressure for each specific axle.
REFERENCES
1. Driver and Vehicle Standards Agency Guidance. Load securing: vehicle operator guidance. Updated 16 November 2020.
https://www.gov.uk/government/publications/load-securing-vehicle-operator-guidance/load-securing-vehicle-operator-guidance
2. UK Government. Statistical data set. Contributory factors for reported road accidents (RAS50). Updated 30 September 2020.
https://www.gov.uk/government/statistical-data-sets/ras50-contributory-factors#contributory-factors-for-reported-road-accidents-ras50---excel-data-tables
SPONSORS
AGGREGATE INDUSTRIES CEMEX Hanson HEIDELBERG Cement Group TJ TRANSPORT
News & Notes
MEMBERS’ MAGAZINE
ISSUE 255 | AUTUMN 2023
IN THIS ISSUE
Interview with Timothy Harrison 5
ISAC Monthly Lectures 7
Back to School in Babylonia 8
Susanne Paulus
Scribal Careers in the Old Babylonian Period 14
Carter Rote
House F at Nippur 17
Tablets from House F 18
Jane Gordon
ISAC Events 23
INSTITUTE FOR THE STUDY OF ANCIENT CULTURES
1155 EAST 58TH STREET
CHICAGO, ILLINOIS 60637
WEBSITE
isac.uchicago.edu
MEMBERSHIP INFORMATION
773.702.9513
email@example.com
MUSEUM INFORMATION/HOURS
isac.uchicago.edu/museum-exhibits
ISAC MUSEUM SHOP
773.702.9510
firstname.lastname@example.org
ADMINISTRATIVE OFFICE
773.702.9514
email@example.com
CREDITS
EDITORS: Matt Welton, Rebecca Cain, Andrew Baumann, and Tasha Vorderstrasse
DESIGNERS: Rebecca Cain and Matt Welton
News & Notes is a quarterly publication of the Institute for the Study of Ancient Cultures, printed exclusively as one of the privileges of membership.
ON THE COVER: ISACM A30276; see page 11
MESSAGE FROM THE DIRECTOR
I am pleased to be writing to you as the Institute for the Study of Ancient Cultures’ (ISAC’s) new director. I am honored to return to the Institute, where I completed my doctoral studies in 1995, almost thirty years ago. It is a particularly exciting moment to be joining ISAC. At a time when the importance of humanistic scholarship is increasingly questioned and marginalized, the Institute’s unwavering commitment to pioneering, foundational research and interdisciplinary scholarship that contributes deep understanding to issues of pressing global concern, whether they be climate and the environment or profoundly complex social issues such as inequality and conflict, is both invigorating and immensely significant.
Earlier this year, we welcomed Marc Maillot as the associate director and chief curator of the ISAC Museum, and in September, Sheheryar Hasnain took up the critical leadership post of associate director of administration and finance vacated by Brendan Bulger. We also welcomed two new faculty members, Sumerologist Jana Matuszak and Egyptologist Margaret Geoga, and in January we will welcome Derek Kennet, who will be joining ISAC as the inaugural Howard E. Hallengren Professor of Arabian Peninsula and Gulf States Archaeology. Two further faculty searches, in Egyptian archaeology and ancient Near Eastern art, are currently also underway. The fall also witnessed other important transitions, most notably the retirement of Theo van den Hout, the Arthur and Joann Rasmussen Professor of Hittite and Anatolian Languages. The Institute’s new name, the renewal of its faculty, and the addition of new staff reflect the remarkable transformation underway at ISAC, and speak to the University’s continuing commitment to ISAC and its historic mission.
This issue of News & Notes features the current ISAC special exhibition Back to School in Babylonia, which highlights the (re)discovery and exploration of a scribal school, dating to the Old Babylonian period (ca. 2000-1595 BCE), at Nippur, first uncovered in 1951-52 during the Institute’s long-running excavations at the site. As Prof. Susanne Paulus and her team brilliantly describe and illustrate, the experience of students learning to read and write Akkadian almost four thousand years ago resonates deeply with our modern experience. The visual and tactile connection to exercise tablets that preserve the learning efforts of scribal students from a distant past, the smudges of their fingers, their corrections, even teeth marks (frustration?), powerfully convey this quintessentially human experience. The materiality of these documents also illustrates their value as pedagogical tools, and the importance of context. The excitement and interest the exhibit has generated, especially among students (of all ages), further highlights the importance and relevance of the scholarship (research and teaching) we do at ISAC. As Carter Rote and Jane Gordon further demonstrate in their articles, the Old Babylonian “school” at Nippur accentuates the enduring and fundamentally human quality that is learning and the pursuit of knowledge.
I am deeply grateful for the generous support provided by our members, donors, and partners, which makes possible the groundbreaking scholarship reflected in the Back to School in Babylonia exhibition and undergirds all of our work toward our mission to enhance scholarly understanding and public awareness of the places, peoples, and heritages we study. May you all enjoy a peaceful and restorative holiday season.
TIMOTHY HARRISON
Director
ISAC Tours: Central Asia with Gil Stein
June 2024, exact dates and itinerary coming soon
Travel on the Silk Road through Uzbekistan, Turkmenistan, and Tajikistan as we explore the crafts and foods along this monumental trade route. Stop by the Silk Road’s most renowned oases and marvel at monumental architecture and decorative arts. Tour leader Gil Stein will guide you through the region with a unique look at current ISAC cultural heritage initiatives.
For more information, and to be placed on the tour list, please contact Matt Welton at firstname.lastname@example.org.
On October 12, family, friends, and colleagues gathered in the ISAC Museum to celebrate Prof. Theo van den Hout’s retirement. Theo joined the University of Chicago and the Department of Near Eastern Languages and Civilizations community as a professor of Hittitology in 2000. In his time here, he has played an integral role in training countless students—including myself—in Hittite and Anatolian languages, literature, art, and culture. Theo was my first Hittite instructor when I came to the University of Chicago as a PhD student in Hittitology, having been introduced to Hittite only a year earlier. To say I had no idea what I was getting myself into would be a drastic understatement. From my first day of Elementary Hittite 1, Theo has been a role model both for his excellent scholarship and for his encouraging attitude toward others.
Theo’s impact on the field and the University was evident at his retirement party, where colleagues, students, volunteers, and ISAC Advisory Council members shared their stories of working with Theo and their gratitude for the time, knowledge, and humor he has devoted to ISAC. They also presented Theo with several gifts, including an Eataly gift card, a model of a Hittite stamp seal, and a poster depicting Theo with the Hittite deity Sharruma, based on a Hittite rock inscription from the site of Yazilikaya.
Although Theo is stepping down from teaching and from his role as interim director of ISAC, he will continue as chief editor of the Chicago Hittite Dictionary, a job he is looking forward to having more time for.
BREASTED SALON CALENDAR 2024
Wednesday, February 21 | Timothy Harrison
Director, Institute for the Study of Ancient Cultures; Professor of Near Eastern Archaeology, University of Chicago
Wednesday, March 20 | Felipe Rojas Silva
Associate Professor of Archaeology and the Ancient World & Egyptology and Assyriology, Brown University
Wednesday, May 15 | Theo van den Hout
Arthur and Joann Rasmussen Professor Emeritus of Hittite and Anatolian Languages, University of Chicago; Executive Editor, Chicago Hittite Dictionary Project
Wednesday, July 17 | Naomi Harris
PhD Student, Hittite, University of Chicago
Space is limited—please register in advance!
Questions? Contact Bill Cosper: email@example.com
Timothy Paul Harrison began his tenure as director of the University of Chicago’s Institute for the Study of Ancient Cultures—West Asia & North Africa (ISAC) on September 1, 2023. A renowned academic leader and scholar with decades of research experience in the Middle East, he previously served two terms as president and past president of the American Society of Overseas Research and two terms as chair of the University of Toronto’s Department of Near and Middle Eastern Civilizations.
Having earned a PhD and a master’s degree from UChicago’s Department of Near Eastern Languages and Civilizations (NELC) and a bachelor’s degree from Wheaton College, his new position is, in a way, a homecoming for him.
**You have previously served in several academic leadership positions. Did you ever imagine that being director of ISAC might one day be in your future?**
I certainly dreamed of being a professor of Near Eastern archaeology at the Institute when I was a student, but I don’t know that I had the confidence to imagine ever becoming the director, and in more recent years, my professional commitments and responsibilities made such a possibility increasingly remote and unlikely.
**How do you expect your work with ISAC’s “exceptional community of scholars” will support you in furthering your research on the rise of early society complexity in the ancient Near East, perhaps beyond your current focus on the Bronze and Iron Age in the Levant?**
I am certain that the incredible disciplinary scope and caliber of the scholarship at ISAC will greatly enrich and expand my intellectual world, and that of my own ongoing research. Indeed, this is already happening as I meet with colleagues and learn more about their research. This is truly one of the great benefits and privileges of being director: the opportunity to encounter new worlds of knowledge and learning as I venture throughout the building.
**In addition to being director of ISAC, we see that you will also be serving as a professor in NELC and the College. Looking ahead, what might be the courses you will be offering?**
I will have a reduced teaching load, due to my directorship responsibilities, but I am planning to teach a graduate seminar on the archaeology of the Bronze and Iron Age Levant/eastern Mediterranean, focusing on an evolving range of topics or themes (e.g., early urbanism and state formation, craft production, social organization and domestic or “household” archaeology, and the role of religion in the life of ancient communities). I am also interested in teaching an undergraduate course on cultural heritage and conflict in the Middle East, and I look forward to contributing to the Common Core curriculum in the College.
**Please tell us about the Computational Research on the Ancient Near East project launched in December 2020. We see that it is a “global research collaboration that aims to cultivate and analyze archaeological data.” Can you give us a brief description of one or two of its current projects and the associated challenges and, if possible, current results? Also, are you planning to continue your involvement in the coming years?**
The CRANE Project ([https://bit.ly/CranePro](https://bit.ly/CranePro)) is an international interdisciplinary research collaboration comprised of archaeologists, historians, climate scientists, paleo-environmental specialists, and computer scientists that seeks to draw on the vast and rich repository of knowledge produced by over a century and a half of field research and exploration in the Middle East to provide deepened insight into issues of contemporary concern, ranging from climate change to deeply rooted social issues such as inequality and conflict. One such CRANE project has involved a collaboration between archaeologists and a team of climate scientists building a high-resolution regional climate model for the Eastern Mediterranean that can introduce high temporal and spatial resolution to the study of changing climatic and environmental conditions in the region over the course of the Holocene. Working on powerful supercomputers, the climate scientists are able to build climate models with increasingly finer resolution, but they depend on historical data to test, or “ground truth,” their models, data that CRANE researchers are able to provide. The CRANE Project is nearing the end of its second phase (CRANE 2.0), and consultations are underway to launch a third phase of the project.
As for my involvement, I am the CRANE project director and will continue my involvement in and leadership of the project. ISAC’s David Schloen is a CRANE coinvestigator; he and ISAC/UChicago have been a CRANE institutional partner, involved in the project from its conception.
**What about field work? Do you plan to continue leading the Tayinat Archaeological Project in southeastern Turkey? If not, do you anticipate being able to participate in any on-site excavating?**
I am actively involved in a number of long-running field projects, including the Tayinat Archaeological Project, which has a direct connection to excavations conducted by the Institute back in the 1930s as part of the Syrian-Hittite Expedition, and I very much plan to continue these projects.
However, I will need to step back from the “day-to-day” running of them and hand these responsibilities over to colleagues. We have great teams in place, but I expect that stepping back and letting others take more direct leadership in these projects will be more than a little challenging for me.
**Not being a stranger to Chicago, can you share any thoughts you’ve had about how to leverage your local knowledge and connections to attract more visitors to the ISAC Museum and create opportunities to convey the richness and value of the ancient Near East to a wider audience?**
I believe the study of the ancient world, and the rich collections and knowledge the Institute has played a foundational role in discovering, are profoundly relevant and important to the broader public today, indeed, more than ever. Community engagement thus must be at the core of our mission, and in my experience, the most effective way to engage the local community is through friendship and outreach, building relationships with both institutions and individuals. I believe the Institute has made great strides in this regard since my student days decades ago, but we must continue to build these relationships and work to continue embedding the Institute in the life of our local communities.
**What have you missed most about Chicago from your student days? Now that you once again are living here, what are you most looking forward to revisiting in the Chicagoland area, and perhaps in the nearby Midwestern states?**
Returning to Hyde Park has been a very happy homecoming. My wife, Leann, and I have enjoyed going for walks through our neighborhood and biking along the lakefront, attending community events (such as the Taste of Chicago), and visiting local museums (the DuSable) and old familiar food haunts (the Medici and Harold’s Chicken Shack).
**You were quite active in local grammar school outreach during your student days at ISAC (then the OI). Do you have any pointers to offer to docents as they prepare to engage with our local community?**
I don’t know that I can add anything that docents don’t already know, but certainly I have found that communicating a sense of excitement and wonder or awe about the incredible richness and diversity of the lived experience of ancient communities and the ancient world has always engaged the imagination of students and audiences.
**Finally, what can the ISAC community—faculty, staff, members, volunteers—do (or not do) to best support you during your tenure?**
I have felt warmly welcomed by the ISAC community. I hope everyone will continue to feel welcome approaching me. I would love to hear from each of you, whether daily greetings or on any subject of concern, positive or negative. Please know that my door is open to all.
---
In our last issue, we featured a photo collage of ISAC’s name change celebration. We neglected to give credit to the photographer, Charissa Johnson at Charissa Johnson Photography. We thought we would highlight her work with a couple more images from the celebration. To view more of Charissa’s work, please visit [https://www.charissajohnsonphotography.com](https://www.charissajohnsonphotography.com).
ISAC MONTHLY LECTURES
This fall, we kicked off our 2023–24 ISAC lecture series with four events that explored themes featured in our latest special exhibition, *Back to School in Babylonia*.
Join us at 7:00 p.m. (Central) on the first Wednesday of each month for the remainder of the 2023–24 academic year as we explore the latest ISAC fieldwork in Spain, developments in the translation of the Meroitic language, and magic and ritual in the ancient world.
**February 7 | Carolina López-Ruiz, University of Chicago**
Revisit the Phoenicians in Iberia and celebrate ISAC’s participation in the Málaga Project as we welcome Carolina López-Ruiz, professor of ancient Mediterranean religions and mythology in the UChicago Divinity School and Department of Classics. López-Ruiz’s research centers around comparative mythology and cultural exchange in the ancient Mediterranean, exploring the idea that mythological narratives and religious practices act as loci for cultural exchange and provide mechanisms for groups in close contact to negotiate tensions, adapt to change, and bolster their resilience.
**March 6 | Claude Rilly, Sorbonne University**
We are excited to welcome Claude Rilly, one of the world’s foremost scholars in Meroitic writing, for a lecture that approaches the translation of this ancient language from Sudan. Early in his career, Rilly demonstrated that Meroitic belonged to a specific linguistic family, settling a question debated for more than a century. Join us for a lecture that will borrow from Rilly’s career as the director of the French Archaeological Unit in Khartoum and the head of the archaeological mission in Sedeinga, Sudanese Nubia.
**April 3 | Jeffrey Stackert, University of Chicago**
ISAC welcomes UChicago’s Jeffrey Stackert, professor of Hebrew Bible, for a lecture titled “Judah in the Shadow of the Assyrian Empire.” A biblical scholar who situates the Hebrew Bible in the context of the larger ancient Near East, Stackert focuses his research on the composition of the Pentateuch, ancient Near Eastern prophecy, cultic texts, and ancient Near Eastern law. His first book, *Rewriting the Torah: Literary Revision in Deuteronomy and the Holiness Legislation*, received the 2010 John Templeton Award for Theological Promise.
**May 8* | Korshi Dosoo**
*second (not first) Wednesday of the month*
Join us as we welcome Korshi Dosoo, leader of the project “The Coptic Magical Papyri: Vernacular Religion in Late Antique and Early Islamic Egypt” at the Julius Maximilian University of Würzburg. Dosoo will present the lecture “Christian Egypt and Its Pagan Past: Perspectives on Pharaonic Civilization from Coptic Magic.” Dosoo’s research focuses on magical and lived religion in Egypt from the Ptolemaic to the Mamluk periods as revealed by papyrological and epigraphic sources.
**June 5 | Daniel Schwemer**
We end our 2023–24 lecture series with a visit by a second scholar from the Julius Maximilian University of Würzburg, Daniel Schwemer, professor and chair of ancient Near Eastern studies and research associate of the School of Oriental and African Studies. Schwemer’s research interests include Akkadian, Hittite, the history of religion in the ancient Near East, ancient Near Eastern magic and medicine, and ritual. Schwemer’s published works include the three-part *Corpus of Mesopotamian Anti-witchcraft Rituals*.
Each of these lectures will be streamed live on ISAC’s YouTube channel exclusively for ISAC members. Every month you will receive a members’ e-mail with a link. If you miss the livestream, an edited version of each lecture will be posted to our YouTube channel in the weeks that follow.
When you hold the clay tablet with the unassuming number ISACM A30276 (fig. 1) in your hands, you notice right away how heavy it feels compared with the much smaller inscribed objects you usually handle—it feels important, but you can’t help but wonder how someone could have worked on it without the hand that held it going numb over time. Your eye then moves to the left: the practiced hand of a teacher has quickly written a multiplication table for the number 432. Reading the numbers, you struggle a bit, for they are written in the Babylonian sexagesimal system, in which 432 appears as 7,12 ($7 \times 60 + 12 = 432$): $432 \times 1 = 432$; $432 \times 2 = 864$; $432 \times 3 = 1,296$; $432 \times 4 = 1,728$; $432 \times 5 = 2,160$. Perhaps these calculations are easy for you, or perhaps you struggled with multiplication in school like I did.
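The base-60 notation the student was practicing can be sketched in a few lines. This snippet (the helper name is ours, purely illustrative) decomposes a number into sexagesimal “digits” and reproduces the doubling pattern of the teacher’s table:

```python
def to_sexagesimal(n: int) -> list[int]:
    """Decompose a positive integer into base-60 digits,
    most significant first, as a Babylonian scribe would write it."""
    digits = []
    while n > 0:
        digits.insert(0, n % 60)
        n //= 60
    return digits

# 432 is written 7,12 (7 x 60 + 12); the table continues from there:
for k in range(1, 6):
    print(k, "x 432 =", to_sexagesimal(432 * k))  # e.g. 1 x 432 = [7, 12]
```

Running the loop shows why sexagesimal tables took practice: 864 becomes 14,24 and 2,160 becomes 36,0, so the student had to memorize digit pairs with no obvious decimal pattern.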
Our Babylonian student had to practice this pattern in the space on the tablet to the right of the teacher’s example. If you look closely, you can see that the tablet is thinner on that side. Each time the student wrote down the numbers in the hope of finally memorizing them, they erased the numbers afterward by dragging their fingers over the clay and using the small vessel at their side to keep it moist and suitable for writing. After the final round, they erased it once more—after all, the pattern was by then safely stored in their mind. When you put your own hand in the lines left by the student’s fingers—using the 3D-printed copy on display in the exhibition *Back to School in Babylonia*—you can still see and trace those marks. Probably your hand is larger than the schoolchild’s who wrote the tablet.
Figure 1. Obverse (front side) of ISACM A30276. Photo by Danielle Levy. Annotations by Marta Díaz Herrera.
Figure 2. Clay plaque of a bull-man or kusarikku found in House K, ISACM 29440. Photo by Danielle Levy.
Who were they, these ancient students? If you could ask them, they would proudly tell you their names and those of their fathers—important scribes, priests, and administrators in Nibru, or Nippur as we call it today. They would tell you that their city is home to the most prominent god, Enlil “the Great Mountain,” and likely point out his temple in the background—the Ekur, with its enormous Duranki (the name of the ziggurat, or temple tower) connecting heaven and earth. The more senior students could tell you many myths surrounding the head of the Babylonian pantheon, explaining his importance for all Sumer and Akkad (i.e., Babylonia). As for the current king, they would point to Samsu-iluna in Babylon (1749–1712 BCE), who reigned over the land at the time. In school, the students learned about the tumultuous history of Nippur, whose rule had changed multiple times between the kings of Isin, Larsa, and finally Babylon during the previous two centuries. These students preserved the kings’ names and achievements—some more credible than others—by writing out their hymns.
The names of most of the ancient students, along with information about their identities, such as their families, ages, and genders, are lost to us. We assume that most of the students lived in the immediate neighborhood of the school—their homes were partially excavated by the Joint Expedition to Nippur of the University of Chicago and University of Pennsylvania in 1951–52. Some objects from their houses, such as the beautiful clay plaque depicting a mythical bull-man (fig. 2), found their way into the exhibition *Back to School in Babylonia*. The legal records left by the neighborhood inhabitants tell us more about the social status of the students’ parents, who were members of the local elite with connections to the many temples in Nippur. Fathers who were scribes likely taught their own children or sent them to scribal school.
Scholars still debate when children started school. Some, influenced by our own system, favor an early starting age of five or six, while others believe that students started school at age ten or eleven—an interpretation based on the analysis of tooth marks left by a bored student biting into an exercise tablet! Leaving the tricky question of age aside, the gender of the students is an equally fraught point for the modern scholar. We do not know the gender of the individual students who wrote our tablets, and the word “schoolchild” is gender-neutral in the language of the scribal school: Sumerian. Scholars also know that some women, such as certain *nadītum* priestesses in Nippur, were literate and educated. But we cannot overlook the facts that Babylonian society was patriarchal at its core and the scribal schools in Nippur were places where masculinity was constructed and celebrated. I have little doubt, for instance, that the teacher who wrote the satirical story of a day in the life of a schoolchild was imagining a boy rather than a girl as the protagonist.
But let’s return to our student practicing multiplication. What would a typical school day look like for pupils like this one? Likely, they rose early in the morning and had a quick breakfast before heading off to school—to avoid harsh punishment, they had to be on time. Luckily, the school was only a few houses away. They entered the school through a narrow, probably arched entryway into a small courtyard that was still cool in the morning (fig. 3). They greeted their teacher, the “father of the school,” and fellow students, then sat down, and instruction began. The schoolhouse was small, like an ordinary home, allowing for only a handful of pupils at a time, while the teacher and his family likely resided in the same building. Indeed, our student could smell fresh bread baking in the kitchen oven when starting the first task of the day: preparing the tablet that the teacher would later inscribe with the multiplication exercise. After getting permission, the student took clay from a box integrated into the courtyard’s architecture and started shaping the large tablet by flattening and folding sheets of clay and defining the edges. Making such a large tablet was challenging, but our student succeeded, and the teacher was doubtless pleased when he wrote the multiplication exercise for morning practice. After a short lunch break, students moved into the large schoolroom in the back of the house, where they sat on a bench constructed with recycled tablets and continued their work.
Figure 3. 3D-model top plan of House F in Level XI Floor 1. Created by Madeline Ouimet based on the excavation plans published by Donald E. McCown and Richard C. Haines, *Nippur I: Temple of Enlil, Scribal Quarter, and Soundings* (University of Chicago Press, 1967) and related data.
The texts, excavated architecture, and objects all allow us to reconstruct school life in Babylonia. When planning the exhibition, we integrated as many of these elements as possible into its design. The entire special exhibition space of the ISAC Museum (fig. 4) became the schoolhouse—or House F, as the archaeologists called it—since the gallery’s size only slightly exceeds the house’s actual dimensions. You enter the building through an archway, and light conditions change from the bright daylight outside to the dark, cool interior of a Babylonian house. The exhibition’s walls are strategically placed to mimic the rooms in House F; however, space was much tighter in the original house than modern museum design allows. Keep your eyes open for some of the installations, such as the box for tablet-making that our student used or the bench students sat on during the afternoon. The school walls were decorated with clay plaques, such as the one depicting a striding, majestic lion you may discover in a corner of the exhibit. Here, most cuneiform tablets are displayed vertically on the walls, though we are uncertain whether in antiquity they were stored on shelves built into the walls, as we know was the case in other archives and libraries in Babylonia. Most of the tablets with school exercises were not treasured items; some of them were quickly deposited in the clay box for recycling into new tablets, and others were discarded or used as building materials for walls and installations in the school. This part of the story is also visible on the tablets themselves: if you look closely, you can see that our pupil’s practice tablet broke into many parts when it was discarded sometime after completion—but let’s allow our student to finish it first.
ABOVE AND BELOW: Figure 4. Installation of the *Back to School in Babylonia* exhibition. Photos by Susanne Paulus.
Our student’s afternoon task consisted of inscribing the tablet’s reverse. Flipping the tablet vertically and dividing it into four columns, the student started writing a long list of words committed to memory over the past weeks (fig. 5): “man”—“king”—“status of the crown prince”—“minister”—“vizier”—“minister of the inner household”—and more. All these words were written with unique signs in Sumerian, the language of instruction at the school. When introducing the list, the teacher would have given explanations in Akkadian, the native tongue of our student, who now knew the signs and their readings by heart and could work swiftly through the exercise. After a while, all 126 entries were written down; however, you may notice that some of the columns came out a bit crooked. And upon the tablet’s presentation to the teacher, he complained, “Your hand(writing) is not good at all!” and set out to flog the student. But at that very moment he was interrupted by a dispute that arose between two senior students. One challenged the other: “Do you know the calculation of multiplications, reciprocals, accounts, as well as volumes? The rote recitations of the scribal school—let’s recite them! I know them better than you. Come on, position yourself as my rival! I will put an end to your insults!” The opponent answered: “Idiot! Obtuse! Obstinate!” Our student feared the worst, for correct etiquette and behavior were highly encouraged in school, and any deviation could result in a hefty flogging. Surprisingly, the teacher was pleased. He praised the students for speaking Sumerian perfectly and forgot about the flogging.
Oral instruction and conversations between teachers and students are lost to us; what survives from the Babylonian school is material evidence dominated by written sources, not the oral and interpersonal components of instruction. (Imagine reconstructing modern education solely from exercises and textbooks.) So we know the different types of lexical lists with thousands of individual entries students had to memorize, but we do not know the individual explanations the teacher gave for each entry. Oral instruction was essential, as students were schooled in the traditional but dead language, Sumerian, and not in the vernacular of the time—Akkadian, or Babylonian, which students spoke at home. Although in this period there were no native speakers of Sumerian, the language was still preferred for literature, inscriptions, religious texts, and legal documents; the latter were limited regionally to certain traditional parts of Babylonia, such as Nippur. Contemporary texts of daily life, such as letters, were composed in Akkadian, which played a minor role in education. At school, teachers punished students for speaking Akkadian, while the latter possibly learned to speak Sumerian using disputes such as the one quoted above. Though for obvious reasons we have no audio recordings of a Babylonian teacher, we did record two disputes for the exhibition that allow you to eavesdrop on the voices of the Babylonian school. For our student, who was just practicing mathematics, skills for debating in Sumerian would develop later, over time.
TOP: Figure 5. Reverse of ISACM A30276. Photo by Danielle Levy.
BOTTOM: Figure 6. Obverse and reverse of ISACM A29985. Photos by Danielle Levy.
At the end of the school day, the teacher gave our student some homework: a small, rectangular tablet inscribed with a Sumerian proverb and a calculation exercise (fig. 6)—exciting, because writing proverbs or small sayings in Sumerian meant the student had finally reached the end of the first phase of education! After writing many lists to learn all the cuneiform signs and their readings, practice vocabulary, and master the most obscure meanings of complex signs in Sumerian, our student was now practicing not only mathematics but also short sentences in Sumerian by writing legal documents and proverbs. Having reached this stage meant that the scribe-to-be would soon start working on the first ten literary compositions studied in school and discover songs of ancient kings, hymns to goddesses and temples, and fantastic stories, such as those of Gilgamesh and his friend Enkidu fighting the guardian of the cedar forest, Huwawa. Or, as the teacher would have explained it, the student’s knowledge of signs would be put to the test by entering deeper into the complex world of the scribal art. Without paying much attention, the student copied the teacher’s signs for the proverb and then attempted the calculation. The challenge was to calculate the reciprocal of the number 17 46 40—but while remembering the solution, part of the technique used to reach it escaped our pupil, who, after a while, just gave up, wrote down the solution, and underlined it twice. Work done! Upon examining the tablet after his child finally got home, our student’s father read the proverb the teacher had drafted: “A chattering scribe’s guilt is great!” Once more, the teacher had included a lesson in behavior (unwarranted though it was, in this case).
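In modern terms, the exercise runs as follows: in sexagesimal place notation, 17 46 40 stands for $17 \times 60^2 + 46 \times 60 + 40 = 64{,}000$, and its Babylonian reciprocal is the number that multiplies with it to give a power of 60. Here is a minimal Python sketch of that arithmetic (the function names are mine, and a Babylonian scribe would have used memorized factorization techniques rather than this brute-force search):

```python
def to_sexagesimal(n):
    """Return the base-60 digits of a positive integer, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % 60)
        n //= 60
    return digits[::-1] or [0]

def reciprocal(n):
    """Sexagesimal digits of the reciprocal of a 'regular' number n
    (one whose only prime factors are 2, 3, and 5), i.e., the digits of
    60**k // n for the smallest k such that n divides 60**k.
    Note: loops forever if n is not regular."""
    power = 1
    while power % n:
        power *= 60
    return to_sexagesimal(power // n)

# "17 46 40" = 17*60**2 + 46*60 + 40 = 64,000
print(reciprocal(17 * 60**2 + 46 * 60 + 40))  # → [3, 22, 30]
```

The answer the student underlined twice, 3 22 30, checks out: those digits stand for $12{,}150$, and $64{,}000 \times 12{,}150 = 60^5$.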
From the homework tablet, we can deduce that students studied mathematics and proverbs at the same time. Using many exercises like those written by our student, scholars have gained a good understanding of the Babylonian curriculum in Nippur, including the compositions taught and their sequence. Schoolchildren started with basic exercises, learning to write syllables and their names, then tackled lexical lists before working on mathematics and easy sentences, as our pupil did. Afterward, they were introduced to the rich world of Sumerian literature and learned about their history, religion, laws and regulations, rhetoric, and much more. The teachers also did their best to form the students’ characters and transform them into good human beings or exemplary members of the scribal community. In the exhibition, we invite you to follow the path of a young student from first holding a stylus through writing literature in a neat hand toward the end of the educational curriculum, when nearing graduation—though official ceremonies are not attested.
What became of the “graduates” of the scribal school? Part of the long list of professions on the reverse of the aforementioned tablet of our student included the many professional options for scribes: they worked as administrators in palaces, temples, and local towns; they were priests and musicians; and they wrote not only letters for illiterate folks but also inscriptions and hymns for kings. Perhaps our student “followed in father’s footsteps” by becoming a temple administrator who kept records for incoming and outgoing goods, daily offerings, expenses for temple personnel, and so on, or possibly trained as a lamentation priest after school and learned to appease the angry gods. Wherever our and other students went, they likely still had a bit to learn. While the teacher boasted of teaching “the totality of scribal art,” in reality much practical knowledge, such as writing and balancing an account or composing a lamentation, was not covered in the Nippur curriculum—which was, however, a solid basis for any profession.
I hope you have the chance to visit *Back to School in Babylonia* to see tablet ISACM A30276 and remember some of the endeavors of our anonymous student. While not all events described here may have happened to this particular pupil, this reconstruction is based on seventy years of scholarship on House F, and I am indebted to my colleagues worldwide for their work. Many of them have written essays for our exhibition catalog, which is freely downloadable in PDF format, with print copies available for purchase, at https://bit.ly/BacktoSchoolBook.
*Back to School in Babylonia* is on display at the ISAC Museum from September 21, 2023, to March 24, 2024. This special exhibition has been curated by Susanne Paulus, with Marta Díaz Herrera, Jane Gordon, Danielle Levy, Madeline Ouimet, Colton G. Siegmund, and Ryan D. Winters and with support from Pallas Eible Hargro, C Mikhail, Carter Rote, and Sarah M. Ware. The exhibition reunites objects excavated at Nippur and now held in the ISAC Tablet Collection, the ISAC Museum, and the University of Pennsylvania Museum of Archaeology and Anthropology. Tablets in the Iraq Museum, Baghdad, are represented by plaster casts.
Scribes are omnipresent in our conception of Mesopotamia. This statement is especially true for the Old Babylonian period (ca. 2000–1595 BCE) in what is now southern Iraq. Scribes worked in a multitude of professions, and the vast majority of written sources that survive from this period were produced by scribes in various contexts. What “scribes” did in ancient Babylonia would not be clear to a modern person relying solely on dictionary entries. For example, the *Oxford English Dictionary* defines a scribe as “a person who copies or transcribes manuscripts, esp. one employed as a copyist in ancient or medieval times. Now chiefly historical.” Rather than being mere copyists, scribes in Babylonia had careers as civil servants, administrators, jurists, teachers, military officials, or priests and priestesses (fig. 1). Scribes were involved in most aspects of governance, the economy, and religion, as they were essential in both day-to-day functions and the transmission of cultural and religious traditions. However, we have imperfect evidence regarding the roles of many scribes, and we do not yet fully understand how they received technical training for their jobs. This essay explores two important potential careers for scribes during the Old Babylonian period—the *mu’irrum* and the *šasukkum*—and examines how aspects of the curriculum found at the Edubba’a in Nippur would or would not have prepared them for their professional careers.
The Old Babylonian period marked the true death of Sumerian as a spoken language; Sumerian had flourished during the third millennium in Mesopotamia and was the first language written with the cuneiform script. During the early second millennium, Akkadian succeeded Sumerian as the *lingua franca* of the region. Akkadian, a Semitic language also written using the cuneiform script, was the common tongue in Babylonia and was used to compose royal inscriptions, letters, and other common documents, such as receipts. For this reason, it may be surprising to learn that scribes were first taught to read and write in Sumerian, not Akkadian. Scholastic texts in Sumerian have been found in different contexts across Babylonia, in cities such as Sippar, Ur, and Mari. However, the largest concentration of didactic materials from the Old Babylonian period was found at House F in Nippur’s Scribal Quarter. This structure, which is much like a regular house in appearance, was a schoolhouse, or Edubba’a, meaning “the house where tablets are distributed.” More than 1,400 tablets found in the schoolhouse were excavated in 1951–52 by the Joint Expedition to Nippur of the Oriental Institute (now Institute for the Study of Ancient Cultures, or ISAC) and the University of Pennsylvania. These documents, many of which are now on display at ISAC as part of the special exhibition *Back to School in Babylonia*, have been used to reconstruct the curriculum that was taught to scribal students in Nippur. As students completed their assignments and turned in homework written on clay tablets, tablets that were then discarded or recycled as building materials, they unwittingly preserved much for us to study about the way they understood mathematics, religion, and literature.
If students at the Edubba’a were taught to read and write Sumerian, how did they become proficient in Akkadian for their future careers as scribes? This question may at first seem difficult to answer, as the curriculum may be interpreted as teaching an elite class of priests and literati a dead language with few practical applications. However, we must remember that scribal students in Nippur were native speakers of Akkadian, meaning they already understood the grammar of the language from speaking it every day and were not learning a new language from the ground up. Furthermore, even though Sumerian and Akkadian are linguistically unrelated, writing was made easier by the fact that both languages shared sign values, logograms, and some loanwords. Beyond learning the mechanics of writing and fashioning clay tablets, one of the first steps in a student’s education was the memorization and copying of different lexical lists. Some examples include lists of geographical names and domestic animals, as well as lists of simple signs and lists of personal names (fig. 2). That these lists sometimes contained glosses, or the Akkadian equivalents of the Sumerian words, would have not only helped the students master their Sumerian vocabulary but also given them a foundation in writing Akkadian as they learned commonly used sign values. This skill could then be used to compose words and sentences in Akkadian that were not directly taught.
Just how proficient would a student become in Akkadian through the curriculum at the Edubba’a? The standard view of literacy held by Assyriologists and Sumerologists today was proposed by Niek Veldhuis, who distinguished three levels of literacy: functional, technical, and scholarly. *Functional literacy* refers to writing letters, reading receipts, and other daily uses of writing. *Technical literacy* refers to mastery of specialized vocabulary and skills for different careers, whereas *scholarly literacy* can be thought of as proficiency for academic and religious pursuits, such as composing hymns, literature, or royal documents. Students at the Edubba’a would have achieved functional literacy in Akkadian during the elementary phase of their schooling but would have needed additional training to acquire technical skills or gain scholarly literacy. Some have argued that functional literacy in Akkadian was a by-product of the grander education in Sumerian offered at the Edubba’a. However, others argue that it might have been an intended feature of the curriculum, sought out by individuals who did not wish or need to gain further literacy in Sumerian. We do not know what proportion of students chose either option, the age at which students typically began their education, or the age at which they graduated. Some have argued that students would have started as young as five or six years old and that the full curriculum may have taken as long as ten years to learn. These numbers are further complicated when we focus on students who may have attended school for only the elementary phase of the curriculum. How long did it take for a student to master the elementary phase? Even if it took five years, or half the total time that a student would have been enrolled at the Edubba’a, would an eleven-year-old then be prepared to enter the workforce as a scribe or begin an apprenticeship?
While the scholarly attention that House F and its contents have received has greatly advanced our understanding of scribal education in the Old Babylonian period, we must also acknowledge that the evidence from this single Edubba’a is not representative of the entire region or period. There are still many uncertainties about scribal education that we lack the evidence to study properly. One significant gap in our understanding is how scribes received technical training for their various roles. It has been reasonably assumed that scribes would have undergone individualized, on-the-job training or would have been taken on as apprentices by experienced scribes. The nature of this training during the Old Babylonian period is impossible to ascertain from our current evidence. What is clear is that the curriculum of the Edubba’a was well-rounded enough that students who went on to become administrators, for example, could build on the training they did receive in mathematics or in writing model contracts and, with additional instruction, take up more advanced scribal work.
In the Old Babylonian period, it is not difficult to find scribes within large administrative systems, such as the palace or the temple. However, understanding their actual roles can be more challenging. We do not yet understand some of the titles we know scribes held. In other instances, certainty about a person’s official title is difficult to achieve, as *ṭupšarrum* (“scribe”) and other titles might be used interchangeably on documents or seal impressions (fig. 3). One example of a scribal position with such a nebulous title is the *mu’irrum*.
The *Chicago Assyrian Dictionary* (CAD) defines *mu’irrum* as “commander” or “director,” but this administrator’s real responsibility was to recruit seasonal labor from rural communities and smaller urban populations across Babylonia. In this role, the *mu’irrum* negotiated with local political structures and dealt specifically with *rabiānū*, local leaders or mayors. In this capacity, the *mu’irrum* created contracts for employment, much as a human-resources representative might hire independent contractors for specific projects today. But ancient human-resources managers would not have been best described as “commanders,” and they certainly would not have held prisoners in their homes in the way that one court document describes the task of one particular *mu’irrum*: holding a cattle thief in his home for four days (*Textes cunéiformes du Louvre* 1, 29:23, 28E). This instance highlights how administrators at different levels were sometimes given responsibilities that fell completely outside the scope of their normal work. It also shows the difficulties one can encounter when trying to understand the specific functions associated with different titles, for there is no Babylonian equivalent of a modern job description.
On the other hand, some scribal positions are well understood thanks to archaeologists’ discovery of personal archives or letters written by individuals in the course of their administrative duties. For example, the title *šasukkum*, which the CAD defines as a “land registrar,” is quite well attested. A *šasukkum* worked as part of a department alongside other *šasukkū* (plural form), who in turn managed teams of field surveyors. The primary responsibility of a *šasukkum* was to keep track of land ownership. This duty required both functional and technical literacy in Akkadian, mathematical skills, and good organizational skills, as the *šasukkum* was responsible for maintaining an archive of official records regarding different properties. An accurate archive was especially important to the king, who needed to know what land could be awarded as payment to soldiers for their service. As an authority on land tenure, a *šasukkum* was likely to be mentioned in letters and legal documents involving disputes over property ownership. In this capacity, a *šasukkum* would verify the claims being made and check for potentially forged documents.
One *šasukkum*, named Šamaš-hazir, is very familiar to students of Assyriology, as letters written to him by King Hammurabi (ca. 1792–1750 BCE) appear frequently in Akkadian curricula (fig. 4). Šamaš-hazir held the office for thirteen years in the city of Larsa during the reign of Hammurabi. In some ways, Šamaš-hazir had considerable power and freedom in his role. In letters from Hammurabi ordering Šamaš-hazir to give land to various recipients, only the region and amount of land are specified. The *šasukkum*, after taking into account practical concerns such as bordering properties and canals, had the discretion to decide what land would be given to whom as compensation for their service to the king. While Šamaš-hazir is certainly the best-known *šasukkum*, likely he was not the best. In fact, Hammurabi became angry with Šamaš-hazir and in a letter accused him of giving rations to people who did not qualify to receive them and of failing to complete some of his duties. Hammurabi states, “(If) you don’t quickly give satisfaction to these messengers, well then it’s as if you were overstepping the mark. You will not be forgiven!” (*Altbabylonische Briefe in Umschrift und Übersetzung* 4, 11; translation by Baptiste Fiette).
The distinctions between the different titles and positions held by ancient scribes are important to our understanding not only of administrative systems in the Old Babylonian period but also of scribal education. To comprehend fully the pedagogical practices of schoolteachers at places such as House F in Nippur and evaluate the efficacy of the curriculum, we must understand what types of scribes were being produced and how they put their education to use outside the classroom. Hopefully, future discoveries will further our understanding of technical literacy and of how specialized training was provided to scribes for different professional roles.
**SUGGESTIONS FOR FURTHER READING**
Beranger, Marine. 2019. “Glimpses of the Old Babylonian Syllabary, Followed by Some Considerations of Regional Variations and Training in Letter-Writing.” In *Keilschriftliche Syllabare. Zur Methodik ihrer Erstellung*, edited by Jörg Klinger and Sebastian Fischer, 17–38. Berliner Beiträge zum Vorderen Orient 28. Gladbeck: PeWe-Verlag.
Charpin, Dominique. 2010. *Reading and Writing in Babylon*. Cambridge, MA: Harvard University Press.
Fiette, Baptiste. 2021. “Les surfaces des champs et des palmeraies d’après les archives de Šamaš-Hazir.” In *Pratiques administratives et comptables au Proche-Orient à l’âge du Bronze*, edited by Ilya Arkhipov, Grégory Chambon, and Nele Ziegler, 77–108. Publications de l’Institut du Proche-Orient Ancien du Collège de France 4. Leuven: Peeters.
Paulus, Susanne, ed. 2023. *Back to School in Babylonia*. ISAC Museum Publications 1. Chicago: Institute for the Study of Ancient Cultures of the University of Chicago.
House F at Nippur is more than simply a domestic house. In addition to being a place where people lived, it was also a school. It was identified as a school on the basis of the tablets found in the house, which were not saved on purpose but rather reused as building material or trampled into the floor. Prior to the discovery of House F, it was believed that schools in ancient Mesopotamia were similar to modern ones: large, purpose-built spaces filled with many students. House F, however, demonstrates that schools could be quite small, with space for only a small number of students.
The Sumerian literary text known as the *Kesh Temple Hymn* opens on a scene of spontaneous divine inspiration: Enlil, the chief god of the Babylonian pantheon and resident god of the city of Nippur, “went forth from the house,” and as he did so,
The four corners of the world grew green like an orchard for Enlil.
There Kesh was, lifting its head to him, (and) as Kesh was lifting its head among the lands, Enlil began to praise Kesh.
This scene serves as an origin story for the rest of the text: the extravagant praises of the temple in the city of Kesh that follow are Enlil’s praises from that moment.
But one other crucial detail is given in the text between the point when Enlil begins to praise Kesh and the praises themselves. We are told that
As Nisaba was the decision-maker there, from those words she twisted it together. That which was written was set on a tablet by (her) hand.
Therefore, the text contains within it the story not only of its own creation but also of its *being written down*—and by none other than Nisaba, goddess of writing and administration and the patron of scribes.
This is a fun detail in the text in and of itself, an enjoyable moment of meta-commentary in what is a particularly ancient work of literature—the earliest written version of it that we know of dates to the mid-third millennium BCE. Yet this scene of copying down words onto a tablet takes on new significance when we consider this text’s context of use in the scribal schools of Babylonia during the Old Babylonian period (ca. 2000–1595 BCE). Nisaba’s actions encapsulate the purpose to which the students training in those schools had dedicated their lives: the creation of texts recording those aspects of Babylonian life that society (or at least its scribes and the typically elite institutions and individuals that employed them) thought ought to be written down.
The *Kesh Temple Hymn* played a central role in the scribal curriculum across Babylonia. It formed part of a set of ten texts modern scholars call the *Decad*, a quasi “core curriculum” of literary texts from a range of genres that scribal students studied at the beginning of the advanced stage of their education. Having progressed from impressing large, slightly off-kilter wedges into clay to form their first signs through the increasingly confident transcription of long lists of words and signs, these students, whose native language was Akkadian, began intensively to memorize and copy down works of Sumerian literature. And their encounter with the *Kesh Temple Hymn*, the sixth text in the sequence of the *Decad*, was not the first time that students of the scribal school would have seen their own actions reflected in the texts they were studying. A proverb some of them would have copied out when they were just starting to write full sentences in Sumerian posed the question, “A scribe who does not know Sumerian—what kind of a scribe are they?”—thereby reinforcing for the students the centrality of the language they were learning to the identity they were taking on as they did so. And of course, only a scribe who did in fact know Sumerian could write down and understand this proverb about that very subject.
Other texts in the *Decad* contain images of scribal accomplishment as well. The first text of the *Decad*, a *Praise Poem of Shulgi*—a hymn celebrating a great king of the city of Ur from several centuries earlier—includes among Shulgi’s many impressive achievements his proud statement, “I am a knowledgeable scribe of Nisaba!” Thus this text conveniently suggested to trainee scribes that to be a wise scribe was to be like this illustrious king—and conversely that this illustrious king had something in common with them.
Although it was not part of this fixed curricular unit taught across the wider region, the teachers in House F at Nippur seem to have been particularly fond of another praise poem of Shulgi, as seventeen manuscripts of it were found there. In a passage from this text that is famous among Assyriologists and was copied onto a tablet found in House F (fig. 1), Shulgi proclaims,
The scribal schools will never be altered for all of eternity. For all of eternity, the places of learning will never come to an end!
The scribal student in House F who copied those words might have found it fulfilling or simply mind-boggling to contemplate how, hundreds of years after the reign of Shulgi, House F itself and the people who spent their days there participated in continuing the tradition ascribed to that king. And though it cannot be said that school has remained unaltered for eternity, just think how pleased (if not more than a little surprised!) both Shulgi and that student would be to find that people like me are still studying their words in school all these thousands of years later.
If, on the one hand, students in House F studied literary texts that connected their scribal training with illustrious kings of the past and even divine beings, they often copied out much more down-to-earth depictions of school life as well. These “school stories” take the form of catechistic dialogues about the rhythms and customs of a day at school for scribal students early in their educational journey, before they would have progressed to copying out such texts as part of their training at a more advanced stage. The beginning of one of these stories was copied down on the tablet shown in figure 2.
This story, named “Schooldays” by one Assyriologist because of its focus on everyday life at school, in some ways offers a more holistic view of the Edubba’a than the material evidence on its own can provide. It tells us not only about the objects that the students used but also about how they interacted with them, including through ephemeral actions, such as recitation, that leave no direct trace in the textual record. Scholars of texts can sometimes get too focused on texts as texts; passages like this one from Schooldays remind us that texts had oral lives as well—as passages that were recited aloud and as media of interpersonal communication between students and those around them, as illustrated by the student using the tablet to impress his dad.
The school stories also enable us to play a particularly satisfying kind of matching game. In addition to containing this manuscript of *Schooldays*, House F contained several of the “teacher–student exercises” (fig. 3) mentioned in the story, while the “round tablet” in figure 4 was found in a house nearby—perhaps because the student who wrote it took it home to show a proud parent of their own.
A similar dialogue, *School Regulations*, depicts a scene of harmonious and absorbed studiousness as different pupils go about their assigned tasks. The dialogue similarly highlights the spoken aspects of education at the scribal school:
the one reading to the other,
the one reciting multiplication tables will recite multiplication tables,
the one reciting word lists will recite word lists.
(translation by Niek Veldhuis)
It is easy to picture such a scene happening, for instance, on the benches of Room 192 in House F (fig. 5) and to imagine that the reciting students were the same ones who left behind the assignments found there (figs. 6 and 7). While doing all this reciting out loud, who knows if the students encouraged each other, or, in the words of one of the participants in a rather heated debate, which also formed part of the corpus of “school literature,” told each other, “It may be that you have written the thematically arranged word lists up to the list of professional titles, but your tongue is not adapted to the Sumerian language” (translation by Jana Matuszak).
The school literature, taken as a whole, tells one lively (if sometimes humorously exaggerated) type of story about life in the ancient Edubba’a. But the physical tablets found in the school tell their own stories too. If the school literature seems to recount evanescent aspects of the daily life of students like the ones who attended the Edubba’a in House F, the tablets those students made capture moments of life there as well.
That an accomplished scribe was one who knew Sumerian is something several textual sources clearly indicate, from the proverb defining a scribe to the one-up-person-ship regarding matters of pronunciation displayed in the debate quoted above. But the greatest proof of this comes in the accumulated tablets from the school itself, which trace students’ progressing knowledge of the language, from individual words written with simple signs to complex literary texts whose syntax or meaning still sometimes stumps researchers today.
The tablets from House F in aggregate encompass the whole spectrum of scribal student knowledge and abilities, while each individual tablet from the school contains a snapshot of a moment on a student’s journey of becoming a scribe. On the round tablet pictured in figure 4, the bottom two lines were produced by a student struggling to write the assignment—an extract with two entries from the wooden objects section of the *Thematic List of Words* (Ura)—as beautifully as the model provided in the upper two lines of the tablet. While in the model copy each individual sign is neatly written and tidily “hangs” off the horizontal line above it, the signs in the student’s version are endearingly messy, askew, and uneven. Perhaps after moving on to a proficient temple career, this student might have been embarrassed to find out that this early effort ended up as part of the historical record, but we can be charmed by it as evidence from a moment when writing was still a work-in-progress.
LEFT: Figure 3. “They assigned me my teacher–student exercise.” ISACM A30276. Photo by Danielle Levy.
ABOVE: Figure 4. “In the late afternoon, they assigned me my round tablet.” ISACM A30182. Photo by Danielle Levy.
Figure 5. Excavation photo showing Level X Floor 4 of Room 192 in House F, which included benches where the students of the Edubba’a likely sat to write their many tablets. ISACM P. 47262 (3N/216).
LEFT: Figure 6. “The one reciting multiplication tables will recite multiplication tables.” Tablet inscribed with a multiplication table of 25. ISACM A30281. Photo by Danielle Levy.
RIGHT: Figure 7. “The one reciting word lists will recite word lists.” A prism inscribed with part of the *Thematic List of Words* (Ura) listing objects made from wood. ISACM A30187. Photo by Danielle Levy.
Meanwhile, the teacher–student exercise tablet (fig. 3) is frozen in time in a way that perfectly captures how this format worked: on the left-hand side the teacher wrote a model text, and on the right-hand side the student repeatedly copied it, using their fingers as “erasers” to rub out their successive efforts. Some of the signs from the student’s last practice session remain visible on the tablet’s surface, alongside the smudges of their fingertips. These marks are not just touchingly familiar, human traces left behind by ancient Babylonians—they are data points worth analyzing in themselves. Scholars (including my colleague and fellow graduate student Madeline Ouimet) are beginning to pay more attention to marks other than writing that people left behind on tablets and what such marks might tell us about the practices of material text creation.
Sometimes these marks may simply reflect a student lost in thought. For example, one student created a doodle that perfectly exemplifies the modernist principle of medium specificity. Taking advantage of the impressionable medium of clay, the student left a row of neat fingernail impressions along the top edge of a tablet inscribed with another text from the *Decad*, the *Hymn to Nungal A* (fig. 8).
The scribal student questioned in *School Regulations* about the customs of the school declares rather grandiosely that
One knows the customs of the scribal school,
but like the unknowable horizon, the unreachable, one does not know to speak of them.
(translation by Niek Veldhuis)
In this way, the ancient Babylonian text almost parodies the situation in which modern researchers find themselves: all the innate, ancient Babylonian knowledge people carried with them and passed down to each other—knowledge of things it wouldn’t occur to someone to speak or write about—got left out of the textual record. Nevertheless, the tablets themselves tell us more than the people who made them probably ever imagined they could. Thanks to the tablets in House F at Nippur, we can catch glimpses beyond the “unknowable horizon” of centuries into the lives of ancient scribes-to-be.
*Author’s note:* This essay is indebted to the transformative research of several scholars on the tablets of the scribal school as material texts and evidence of pedagogical practices—Paul Delnero, Eleanor Robson, Steve Tinney, and Niek Veldhuis—and to the work of my colleagues on the curatorial team for *Back to School in Babylonia*, particularly Marta Díaz Herrera, Madeline Ouimet, and Prof. Susanne Paulus.
**SUGGESTIONS FOR FURTHER READING**
Robson, Eleanor. 2001. “The Tablet House: A Scribal School in Old Babylonian Nippur.” *Revue d’assyriologie et d’archéologie orientale* 95, no. 1: 39–66. A groundbreaking study of the tablets of House F in their archaeological and historical contexts.
van Koppen, Frans. 2011. “The Scribe of the Flood Story and His Circle.” In *The Oxford Handbook of Cuneiform Culture*, edited by Karen Radner and Eleanor Robson, 140–66. Oxford: Oxford University Press. Taking a different approach to contextualizing tablets produced by Old Babylonian scribal schools, this article examines the social milieu in which tablets containing literary texts were created and kept in the Babylonian city of Sippar.
Veldhuis, Niek. 1997. “Elementary Education at Nippur: The List of Trees and Wooden Objects.” PhD diss., Rijksuniversiteit Groningen. By examining in detail the material that was being practiced on the fronts and backs of teacher–student exercise tablets, Veldhuis was able to reconstruct the order in which students learned the elementary phase of the scribal curriculum.
Figure 8. Tablet with fingernail impressions on its upper edge. ISACM A30234. Photo by Danielle Levy.
Winter Members Appreciation Day
Saturday, January 13, 4:30–7:00pm
For ISAC Members
Join us for food and drinks and extended members-only museum hours as we celebrate you, the members who help make everything we do at ISAC possible.
4:30–5:30pm: Various ISAC faculty members and graduate students will be stationed throughout the galleries to meet you and give you the opportunity to hear directly from them about their wide-ranging work and projects. This is a great chance to learn more about some of the less visible but equally exciting work that is going on every day at ISAC.
5:30–6:15pm: ISAC’s new director, Dr. Timothy P. Harrison, will give a special talk in Breasted Hall. This portion of the evening will also be livestreamed. To register for the live-stream please email Brad at firstname.lastname@example.org.
6:15–7:00pm: Enjoy a casual buffet dinner with fellow members and ISAC staff and faculty.
Feel free to bring a friend!
To register, visit: https://bit.ly/WinterMember
Adult Education Class: Introduction to Late Egyptian
Tuesdays, March 5–April 23 (8 weeks), 7:00–9:00pm Central on Zoom and recorded
Cost: Nonmembers ($392), members ($314), docents/volunteers/ISAC travelers ($157), UChicago lab/charter, students, faculty, and staff ($98)
Bundle with the other ancient Egyptian language classes (Late Egyptian and Demotic) and save 15%. Nonmembers ($1,000), members ($800), docents/volunteers/ISAC travelers ($400), UChicago lab/charter, students, faculty, and staff ($250)
Instructor: Foy Scalf, PhD, head of the ISAC Research Archives and research associate
Learn to read the Egyptian language as used by King Tutankhamun himself! Late Egyptian represents a phase of the ancient Egyptian language that appeared in the Amarna Period and was used for many genres of text for centuries afterward. Famous among Late Egyptian literature and correspondence are the Qadesh Poem of Ramses II, the Tale of Wenamun, the Contendings of Horus and Seth, the Tale of the Two Brothers, the trial documents from the murder of Ramses III, and the Late Ramesside Letters, to name just a few. In this course, students will receive introductions and instructions for using the available resources to study Late Egyptian.
Class sessions will focus on vocabulary, grammatical constructions, and guided readings of Late Egyptian texts, supplemented by manuscripts from the collections of the Institute for the Study of Ancient Cultures Museum. Over this eight-week course, students will be exposed to the hieroglyphic writing system of Late Egyptian, over 200 hieroglyphic signs, approximately 250 vocabulary words, a selection of the most important grammatical constructions fundamental to Late Egyptian, and strategies for independent study to continue their learning journey after the class ends. There are no prerequisites for this course. Previous experience studying Egyptian hieroglyphs will be helpful, but it is not a requirement. This is a rare opportunity to gain in-depth experience with Late Egyptian, a phase of the language rarely studied outside of the university classroom. All class sessions will be recorded and available to students to pursue at their own pace.
To register, visit: https://bit.ly/LateEgyptian
MEMBERSHIP
YOUR PARTNERSHIP MATTERS!
The Institute for the Study of Ancient Cultures depends on members of all levels to support the learning and enrichment programs that make ISAC an important—and free—international resource.
As a member, you’ll find many unique ways to get closer to the ancient Middle East—including free admission to the Museum and Research Archives, invitations to special events, discounts on programs and tours, and discounts in the Museum gift shop.
INDIVIDUAL: ANNUAL $50 / $40 SENIOR (65+)
FAMILY: ANNUAL $75 / $65 SENIOR (65+)
JOIN OR RENEW
ONLINE: isac.uchicago.edu/join-give
BY PHONE: 773.702.9513
ISAC MUSEUM
For visitor information and Museum hours:
isac.uchicago.edu/museum-exhibitions
A covalent and cleavable antibody-DNA conjugation strategy for sensitive protein detection via immuno-PCR
Jessie A. G. L. van Buggenum\textsuperscript{1}, Jan P. Gerlach\textsuperscript{1}, Selma Eising\textsuperscript{2}, Lise Schoonen\textsuperscript{3}, Roderick A. P. M. van Eijl\textsuperscript{1}, Sabine E. J. Tanis\textsuperscript{1}, Mark Hogeweg\textsuperscript{1}, Nina C. Hubner\textsuperscript{4}, Jan C. van Hest\textsuperscript{3}, Kimberly M. Bonger\textsuperscript{2} & Klaas W. Mulder\textsuperscript{1}
Immuno-PCR combines specific antibody-based protein detection with the sensitivity of PCR-based quantification through the use of antibody-DNA conjugates. The production of such conjugates depends on the availability of quick and efficient conjugation strategies for the two biomolecules. Here, we present an approach to produce cleavable antibody-DNA conjugates, employing the fast kinetics of the inverse electron-demand Diels–Alder reaction between tetrazine and trans-cyclooctene (TCO). Our strategy consists of three steps. First, antibodies are functionalized with chemically cleavable NHS-s-s-tetrazine. Subsequently, double-stranded DNA is functionalized with TCO by enzymatic addition of N\textsubscript{3}-dATP and coupling to trans-Cyclooctene-PEG\textsubscript{12}-Dibenzocyclooctyne (TCO-PEG\textsubscript{12}-DBCO). Finally, conjugates are quickly and efficiently obtained by mixing the functionalized antibodies and dsDNA at low molar ratios of 1:2. In addition, introduction of a chemically cleavable disulphide linker facilitates release and sensitive detection of the dsDNA after immuno-staining. We show specific and sensitive protein detection in immuno-PCR for human epidermal stem cell markers, ITGA6 and ITGB1, and the differentiation marker Transglutaminase 1 (TGM1). We anticipate that the production of chemically cleavable antibody-DNA conjugates will provide a solid basis for the development of multiplexed immuno-PCR experiments and immuno-sequencing methodologies.
Antibody-DNA conjugate based technologies are used in biomedical research and the food industry to detect and quantify specific proteins or molecules\textsuperscript{1}. In these technologies, antibody-conjugated DNA can be detected via gel electrophoresis\textsuperscript{2–3}, fluorescence hybridization\textsuperscript{4}, sequencing\textsuperscript{5,6} or quantitative polymerase chain reaction (immuno-PCR)\textsuperscript{7} after antibody binding to the targeted epitopes. In order to develop and implement such technologies, it is essential to produce antibody-DNA conjugates with the following characteristics. First, the conjugation approach itself should be (cost-)efficient and applicable to all antibodies. Secondly, the produced conjugates have to maintain specificity for their targeted epitope. Finally, sensitive detection of the DNA should be facilitated by release of the DNA barcode after immuno-staining. The antibody and DNA conjugation strategies that are available include non-covalent strategies, such as coupling via biotin–streptavidin\textsuperscript{3} or covalent conjugation, using e.g. thiol–maleimide chemistry\textsuperscript{2}. To find an antibody-DNA conjugation strategy that facilitates all of the previously mentioned characteristics, however, is a major challenge. Yet such a strategy is critical to attain efficient and cleavable conjugation of any antibody.
Antibody-DNA conjugation depends on the production of antibodies and DNA with functional chemical groups. Antibody functionalization can be achieved via enzymatic reactions\textsuperscript{8}, chemical tagging\textsuperscript{9–11} or incorporation of non-natural amino acids\textsuperscript{7}. These approaches can be laborious and are not necessarily applicable to a wide variety of commercially available antibodies. In contrast, N-Hydroxysuccinimide ester (NHS) chemistry makes use of the primary amine groups present in all antibodies, and is therefore widely applied to generate antibody-fluorophore conjugates for microscopy and fluorescence activated cell sorting (FACS). DNA functionalization can be achieved by incorporation of modified dNTPs during chemical synthesis of an oligonucleotide, by enzymatic reactions such as PCR, or by end labelling. Notably, PCR is a cost-efficient and renewable source of dsDNA for conjugation.
We aimed to develop an easy and efficient protocol for conjugation of antibodies to double stranded DNA (dsDNA). In the past decade, a wide variety of bioorthogonal reactions have been developed that allow conjugation of biomolecules\(^{12–14}\), including the Staudinger ligation\(^{15}\), Cu(I)-catalyzed azide-alkyne (CuAAC)\(^{16,17}\), strain-promoted azide-alkyne cycloaddition (SPAAC)\(^{18}\) and inverse electron-demand Diels-Alder (iEDDA) reaction\(^{19,20}\). From these reactions, the iEDDA reaction between tetrazine and trans-cyclooctene (TCO) displays one of the fastest reaction constants, estimated at \(\sim 2,000–20,000\) M\(^{-1}\)s\(^{-1}\)\(^{20}\), making it a very suitable candidate for the conjugation of antibodies and dsDNA.
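To get a feel for what a rate constant of this magnitude means in practice, the integrated rate law for an equimolar second-order reaction can be used to estimate conversion times. The sketch below uses an illustrative 1 µM reactant concentration, which is an assumption for the example and not a value from this study:

```python
def second_order_time(k, c0, conversion):
    """Time (s) to reach a fractional conversion x for an equimolar
    second-order reaction A + B -> P with [A]0 = [B]0 = c0 (M) and
    rate constant k (M^-1 s^-1): t = x / (k * c0 * (1 - x))."""
    x = conversion
    return x / (k * c0 * (1.0 - x))

# With the lower bound of the reported rate constant (2,000 M^-1 s^-1)
# and an illustrative 1 uM of each reaction partner:
t_half = second_order_time(2_000, 1e-6, 0.5)  # 500 s  (~8 min)
t_90 = second_order_time(2_000, 1e-6, 0.9)    # 4500 s (~75 min)
```

Even at the lower bound of the reported rate constant, micromolar concentrations reach half-conversion within minutes, which is consistent with tetrazine/TCO conjugations being practical on a one-hour timescale.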
Making use of 1) the robustness of NHS-chemistry for antibody functionalization with tetrazine, 2) the cost-efficient production of TCO-dsDNA and 3) the quick reaction kinetics of tetrazine with TCO, we developed an efficient procedure to conjugate specific dsDNA sequences to a set of different antibodies. Furthermore, we included a disulphide-containing cleavable linker between NHS and tetrazine to allow highly efficient release of dsDNA using DTT and highly sensitive DNA detection in qPCR after immuno-staining (Fig. 1). We obtained between 50- and 100-fold signal over background in immuno-PCR with conjugates against human epidermal (skin) stem cell markers integrin α6 (ITGA6), integrin β1 (ITGB1) or differentiation marker Transglutaminase 1 (TGM1). Antibody and cell dilution series, as well as siRNA silencing experiments showed sensitive and specific protein detection in immuno-PCR using these conjugates. The approach described in this article can in principle be used to conjugate dsDNA to any antibody, and is thus broadly applicable to many different fields of research or industry where specific and sensitive protein detection via immuno-PCR is of interest.
**Results**
**Functionalization of antibodies with tetrazine using NHS-chemistry.** We aimed to develop an antibody-dsDNA conjugation approach applicable to a broad spectrum of (commercially) available antibodies. Ideally, such an approach should not require production of modified recombinant antibodies, laborious enzymatic modifications or other specialized methods that can only be applied to a selection of specific antibodies. Due to the universal presence of primary amines on antibody molecules, we chose the widely used NHS chemistry as our antibody functionalization approach. In addition, we wanted to combine this functionalization strategy with bioorthogonal chemistry, allowing selective conjugation of the antibody with other biomolecules, in our case dsDNA. We first tested the applicability of the SPAAC and iEDDA reactions for antibody conjugation to polyethylene glycol (PEG\(_{5000}\)) by coupling different functional groups to these two molecules. We functionalized a mouse monoclonal antibody (against the protein Transglutaminase 1, TGM1) with bicyclononyne (BCN), norbornene (Norb), TCO or tetrazine using NHS chemistry, followed by a conjugation reaction with N\(_3\)-, tetrazine- or TCO-functionalized PEG\(_{5000}\) for 1 hour or overnight. We found that TCO- or tetrazine-functionalized antibody required only 1 hour of incubation to conjugate tetrazine-PEG\(_{5000}\) or TCO-PEG\(_{5000}\), respectively (Supplementary Fig. S1), although the exact conjugation time may differ between antibodies. In contrast, BCN- or norbornene-functionalized antibodies required overnight incubation with N\(_3\)-PEG\(_{5000}\) or tetrazine-PEG\(_{5000}\), respectively. Due to the very fast kinetics\(^{20}\) and the higher stability of TCO compared to BCN\(^{21}\), we chose to continue with the iEDDA reaction between TCO and tetrazine for the remainder of the work.
We proceeded to optimize the antibody functionalization reaction for our antibodies (Fig. 2a) with NHS-tetrazine 1 (Fig. 3). The NHS chemistry used for the antibody functionalization reaction depends on 1) the available lysines of the antibody, 2) the pH of the buffer and 3) the molar ratio of antibody to NHS ester. Functionalization reactions were performed in borate buffered saline (BBS) at pH 8.4 for 45 minutes at room temperature (rt). The functionalization efficiency was compared across a molar ratio series of antibody:NHS-tetrazine, and assessed by conjugation to TCO-PEG\(_{5000}\) followed by Western blot analysis. A higher molar ratio of antibody:NHS-tetrazine results in an increased number of PEG\(_{5000}\) groups on the heavy chain of the TGM1 antibodies (Fig. 2b). We found that only a minor proportion of the light chain of the antibody was functionalized at the different ratios. To test whether the functionalization approach is applicable to antibodies derived from different animal hosts, we functionalized mouse (TGM1) and rat monoclonal as well as rabbit polyclonal antibodies with NHS-tetrazine (at a molar ratio of 1:5), followed by conjugation with TCO-PEG\(_{5000}\). Western blot analysis revealed that all three types of antibodies were functionalized and conjugated (Fig. 2c).
Figure 2. Optimization of antibody functionalization with NHS-PEG$_4$-tetrazine (Fig. 3, 1). (a) Reaction conditions of antibody functionalization with tetrazine via NHS chemistry. (b) Western blot with a ratio series of mouse (anti-TGM1) antibody:NHS-tetrazine (Fig. 3, 1), and conjugation with TCO-PEG$_{5000}$. (c) Coomassie staining of SDS-PAGE with mouse (anti-TGM1), rabbit (IgG) or rat (Ago2) antibody functionalized using a 5-fold excess of NHS-tetrazine (Fig. 3, 1) and conjugated with TCO-PEG$_{5000}$.
Figure 3. Structure of NHS-PEG$_4$-tetrazine (1), synthesis route towards NHS-s-s-PEG$_4$-tetrazine (2), and structure of DBCO-PEG$_{12}$-TCO (3). The details of the synthesis route to 2 are described in the Supplementary experimental section.
After optimizing the functionalization conditions, we explored the use of NHS-s-s-PEG$_4$-tetrazine 2 (Fig. 3) in our functionalization strategy. Compound 2 contains a disulphide bond between the NHS and tetrazine groups, which allows the controlled release of conjugated DNA from antibodies under reducing conditions. We first synthesized tetrazine 6, which was prepared from 4 according to modified literature procedures\textsuperscript{22,23}. Coupling of 6 to a Boc-protected PEG linker resulted in 8 which, after Boc removal, was coupled to a bifunctional NHS-dithiopropionate to afford the target NHS-s-s-PEG$_4$-tetrazine 2 (Supporting Information, experimental section). We observed similar functionalization efficiencies for both the non-cleavable NHS-PEG$_4$-tetrazine 1 and the cleavable NHS-s-s-PEG$_4$-tetrazine 2, as determined by Western blotting of non-reducing SDS-PAGE (Supplementary Fig. S2).
To determine the number of tetrazine groups after functionalization of a batch of antibodies, we performed Western blot analysis in parallel to electrospray ionization time-of-flight (ESI-TOF) mass spectrometry of reduced functionalized antibodies (molar ratio 1:5, Fig. 4a). ESI-TOF mass spectrometry showed that each antibody heavy chain contained up to three functional groups (Fig. 4b,c and Supplementary Fig. S3). This is similar to the number of PEG\textsubscript{5000} groups conjugated to the heavy chain observed in the Western blot of the same sample (Fig. 4a). These results show that Western blotting of PEG-conjugated antibodies can be used to determine the number of functional groups on the antibodies.
Given that the NHS-chemistry targets primary amines, there are numerous potential functionalization sites present in each antibody molecule. To characterize the potential positions of the modified amino acid residues, we performed the following experiment. First, we functionalized a mouse IgG2a monoclonal antibody with a ten-fold molar excess of the cleavable NHS-s-s-Tetrazine 2. The functionalized antibody was denatured, digested with trypsin/lysC and reduced using DTT. This procedure leads to cleavage (reduction) of not only the disulphide bridges within the antibody, but also within the linker. Finally, the sample including reduced modified lysines was alkylated using iodoacetamide. These steps lead to a 145.02 Da ‘fingerprint’ on the functionalized lysines and a missed-cleavage of these peptides, allowing identification of modified sites using high resolution mass spectrometry (LC-MS/MS) (Fig. 4d). We mapped the identified modification sites on the crystal structure of mouse IgG2a and observed a total of nine modified lysines on the heavy-chain and two on the non-variable (non-epitope binding) part of the light-chain (Fig. 4d,e). As expected, all these modifications are positioned on the solvent-exposed surface of the antibody. Although the exact positions of the modified residues will be different for each antibody, our results suggest that any antibody that contains surface exposed lysines can be functionalized with a limited number of tetrazine groups via NHS-chemistry.
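The antibody:NHS-tetrazine molar ratios used in the functionalization reactions above translate into reagent amounts via a short calculation. The sketch below assumes a typical IgG molar mass of ~150 kDa and a hypothetical 10 mM NHS-tetrazine stock; the function name and the example numbers are illustrative and not taken from the published protocol:

```python
IGG_MW_DA = 150_000  # g/mol; typical IgG molar mass (assumption, not from the paper)

def nhs_stock_volume_ul(antibody_ug, molar_excess, stock_mm):
    """Volume (uL) of NHS-tetrazine stock needed for a given molar excess
    of NHS ester over antibody.
    antibody_ug  -- antibody mass in micrograms
    molar_excess -- e.g. 5 for a 1:5 antibody:NHS-tetrazine ratio
    stock_mm     -- stock concentration in mM (hypothetical value)
    """
    antibody_mol = antibody_ug * 1e-6 / IGG_MW_DA  # grams -> moles of IgG
    nhs_mol = antibody_mol * molar_excess          # moles of NHS ester needed
    return nhs_mol / (stock_mm * 1e-3) * 1e6       # litres -> microlitres

# 100 ug of antibody at a 1:5 ratio from a 10 mM stock needs ~0.33 uL:
vol = nhs_stock_volume_ul(100, 5, 10)
```

Such small pipetting volumes are one practical reason ratio series are often prepared from intermediate dilutions of the NHS-ester stock.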
**Development of an easy dsDNA functionalization approach.** To introduce functional groups on dsDNA that are compatible with iEDDA, we developed a combined enzymatic and chemical functionalization approach. After production of a blunt-ended PCR product, the 3'-ends of the dsDNA PCR product were extended with a single N\textsubscript{3}-dATP (azide-dATP) using \textit{E. coli} DNA polymerase I Klenow fragment lacking 3' → 5' exonuclease activity (Fig. 5a). This polymerase acts on blunt-ended dsDNA and specifically adds a single dATP to the 3'-ends. N\textsubscript{3}-labelled dsDNA was subsequently conjugated to the bifunctional DBCO-PEG\textsubscript{12}-TCO 3 (Fig. 3) through a SPAAC reaction. Subsequent conjugation to tetrazine-PEG\textsubscript{5000} and analysis on agarose gel, where conjugation leads to a shift in migration, showed near-complete functionalization of the dsDNA with one or two TCO moieties (Fig. 5b).
To optimize the functionalization efficiency, we performed a molar ratio series of N\textsubscript{3}-dsDNA to DBCO-PEG\textsubscript{12}-TCO 3 and monitored conjugation via gel electrophoresis. We found that high functionalization efficiency is achieved with a mild (five- to ten-fold) excess of 3 (Fig. 5c), facilitating easy and efficient removal of non-conjugated 3 using a gel filtration column. Thus, dsDNA produced via a regular PCR reaction can be efficiently functionalized by combining enzymatic incorporation of N\textsubscript{3}-dATP with conjugation of TCO via SPAAC chemistry. In contrast to modified ssDNA oligos, our dsDNA production and functionalization strategy can be used on any blunt-ended dsDNA PCR product, and allows the production of functionalized DNA in large quantities. By using a unique DNA sequence per antibody, one could develop multiplexed immuno-PCR.
**Conjugation of antibody and dsDNA using the iEDDA.** After functionalization of antibody and dsDNA with tetrazine and TCO, respectively, we aimed to determine conditions that facilitate efficient conjugation of the two biomolecules (Fig. 6a). First, we determined the time needed for efficient conjugation, using NHS-PEG$_4$-tetrazine 1. Gel electrophoresis showed that the reaction is saturated within 30 minutes, which underlines the fast reaction kinetics of TCO with tetrazine (Supplementary Fig. S4). For the conjugation of antibodies with DNA we used a reaction time of one or two hours, followed by quenching of the remaining TCO groups with free tetrazine. Because the functionalized dsDNA carries one or potentially two functional groups per molecule, quenching of the TCO groups is desirable to prevent sequential conjugation of antibodies and dsDNA over time.
Next, we determined the conjugation efficiency at both the DNA and antibody level. The conjugates were visualized by running the samples on a 4–15% polyacrylamide gradient gel, followed by in-gel antibody staining with fluorescently labelled antibodies and subsequent DNA staining with ethidium bromide. We observed conjugation at molar ratios of 1:2 and 1:10 antibody to DNA. These conjugates appeared at the same position in the polyacrylamide gradient gel by immuno-staining and by ethidium bromide staining (Fig. 6b). Taken together, the characterization of the conjugates directed us to use an antibody-to-dsDNA molar ratio of 1:2 for the production of the following conjugates.
To determine whether functionalized and conjugated antibodies maintain their specificity, antibodies against two skin stem cell markers, integrin α6 (ITGA6) and integrin β1 (ITGB1), and one differentiation marker, transglutaminase 1 (TGM1), were used for immuno-staining (in-cell Western). We observed loss of signal for unconjugated, NHS-PEG₄-tetrazine-functionalized and dsDNA-conjugated antibodies following siRNA silencing of the targeted epitopes (Supplementary Fig. S5), indicating that the antibodies maintain their specificity after functionalization and conjugation.
Next, we aimed to determine the optimal conditions for release of the DNA without interfering with downstream PCR analysis. A disulphide-bridge-containing linker between antibody and DNA allows DNA release in the presence of DTT. An advantage of an s-s-containing linker over a photo-cleavable linker is that cleavage occurs only in the presence of DTT, avoiding light-dependent instability during handling of the conjugates as well as any extra risk of light-induced DNA damage. Note, however, that DTT reduces all disulphide bridges, including those of the antibodies. Because the release efficiency could depend on the DTT concentration and the availability of the conjugates, we tested which concentration of DTT is needed to release the DNA: antibody-dsDNA conjugates were prepared using NHS-s-s-PEG₃-tetrazine 2 and subsequently incubated with decreasing concentrations of the reducing agent DTT. Gel electrophoresis showed that at DTT concentrations exceeding 5 mM, most DNA is effectively released from the antibodies (Fig. 6c). Moreover, DTT concentrations up to 50 mM did not affect the efficiency of subsequent DNA amplification by qPCR (Supplementary Fig. S6). Based on these results, we chose to use 10 mM DTT for dsDNA release after immunostaining.
To confirm the detection of barcodes after immunostaining and DTT treatment, we produced antibody-dsDNA conjugates for ITGA6, ITGB1 and TGM1 (Supplementary Fig. S7) and used these conjugates in an immuno-staining on fixed human epidermal stem cells. dsDNA was released using 10 mM DTT for 2 hours at rt and measured with quantitative PCR. Compared to control samples without DTT, we observed 39.8-fold ($p = 0.0018$) and 49.3-fold ($p = 0.002$) higher signals in samples stained with the ITGA6 and ITGB1 conjugates, respectively, and a 12.0-fold ($p = 0.0002$) higher signal in samples stained with the TGM1 conjugate. The cells used for this experiment were undifferentiated skin stem cells, which could explain the lower signal of the differentiation marker TGM1. Together, these results provide a workflow for creating cleavable antibody-DNA conjugates that can be directly detected by qPCR after standard immuno-staining and DTT-mediated release of the dsDNA.
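Fold changes such as these are derived from qPCR threshold cycles (Ct). As a minimal sketch of the standard 2^ΔCt conversion (the Ct values below are hypothetical, not the paper's data, and `fold_change` is an illustrative helper; the exact normalization used in the paper is described under Data analysis):

```python
from statistics import mean

def fold_change(ct_sample, ct_control):
    """Fold change between two qPCR conditions via the 2**dCt rule:
    each cycle of earlier amplification corresponds to a factor of
    two more template in the reaction."""
    d_ct = mean(ct_control) - mean(ct_sample)
    return 2.0 ** d_ct

# Hypothetical threshold cycles: DTT-released barcodes amplify ~5.3
# cycles earlier than the no-DTT control, i.e. roughly 40-fold more template.
released = [20.1, 20.3, 20.2]   # +DTT replicates
control  = [25.5, 25.6, 25.4]   # -DTT replicates
fc = fold_change(released, control)
```

This assumes near-100% amplification efficiency; in practice a primer-specific efficiency correction can replace the fixed base of 2.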
**Sensitive detection of human skin stem cell and differentiation markers via immuno-PCR using DTT-cleavable antibody-DNA conjugates.** After developing a protocol for antibody-dsDNA conjugation and release, we optimized the immunostaining procedure using three conjugates: TGM1, ITGA6 and control IgG (Supplementary Fig. S7b). In immuno-PCR, unspecific antibody binding or unspecific DNA binding could contribute to a high background signal and would result in a lower signal-over-background ratio. To reduce the background signal from our conjugates in immuno-PCR, several blocking conditions were tested during immunostaining. We tested the influence of double- or single-stranded salmon sperm DNA and the effect of a protein-free blocking reagent on the signal over background (Fig. 7a–c). The background in this experiment was defined as the mean signal from cell populations stained with unconjugated dsDNA. First, addition of double-stranded salmon sperm DNA to our ‘standard’ blocking solution for ICW (1% bovine serum albumin in PBS) increased the signal over background to >25 for the two specific antibodies TGM1 and ITGA6 (Fig. 7a,b, left column). Second, a further increase to >75 signal over background was achieved when using single-stranded, rather than double-stranded, salmon sperm DNA (Fig. 7a,b, middle column). Finally, the highest signal over background (120 and 194 for the TGM1 and ITGA6 antibody conjugates, respectively) was obtained when combining single-stranded salmon sperm DNA with protein-free blocking buffer instead of 1% bovine serum albumin (Fig. 7a,b, right column). In all conditions, the control IgG-DNA conjugate showed a low signal of <1.6 (Fig. 7c), two orders of magnitude lower than the specific antibodies, indicating few unspecific binding events of the conjugates.
The specificity of the TGM1 and ITGA6 conjugates was validated by performing immuno-PCR on cell populations with or without siRNA silencing of the targets TGM1 and ITGA6, respectively. Compared to control cells, a significant decrease of the protein level was detected using our conjugates in immuno-PCR (Supplementary Fig. S8). The mRNA levels of TGM1 and ITGA6 in these cells were determined using quantitative reverse transcription PCR (RT-qPCR), confirming efficient silencing of the mRNA. Together, these results show that the conjugates specifically recognize their targets in immuno-PCR.
Finally, we evaluated the sensitivity of two different conjugates in the immuno-PCR. First, we fixed epidermal stem cell populations containing different cell numbers and thus different amounts of epitopes. Then, we determined the protein levels of TGM1 or ITGB1 via immuno-PCR using antibody-DNA conjugates or via a standard in-cell western (ICW) using unconjugated antibodies (Fig. 8a). The relative limit of detection (LOD) in the ICW is 0.358 and 0.353 for ITGB1 and TGM1, respectively. The immuno-PCR approach, however, has a lower LOD of 0.095 and 0.094 for ITGB1 and TGM1, respectively. Moreover, the squared correlation coefficient to the 2-fold dilution factor is higher with the immuno-PCR approach (ITGB1: $R^2 = 0.99$, TGM1: $R^2 = 1.00$) than with the ICW (ITGB1: $R^2 = 0.97$, TGM1: $R^2 = 0.92$). Together, these results show that immuno-PCR can detect much lower signals than ICW. Second, we performed a dilution series of the antibody-DNA conjugates (Fig. 8b) to determine how little antibody is needed for detection above background. The background signal from cell populations without antibody ('no antibody') is much lower with immuno-PCR than with ICW (ITGB1: ~4400 and TGM1: ~6800 times lower background). In immuno-PCR, we observed a log-linear relationship
between antibody concentration and signal over 3 orders of magnitude before approaching the background signal from cell populations without antibody (Fig. 8b). This indicates that very low concentrations of our conjugates (a total of 1.6 ng per 50 μl at the lowest dilution) are sufficient for the detection of proteins through immuno-PCR.
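The log-linearity reported for these dilution series can be quantified by fitting log-transformed signals against the dilution step. A small self-contained sketch of such a fit (the relative signals below are hypothetical, and `linfit_r2` is an illustrative helper, not code from the paper):

```python
import math

def linfit_r2(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (slope, intercept, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical relative signals from a 2-fold dilution series (A.U.,
# normalized to the first dilution, as in Fig. 8).
steps = [0, 1, 2, 3, 4]
signals = [1.00, 0.52, 0.24, 0.13, 0.061]
slope, intercept, r2 = linfit_r2(steps, [math.log2(s) for s in signals])
# A perfect 2-fold series gives slope -1 on a log2 scale;
# R^2 close to 1 indicates the log-linear behaviour described above.
```

The relative LOD can then be read off as the lowest dilution whose signal remains distinguishable from the no-antibody background.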
To test the usefulness of our conjugation and immuno-PCR method for other (intracellular) proteins, we performed similar antibody-dilution experiments using a wide variety of conjugates against >40 (mostly intracellular) proteins (data not shown). The average squared correlation of these antibodies to the dilution factor is 0.988 with a standard deviation of 0.036, indicating that our conjugation and immuno-PCR approach is applicable to many different antibodies. Together, these results show that our antibody-dsDNA conjugates can be used for sensitive immuno-PCR experiments and that a comparatively small amount of conjugate is needed in these experiments.
Figure 8. Comparing in-cell western and immuno-PCR approach to detect ITGB1 and TGM1. (a) Relative signal (to first dilution, A.U.: Arbitrary Unit) from cell-dilution series in ICW and immuno-PCR of ITGB1 ($n = 6$) and TGM1 ($n = 5$). (b) Relative signal (to first dilution, A.U.: Arbitrary Unit) from an antibody dilution series ($\log_{10}$ µg/ml) in ICW and immuno-PCR of ITGB1 and TGM1 ($n = 6$).
Conclusion
We have developed a strategy for antibody and dsDNA conjugation and sensitive immuno-PCR experiments. The approach consists of an easy to apply antibody functionalization step and two-step dsDNA functionalization, followed by conjugation of the two molecules via tetrazine and TCO. By introducing a DTT cleavable linker, dsDNA can be released after immuno-staining for sensitive detection in qPCR. Distinct sequences of dsDNA can be conjugated to the antibodies, which would allow the development of multiplexed immuno-PCR experiments. The throughput of the strategy may be increased by performing reactions in parallel and in a miniaturized, or automated, fashion. We believe the described conjugation strategy for DTT-cleavable antibody-DNA conjugates is an important step towards easy implementation of high-throughput multiplexed immuno-staining analysis by quantitative PCR and potentially by high-throughput sequencing.
Experimental Procedures. Antibodies. Rat IgG, referring to an antibody against Argonaute2, was obtained from Sigma (Clone 11A9, Cat. No. SAB4200085). Purified Rabbit IgG was obtained from Bethyl laboratories (Cat. No. P120-101). Antibodies against ITGA6 (clone MP4F10) and ITGB1 (clone P5D2) were a kind gift from Simon Broad. GAPDH antibody was obtained from Abcam (clone 6C5).
The antibody used for functionalization experiments (referred to in figures as ‘anti-TGM1’ or TGM1) was produced from mouse hybridoma line BC.1 (recognizing transglutaminase 1). Hybridoma cells were cultured in RPMI medium 1640 + GlutaMAX™-I (Gibco Life Technologies) supplemented with penicillin/streptomycin (P/S) and 10% fetal bovine serum (FBS, Lonza) for 4 days. Cells were then passaged every 3 days in this medium with 5%, 2.5% or 1% FBS. After 13 days, cells were passaged and resuspended at $10^6$ cells/mL in PFHM-II medium + P/S. Culture medium containing the antibody was harvested after 9 days. Antibody was
purified over a ProtA/G column (GE Healthcare) at 4 °C; 50 K Amicon filters (Millipore) and 40 K Zeba™ Spin Desalting columns (Thermo Scientific) were used for buffer exchange into PBS.
**Functionalization of antibodies.** For all antibodies, a buffer exchange to 50 mM borate-buffered saline pH 8.4 (150 mM NaCl) was performed using 40 K Zeba™ Spin Desalting columns (Thermo Scientific). Antibodies (1.5–2 µg/µl) were incubated for 45 minutes at rt with NHS-PEG₄-tetrazine 1 (Jena Bioscience) or NHS-s-s-tetrazine 2 (Fig. 3; for the production of 2, see the supplementary ‘Experimental section’) at the indicated molar ratios. Surplus 1 or 2 was removed using 40 K Zeba™ Spin Desalting columns. Functionalized antibodies were stored in 50 mM borate-buffered saline pH 8.4 (150 mM NaCl) or PBS at 4 °C or −20 °C.
**Mass spectrometry ESI-TOF.** Protein mass characterization was performed by electrospray ionization time-of-flight (ESI-TOF) mass spectrometry on a JEOL AccuTOF CS. Deconvoluted mass spectra were obtained using MagTran 1.03 b2. Protein samples were desalted and concentrated to 10–100 µM by spin filtration (Amicon 10 K filter, Millipore) with MQ water.
**Mass spectrometry LC-MS/MS.** To determine the localization of tetrazine modifications, 1 µg of functionalized antibody in 1 µl was diluted in 15 µl of 8 M urea in 100 mM Tris, pH 8. Disulphide bonds were reduced by adding 2 µl 10 mM dithiothreitol and subsequently alkylated by adding 2 µl 50 mM iodoacetamide for 15 minutes in the dark. Subsequently, the antibody was digested using 0.5 µl of Trypsin/Lys-C mix (0.04 µg/µl, Promega) overnight at rt. The digestion was stopped by acidifying with trifluoroacetic acid and the peptides were purified on StageTips²⁴. Thirty percent of the peptides were loaded onto a pulled fused silica column (New Objectives) packed in house with 1.8 µm ReproSil-Pur C18-AQ (Dr. Maisch, 9852). Using the Easy-nLC 1000 (Thermo Fisher Scientific), peptides were separated in a 60 min gradient and directly injected into a QExactive mass spectrometer (Thermo Fisher Scientific). The mass spectrometer was operated in TOP10 data-dependent acquisition mode. Full MS spectra were recorded at a resolution of 70,000 at \( m/z = 400 \) with a scan range of 300–1,650 \( m/z \). MS/MS spectra were recorded at a resolution of 17,500. Raw mass spectrometry data were analyzed using the MaxQuant software package, version 22.214.171.124, with standard settings unless further specified²⁵. The following variable modifications were allowed: oxidation of methionines, acetylation of protein N-termini, and carbamylation of cysteines. Furthermore, a modification of lysines and protein N-termini corresponding to the reduced and alkylated linker (\( \Delta m = 145.01975 \)) was allowed. This modification was only allowed for peptide-internal lysines because of the missed trypsin cleavage caused by the linker modification. Three missed cleavages were allowed and the maximum peptide mass was set to 8,000 Dalton. Data were searched against the mouse UniProt database (downloaded 13.06.2014) using the integrated Andromeda search engine.
The search was performed with a mass tolerance of 4.5 ppm mass accuracy for the precursor ion and 20 ppm for fragment ions. Peptides, modified peptides and proteins were accepted at an FDR of 0.01.
**Production of TCO-PEG₁₂-dsDNA.** Template and primers (for sequences see Table S1) were ordered from Biolegio and used in a standard PCR reaction to produce dsDNA barcode 1, 2 or 3 using Pfu proof-reading DNA polymerase, which produces blunt-ended dsDNA. After purification using a PCR purification kit (Qiagen), a Klenow exo− (New England Biolabs) enzymatic reaction was used to add N₃-dATP (Jena Bioscience) to the 3′-ends of the barcodes. For this, up to 8 µg dsDNA per reaction was incubated for 1 hour (h) at 37 °C. Following a second purification using a PCR purification kit (Qiagen), SPAAC was used for functionalization of N₃-dsDNA with DBCO-PEG₁₂-TCO (Jena Bioscience) at a molar ratio of 1:20. After overnight reaction at rt, surplus DBCO-PEG₁₂-TCO was removed using a Zeba™ Spin desalting column (Thermo Scientific; 40 kDa molecular-weight cut-off).
**Conjugation conditions for inverse electron-demand Diels-Alder chemistry.** To determine the functionalization efficiency of TCO-dsDNA or tetrazine-antibodies, tetrazine-PEG₃₀₀₀ or TCO-PEG₃₀₀₀ was conjugated to dsDNA or antibodies, respectively (molar ratio 1:300, 1 h at rt). Conjugation of antibody and dsDNA was performed at a molar ratio of 1:2 in PBS for 1 h at rt (ITGA6, ITGB1 in Fig. 6c,d, Supplementary Figs S4 and S5) or at a molar ratio of 4:1 in BBS pH 8.4 for 2 h at rt (ITGA6, anti-TGM1, control IgG in Fig. 7, anti-TGM1 in Fig. 6c, and all conjugates in Fig. 8 and Supplementary Figs S7 and S8), unless stated otherwise. Conjugation reactions were quenched by addition of an excess of 3,6-diphenyl tetrazine. Conjugation reactions can be scaled linearly from 1 to 100 µg antibody with similar conjugation efficiencies.
**Gel electrophoresis.** dsDNA and dsDNA-PEG₃₀₀₀ were run with 10 × SYBR Green I (Life technologies) on a 2% agarose gel (0.5 × TBE) and scanned on a Typhoon Trio+ machine (GE Healthcare).
**Western blots.** After 10 minutes incubation with 1x sample buffer (1% SDS, 40 mM TrisHCl pH 6.8, 5% glycerol, β-ME, BPP) at 95 °C, antibodies were separated on standard 4–15% gradient gel (BioRad) and blotted on a nitrocellulose or PVDF membrane using Bio-Rad Trans-Blot® Turbo RTA Transfer Kit (mixed molecular weight program). Antibodies were detected with specific fluorescent goat-anti-mouse antibody (1:10,000, Licor).
**In-gel western.** After 5 minutes incubation with 2× non-reducing sample buffer (1% SDS, 40 mM TrisHCl pH 6.8, 5% glycerol) at 95 °C, antibody-DNA conjugates were run on a standard 4–15% gradient gel (BioRad) at a constant 20 mA for 1.5 to 2 h. The gel was then fixed with 50% propanol, 5% acetic acid for 15 minutes. After 3 washes with MQ, the gel was incubated with fluorescent secondary antibody (anti-mouse IRDye800, 1:2000) overnight at 4 °C. Following three 10-minute washes with PBST, the gel was washed with PBS and scanned on an Odyssey CLx
(LI-COR). DNA was then stained by a 20-minute incubation with 1 μg/mL ethidium bromide in PBS (gel fully submerged in solution). After two washes with PBS, the gel was imaged using a Gel Doc XR+ (Bio-Rad).
**Cell culture and siRNA transfection.** Primary pooled human keratinocytes (foreskin strain Knp) were obtained from Lonza. Cells were expanded and cultured as described\(^{26}\). Before transfection, expanded keratinocytes were grown for several days in keratinocyte serum-free medium (KSFM) supplemented with 30 μg/mL bovine pituitary extract and 0.2 ng/mL EGF (Gibco) until 70% confluency. After collection, cells were resuspended in cell line buffer SF (Lonza) at \(2 \times 10^5\) cells per 18 μL. Then, \(2 \times 10^5\) cells in SF were mixed with 2 μL siRNA (20 μM) and transfected (program FF-113) using the Amaxa 96-well shuttle system (Lonza). After 10 minutes incubation, cells were resuspended in KSFM and seeded in a 96-well plate at 20,000 cells per well. Cells were grown for 48 h, washed with 150 μl PBS and fixed with 50 μl 4% formaldehyde in PBS for 10 minutes at rt.
**Immunostaining (in-cell-western).** Procedure for Fig. S5: Fixed keratinocytes (as described under cell culture and siRNA transfection) were washed 3 times with 150 μl PBS and permeabilized with 0.1% triton for 10 minutes at rt. After blocking overnight with 10% bovine serum in PBS, cells were incubated with control or DNA-conjugated antibodies for 1 h at rt at 2 μg/ml (ITGA6, ITGB1) or ~ 0.3 μg/ml (TGM1). Cells were washed with PBS (\(3 \times 10\) minutes) followed by secondary antibody staining with Goat-anti-mouse 1:2000 and DRAQ5 1:4000 in blocking buffer. Analysis was performed with an Odyssey scanner. Measurements of the total intensity were normalized over DRAQ5 and the average and standard deviation were calculated and plotted as the relative intensity.
Procedure for Fig. 8: cells were washed 1× with 150 μl PBS and fixed with 4% formaldehyde for 15 minutes. After 3 washes with 150 μL TBS, cells were incubated for 30 minutes with blocking and permeabilization buffer (0.5× protein-free blocking buffer, 0.1% triton, 200 ng/ml single-stranded salmon sperm DNA). Fixed cells were incubated with 50 μl primary antibodies (TGM1 or ITGB1 antibody in 0.5× protein-free blocking buffer, 0.1% triton) at 0.1 μg/ml for the cell dilution series, or at the indicated concentration, for 2 h at rt. The staining was followed by 3× short washes, 1× 15-minute wash and 3× short washes with PBS. Cells were then incubated with 50 μl Goat-anti-mouse 1:2000 and DRAQ5 1:4000 in blocking buffer. Analysis was performed with an Odyssey scanner.
**Barcode release and immuno-PCR.** Cells were cultured in a 96-well plate (15,000 or 20,000/well) for 2–3 days, washed with PBS, fixed with 4% formaldehyde for 15 minutes and, after 3 washes with 150 μL PBS, stored in PBS at 4 °C. Cells were then incubated for 30 minutes with blocking and permeabilization buffer (0.1% triton, 0.5× protein-free blocking buffer (PFBB), 200 ng/ml single-stranded salmon sperm DNA). For optimization of blocking conditions, the following buffers were tested: 0.1% triton with 2% BSA in PBS or 0.5× PFBB in PBS, with unboiled (double-stranded) or boiled (single-stranded) salmon sperm DNA (200 ng/ml).
Cells were incubated with 50 μL primary antibody conjugates at 0.5 μg/mL unless stated otherwise for 2 h at rt. Subsequently cells were washed three times with 150 μL blocking/permeabilization buffer for 15 minutes. After three times rinsing with PBS, barcodes were released using 50 μl of 10 mM DTT (in 150 mM borate buffered saline, 50 mM NaCl) for 2 h at rt. After thorough vortexing, 2 μL sample was used for quantitative PCR (20 μL/reaction, iQ™ SYBR Green Supermix, CFX 96 machine). To avoid template contamination, it is important to work carefully by using filter tips and regularly rinsing the working area.
**Data analysis.** qPCR data in Fig. 6: Each signal (Ct) was divided by the mean signal from immuno-stained cells treated without DTT. The average of 3 (ITGA6 and ITGB1) or 6 (TGM1) replicates is shown in the figure with the corresponding standard deviation. The reported p-values were calculated by a t-test (two-tailed, equal variance).
qPCR data in Fig. 7: Each signal (Ct) was divided by the mean signal from cells that were stained with unconjugated dsDNA. The average of 4 replicates with the standard deviation is plotted in the figure.
qPCR data in Fig. 8: The average, standard deviation and coefficient of variation of 5 replicates were calculated. Per technique (ICW or immuno-PCR), signals are plotted relative to the first and highest signal in arbitrary units (A.U.) in order to compare the signals from the two techniques.
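As an illustrative sketch of the per-replicate normalization described above (the values are hypothetical, and the sketch assumes the Ct readings have already been converted to a linear signal scale, e.g. 2⁻ᶜᵗ, before division):

```python
from statistics import mean, stdev

def relative_signals(sample, control):
    """Divide each sample signal by the mean control signal,
    then return the mean and standard deviation of the ratios,
    mirroring the normalization described in the Data analysis section."""
    ref = mean(control)
    ratios = [s / ref for s in sample]
    return mean(ratios), stdev(ratios)

# Hypothetical linear-scale qPCR signals:
with_dtt = [40.1, 39.5, 41.0]   # immuno-stained cells, barcodes released with DTT
no_dtt   = [1.0, 1.1, 0.9]      # control: stained, but no DTT release
fold, sd = relative_signals(with_dtt, no_dtt)
```

Plotting the mean ratio with its standard deviation per condition reproduces the bar-plus-error-bar presentation used in Figs 6 and 7.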
**References**
1. Adler, M., Wacker, R. & Niemeyer, C. M. Sensitivity by combination: immuno-PCR and related technologies. *Analyst* **133**, 702–18 (2008).
2. Agasti, S. S., Liong, M., Peterson, V. M., Lee, H. & Weissleder, R. Photocleavable DNA barcode–antibody conjugates allow sensitive and multiplexed protein analysis in single cells. *J. Am. Chem. Soc.* **134**, 18499–502 (2012).
3. Sano, T., Smith, C. L. & Cantor, C. R. Immuno-PCR: very sensitive antigen detection by means of specific antibody-DNA conjugates. *Science* **258**, 120–122 (1992).
4. Ullal, A. V. *et al*. Cancer cell profiling by barcoding reveals multiplexed protein analysis in fine-needle aspirates. *Sci. Transl. Med.* **6**, 219ra9 (2014).
5. Darmanis, S. *et al*. ProteinSeq: high-performance proteomic analyses by proximity ligation and next generation sequencing. *PLoS One* **6**, e25583 (2011).
6. Dezfouli, M., Vickovic, S., Iglesias, M. J., Schwenk, J. M. & Ahmadian, A. Parallel barcoding of antibodies for DNA-assisted proteomics. *Proteomics* **14**, 2432–2436 (2014).
7. Kazane, S. A. *et al*. Site-specific DNA-antibody conjugates for specific and sensitive immuno-PCR. *Proc. Natl. Acad. Sci. USA* **109**, 3731–6 (2012).
8. Zeglis, B. M. *et al*. Enzyme-mediated methodology for the site-specific radiolabeling of antibodies based on catalyst-free click chemistry. *Bioconjug. Chem.* **24**, 1057–67 (2013).
9. Le, H. T., Jang, J.-G., Park, I. Y., Lim, C. W. & Kim, T. W. Antibody functionalization with a dual reactive hydrazide/click crosslinker. *Anal. Biochem.* **435**, 68–73 (2013).
10. Fischer-Durand, N., Salmain, M., Vessières, A. & Jaouen, G. A new bioorthogonal cross-linker with alkyne and hydrazide end groups for chemoselective ligation. Application to antibody labelling. *Tetrahedron* **68**, 9638–9644 (2012).
11. Kamphuis, M. M. J. *et al.* Targeting of cancer cells using click-functionalized polymer capsules. *J. Am. Chem. Soc.* **132**, 15881–15883 (2010).
12. Sletten, E. M. & Bertozzi, C. R. Bioorthogonal chemistry: Fishing for selectivity in a sea of functionality. *Angew. Chem. Int. Ed.* **48**, 6974–6998 (2009).
13. King, M. & Wagner, A. Developments in the field of bioorthogonal bond forming reactions-past and present trends. *Bioconjug. Chem.* **25**, 825–839 (2014).
14. Best, M. D. Click chemistry and bioorthogonal reactions: unprecedented selectivity in the labeling of biological molecules. *Biochemistry* **48**, 6571–84 (2009).
15. Saxon, E. & Bertozzi, C. R. Cell surface engineering by a modified Staudinger reaction. *Science* **287**, 2007–2010 (2000).
16. Wang, Q. *et al.* Bioconjugation by copper(I)-catalyzed azide–alkyne [3 + 2] cycloaddition. *J. Am. Chem. Soc.* **125**, 3192–3193 (2003).
17. Link, A. J. & Tirrell, D. A. Cell surface labeling of Escherichia coli via copper(I)-catalyzed [3 + 2] cycloaddition. *J. Am. Chem. Soc.* **125**, 11164–11165 (2003).
18. Agard, N. J., Prescher, J. A. & Bertozzi, C. R. A strain-promoted [3 + 2] azide–alkyne cycloaddition for covalent modification of biomolecules in living systems. *J. Am. Chem. Soc.* **126**, 15046–15047 (2004).
19. Devaraj, N. K., Weissleder, R. & Hilderbrand, S. A. Tetrazine-based cycloadditions: Application to pretargeted live cell imaging. *Bioconjug. Chem.* **19**, 2297–2299 (2008).
20. Blackman, M. L., Royzen, M. & Fox, J. M. Tetrazine ligation: Fast bioconjugation based on inverse-electron-demand Diels–Alder reactivity. *J. Am. Chem. Soc.* **130**, 13518–13519 (2008).
21. Debets, M. F., van Hest, J. C. M. & Rutjes, F. P. J. T. Bioorthogonal labelling of biomolecules: new functional handles and ligation methods. *Org. Biomol. Chem.* **11**, 6439–55 (2013).
22. Lang, K. *et al.* Genetic encoding of bicyclononynes and trans-cyclooctenes for site-specific protein labeling *in vitro* and in live mammalian cells via rapid fluorogenic Diels–Alder reactions. *J. Am. Chem. Soc.* **134**, 10317–10320 (2012).
23. Evans, H. L. *et al*. A bioorthogonal ⁶⁸Ga-labelling strategy for rapid *in vivo* imaging. *Chem. Commun.* 9557–9560, doi: 10.1039/c4cc03930c (2014).
24. Rappsilber, J., Mann, M. & Ishihama, Y. Protocol for micro-purification, enrichment, pre-fractionation and storage of peptides for proteomics using StageTips. *Nat. Protoc.* **2**, 1896–1906 (2007).
25. Cox, J. & Mann, M. MaxQuant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification. *Nat Biotech* **26**, 1367–1372 (2008).
26. Gandarillas, A. & Watt, F. M. c-Myc promotes differentiation of human epidermal stem cells. *Genes Dev* **11**, 2869–2882 (1997).
**Acknowledgements**
We thank Simon Broad for providing the ITGA6 and ITGB1 antibodies, Dr. R. Rice for hybridoma clone BC.1 (anti-TGM1), and Drs Sanne Schoffelen, Mark van Eldijk and Martijn Verdoes for fruitful discussions and technical support. This work was financially supported by the Radboud University, the Dutch Organisation for Scientific Research (NWO-VIDI) and the European Union (Marie-Curie Career Integration Grant).
**Author Contributions**
J.v.B. designed and performed most of the experiments, analysed the data and prepared figures. J.G. and S.T. designed, performed and analysed experiments. S.E., under supervision of K.B., synthesised the cleavable linker, prepared Fig. 3 and wrote the supplemental “Experimental section”. L.S., under supervision of J.v.H., performed and analysed the ESI-TOF experiment. N.H. and R.v.E. performed and analysed the LC-MS/MS experiment. M.H. performed the immuno-PCR optimization experiments. All authors reviewed the manuscript. K.M. conceived and oversaw the study. J.v.B. and K.M. wrote the manuscript with input from all co-authors.
**Additional Information**
Supplementary information accompanies this paper at http://www.nature.com/srep
**Competing financial interests:** The authors declare no competing financial interests.
**How to cite this article:** van Buggenum, J. A. G. L. *et al.* A covalent and cleavable antibody-DNA conjugation strategy for sensitive protein detection via immuno-PCR. *Sci. Rep.* **6**, 22675; doi: 10.1038/srep22675 (2016).
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ |
May 6, 2014 DINNER MEETING
Who: John C. Lacy will speak about "The Genesis of Mining Law."
Where: Sheraton Tucson Hotel and Suites, 5151 East Grant Road (at the intersection of Grant and Rosemont, on the north side of Grant), in the PIMA BALLROOM; enter at the northwest corner of the building and go upstairs to the meeting room.
When: Cash Bar at 6 p.m.—Dinner at 7 p.m.—Talk at 8 p.m.
Cost: Members $27, guests $30, Student members free with online reservation ($10 without).
RESERVATIONS ARE REQUIRED: CALL (520) 663-5295 or reserve on the AGS website (www.arizonageologicalsoc.org) by 11 a.m. on Friday, May 2. Please indicate regular (chicken stir fry with brown rice), vegetarian, or cobb salad meal preference. Please cancel by Friday, May 2 at 11 a.m. if you are unable to attend; no-shows and late cancellations will be invoiced.
Abstract
The Genesis of Mining Law
John C. Lacy
DeConcini, McDonald, Yetwin and Lacy, Tucson, Arizona
The “Mining Law of 1872” has been much maligned as “ancient,” “out of date,” and “in need of modernization.” In fact, governmental systems regulating private mineral development can be traced to Greek and Roman precedents, and what became the mining laws of the United States reflect a reliance on private initiative that is almost unmatched in the world. Mr. Lacy’s thesis is that the modern criticisms of this law amount to arguments for imposing the kinds of laws that the laws of the United States were designed to avoid. This presentation will trace the roots of mining law from its ancient genesis, through the development of the tribal traditions of the “free miners” of medieval Europe, to the importation of Saxon/English and Iberian systems into the New World. Once in the Americas, traditions of private custom and regal systems combined into the ordinances and practices of the Viceroyalties of New Spain and Peru, then into the mining camps of the California gold fields and the Comstock Lode, and were finally incorporated into the mining laws of the United States. The presentation will try to isolate those portions of the mining laws of the United States that reflect a basic policy of encouraging mineral development from those portions that are rightfully criticized as archaic.
About the May Dinner Meeting Speaker
Mr. Lacy is a shareholder in the law firm of DeConcini McDonald Yetwin & Lacy in Tucson, Arizona. His practice emphasizes mining and public land law and encompasses transactional and title considerations involving the acquisition of mineral rights from private parties and governmental agencies, together with permitting issues and associated water rights. A significant amount of Mr. Lacy’s historic practice has been devoted to international mineral transactions; he assisted in the revisions to the mining law of the Republic of Bolivia and the English translation of the mining laws and regulations of Mexico. He teaches courses on mining and public land law, oil and gas law, and mineral transactions at The University of Arizona Rogers College of Law and in the Department of Mining and Geological Engineering as an Adjunct Professor. He is the author of numerous publications concerning mineral rights and mineral law history and occasionally testifies as an expert witness on these subjects. Mr. Lacy is a past President of the Rocky Mountain Mineral Law Foundation and the Arizona Historical Society.
Geological Society of Nevada 2015 Symposium
The Geological Society of Nevada has announced a call for papers to be presented at its 2015 Symposium, which will be held at John Ascuaga’s Nugget in Reno/Sparks, Nevada on May 14-24, 2015. Co-hosts for this event include the Society of Economic Geologists, the Nevada Bureau of Mines and Geology and the U.S. Geological Survey. Its theme is New Concepts and Discoveries. Anyone wishing to present a paper at this meeting must submit a draft abstract no later than May 30, 2014. More information is available from the Geological Society of Nevada.
Arizona Mining Review
Arizona Mining Review e-Video Magazine. The 30 April episode of the Arizona Mining Review (AMR) includes the following topics and guests:
• Exploration uptick in Arizona. Nyal Niemuth on a recent uptick in mining exploration in Arizona – things are looking up;
• Morenci Mine, Arizona’s flagship copper mine. Ralph Stegen, Vice-President for Mine Site Exploration with Freeport-McMoRan Copper & Gold on the geology and history of the Morenci Mine.
• AZGS launches new Minedata site – Casey Brown, AZGS digital archivist, on the launch of the Arizona Geological Survey Mining Data site. The site includes 1000’s of downloadable, historic mining records, reports, maps and photos from the Arizona Dept. of Mines and Mineral Resources collection. A suite of search tools – textual and geographic – facilitates locating and retrieving material.
The April episode will be broadcast at 10:00 am MST on 30 April on LiveStream (http://new.livestream.com/accounts/2496466/azminingreview). Immediately thereafter it will be available on our AZGS YouTube Channel (https://www.youtube.com/user/azgsweb).
Second Annual Arizona Geological Society Doug Shakel Memorial Student Poster Event
The Arizona Geological Society held its second annual Doug Shakel Memorial Student Poster Meeting on Thursday April 24, 2014 at the Embassy Suites Hotel in Tempe. Student turnout was excellent from all three state universities. Not only did undergraduates participate alongside graduate students, but they also won two of the top three prizes.
Our distinguished panel of judges included Carl Bowser, Professor Emeritus, University of Wisconsin at Madison; Jon Spencer, Senior Geologist at the Arizona Geological Survey; and Barbara Murphy, Senior Geologist with Clear Creek Associates. The Arizona Geological Society also thanks Geotemps, Inc., whose sponsorship helped offset our costs for this event.
Poster viewing was from 6 to 7 PM. Dinner ran from 7 to 9 PM because the poster presenters were required to give a three-minute oral summary of their posters. The ability to summarize one’s poster clearly and succinctly was an important part of the evaluation process.
The winners were:
- **First Prize ($500):** Jason D. Mizer, Graduate Student, U of A: U-Pb geochronology of Laramide magmatism related to Cu-, Zn-, and Fe-mineralized systems, Central Mining District, New Mexico
- **Second Prize ($250):** Lily Jackson, Undergraduate Student, U of A: Lake Malawi sediment record provides clues on climate variability and response to Mount Toba super-eruption
- **Third Prize ($150):** Crystylynda Fudge, Undergraduate Student, ASU: The coexistence of Wadsleyite and Ringwoodite in SAH 293: Constraints on shock pressure conditions and olivine transformation
Honorable Mentions: ($50 each):
- Ada Rosa Dominguez, Graduate Student, U of A: Paleoclimate and magmatic reconstructions of sediment-hosted copper and iron-oxide copper gold deposits
- Meghan Guild, Graduate Student, ASU: Boron isotopic variation in the subcontinental lithospheric mantle
- Daniel R. Hadley, Graduate Student, NAU: Analysis of geomorphic and vegetation change at Colorado River campsites, Marble and Grand Canyons, AZ
It is no exaggeration to say that all the posters and oral presentations were of excellent quality. The proof of this is that the judges, who took their job very seriously, haggled over the winners for an hour, although they had PDFs of most of the posters days before the event.
May Member Spotlight - James D. Girardi, 2011 Courtright Scholarship Recipient
What is the Title of your Ph.D. Dissertation? Comparison of Mesozoic magmatic evolution and iron oxide(-copper-gold) (‘IOCG’) mineralization, central Andes and western North America.
Where are you from? Well, in the geographic sense, I grew up in the New York City/Long Island area. From west to east, notable stays were at Rosedale (Queens), and the towns of Inwood and East Meadow (Long Island). I am from a very large and traditional Italian family; the two halves, the Iovinos and the Girardis, mostly immigrated to the USA from southern Italy in the 1940s and 1950s. So, in the historical sense, I suppose I am ultimately derived from my Mediterranean ancestors.
Where Did You Get Your Undergraduate Degree? I attended a SUNY, which stands for State University of New York. Geology is what I studied as a Bachelor of Science at SUNY Stony Brook, which is located in north-central Long Island. Our campus lies atop the majestic Harbor Hill glacial moraine; this is important because there are no mountains where I am from, and getting a few tens of feet of topography from piled-up glacial till is a really big deal for a geology undergrad living on Long Island. At Stony Brook I was very lucky to be introduced to research, and as an undergraduate research assistant I learned about analog modeling of thrust belts (the “sandbox” experiments), and 2-D and 3-D subsurface imaging using Ground Penetrating Radar (GPR). When I was not in class or doing something geology related (like mineral collecting or GPR surveys), I was working in various delis, hardware stores, and construction sites to help pay for my college expenses. During that time I could also be found in Manhattan at live music concerts, or out on “The Island,” hanging out by the beach, fishing, clamming, and surfing. Really, I am not sure how I graduated in 4 years!
Why did you come to the U of A for your Graduate Degrees? I chose to come to U of A because I felt that here I had the best opportunity to develop as a scientist and as a professional. We have a unique group of faculty and students that really have made every day a blast! Like my undergrad experience, way too much fun, every day, if that is even possible… I am going to miss the “serious” work of playing football, wiffle ball, and lacrosse during lunch breaks.
What got you interested in your thesis topics? I have always had a great interest in continental arc magmatism. Because many types of ore deposits are intimately associated with arc magmatism, it was only natural to blend the two fields of study. The geochemical and petrologic skills I learned during my M.S. really helped me transition into my PhD, which is focused, broadly, on petrology, geochemistry, and ore deposits.
What do you plan to do (or hope to do) next? This summer I’ll be working as a post-doc with Mark Barton at the U of A. Our plans are to publish several papers from my thesis, and also from our collaborative work in northern Chile. Later on, in early August, I’ll be pulling up my tent stakes and moving to Houston, TX. I will be working there as an exploration geologist with BP. I am looking forward to the next challenge, and hopefully staying involved with research on Cordilleran magmatism and ore deposits in some capacity.
How many papers do you intend to squeeze out of your thesis work? Let’s see… Two papers will come out soon from my work in northern Chile. One of them discusses the pattern of Mesozoic magmatism in the central Andes and how it compares to coeval magmatism in the Cordillera of North America. The other will use Hf, Nd, Sr, and O isotopes, and whole-rock major and trace elements to show how the compositions and sources of the Andean Coastal Batholith evolved through time. These studies will have implications for how we understand the mechanisms that govern continental arc magmatism and the relationships between different magma compositions/sources and ore deposit formation.
Then there is the work I have done in the Mojave Desert… this will be submitted for publication soon as well, and that work focuses on a framework geologic study of the Jurassic magmatic arc in the central Mojave Desert (roughly the region between Blythe and Barstow) and links between “Kiruna-type,” “Iron-skarn,” and iron-oxide-copper-gold (IOCG) deposits. This work will show that the Mojave Desert is one of the best places in the world to study IOCGs because we can study system scale zoning of hydrothermal features over >250 km of strike across extended terrane and variable levels of crustal exposures. The Jurassic Mojave IOCG systems have not received much attention because nearly all of them were not economic to mine after the late 1880’s. Although one (Eagle Mountain) produced until the 1980’s, none are feasible today… Despite this, I contend that there is much to learn from the rocks whether or not they are associated with economic resources!
I have rattled through the main papers above, but collaborative work with Mark Barton, Frank Mazzab, and Gordon Haxel will result in a few more papers on topics that include Chilean IOCGs, Mojave Desert IOCGs, and Jurassic magmatism in the southwestern United States.
What are your other interests, hobbies, talents, etc., etc.? I enjoy many hobbies. The main ones right now include tinkering on old cars (I own a 1970 VW camper), playing guitar, hiking, mineral collecting, and fishing. I also enjoy sports: football, baseball, wiffle ball, and lacrosse. I am a novice at cycling and am currently training for long distances and for climbing to the top of Mount Lemmon on my road bike.
I have met many great people through the AGS, and I’d like to stay in touch even after I leave town. My contact info is available at my website: www.terracryst.com.
More Photos from the Second Annual Arizona Geological Society Doug Shakel Memorial Student Poster Event
Ann Pattison and Alison Jones Congratulating Themselves on No longer Being Students
Mariah Romero Points Out Where She was Stalked by a Bear
Students Who Participated in the Second Annual Arizona Geological Society Doug Shakel Memorial Student Poster Event
Vaden Aldridge, Graduate Student, NAU: Estimating recharge in semi-arid ponderosa pine forests using the chloride mass balance method
Wadyum Ayyad, Undergraduate Student, U of A: Plate boundary zone deformation associated with Panama South American collision using GPS
Deon Ben, Graduate Student, NAU: Cultural adaptations of climate change: Navajo livestock grazing practices and animal husbandry on the Navajo Nation
S. Sarah Cronk, Undergraduate Student, ASU: (U-Th)/He geochronology of detrital grains in baked zones to date young volcanic flows
Ada Rosa Dominguez, Graduate Student, U of A: Paleoclimate and magmatic reconstructions of sediment-hosted copper and iron-oxide copper gold deposits
Crystylynda Fudge, Undergraduate Student, ASU: The coexistence of Wadsleyite and Ringwoodite in SAH 293: Constraints on shock pressure conditions and olivine transformation
Meghan Guild, Graduate Student, ASU: Boron isotopic variation in the subcontinental lithospheric mantle
David E. Haddad, Graduate Student, ASU: Effect of mechanical stratigraphy on hydraulically induced fractures in shales
Daniel R. Hadley, Graduate Student, NAU: Analysis of geomorphic and vegetation change at Colorado River campsites, Marble and Grand Canyons, AZ
Lily Jackson, Undergraduate Student, U of A: Lake Malawi sediment record provides clues on climate variability and response to Mount Toba super-eruption
Angela Lexvold, Graduate Student, NAU: Testing two hypotheses on Proterozoic crustal growth in northwestern Arizona using geochronology and thermobarometry analyses of metasedimentary rock
Alejandro Lorenzo, Graduate Student, ASU: On the lower radius of exoplanets
Megan Miller, Graduate Student, ASU: Spatiotemporal monitoring & modeling of land subsidence in Phoenix, Arizona, USA
Jason D. Mizer, Graduate Student, U of A: U-Pb geochronology of Laramide magmatism related to Cu-, Zn-, and Fe-mineralized systems, Central Mining District, New Mexico
Mariah C. Romero-Armenta, Undergraduate Student, U of A: Timing of exhumation of Laramide ranges in Montana and Wyoming constrained by apatite fission track thermochronology
Simone Runyon, Graduate Student, U of A: Fe Oxide-Cu Mineralization at the Minnesota and Pumpkin Hollow Deposits, Yerington, Nevada
Kelsey E. Young, Graduate Student, ASU: The use of handheld x-ray fluorescence (XRF) technology in unraveling the eruptive history of the San Francisco volcanic field, Arizona
Thank You for Your Donations to the Courtright and AGS Scholarship Funds
Dan Laux
Don Hammer
M. C. Kleinkopf
Bruce Walker
New Publications from the Arizona Geological Survey
(Available free, online at the AZGS Document Repository)
Chenoweth, W.L., 2014, *The Geology and Production History of the Black Rock Point Nos. 1 and 3 Uranium-Vanadium Mines, Apache County, Arizona*. Arizona Geological Survey Contributed Report, CR-14-B, 12 p.
Briggs, D.F., 2014, *History of the San Manuel-Kalamazoo Mine, Pinal County, Arizona*. Arizona Geological Survey Contributed Report, CR-14-A, 9 p.
UPDATED Arizona Geological Survey, 2014, *Locations of Mapped Earth Fissure Traces in Arizona, v. 03.31.14*. Arizona Geological Survey Digital Information (DI-39 v. 03.31.14), Arc GIS Layer Package.
Palmer, R., 2014, *Setting up Hyper-V 2012 Replication on Workgroup Servers: A Guide*. Arizona Geological Survey Open File Report, OFR-14-04, 27 p.
Cocker, M.D., 2014, *Lateritic, supergene rare earth element (REE) deposits*, in Conway, F.M., ed., Proceedings of the 48th Annual Forum on the Geology of Industrial Minerals, Phoenix, Arizona, April 30 - May 4, 2012. Arizona Geological Survey Special Paper #9, Chapter 4, p. 1-18.
McLemore, V.T., 2014, *Rare Earth Elements Deposits in New Mexico*, in Conway, F.M., ed., Proceedings of the 48th Annual Forum on the Geology of Industrial Minerals, Phoenix, Arizona, April 30 - May 4, 2012. Arizona Geological Survey Special Paper #9, Chapter 3, p. 1-16.
*Arizona Geology e-Magazine* will be rolling out the Spring 2014 issue the week of 28 April. The feature article will be on the state of knowledge of landslides and mass movement phenomena in Arizona.
---
**Drilling America, Inc.**
**ANNOUNCES NEW SERVICES! NOW OFFERING SONIC DRILLING**
Major Drilling is now offering Sonic Drilling Services from our Salt Lake City office for Environmental and Geotechnical services. Full size truck & track rigs along with track mounted Mini Sonic rigs are available.
**Services offered:**
- Mineral Exploration
- Well Abandonment & Well Development
- Instrumentation Installation
- Geotechnical Testing and Sampling
- Well Construction & Remediation Wells
- Discrete Water Sampling & Packer Testing
- Soil Probing, In-situ Chemical Injection and Grouting
Contact info:
Jon Tedrick: cell (320) 630-3636 firstname.lastname@example.org
Nguyen Do: cell (801) 554-8383 email@example.com
ANNOUNCEMENTS
Welcome New AGS Members
Jacob Alden Meghan Guild Larry Lepley Bob Sandefur
Wadyan Ayyad David Haddad Alejandro Lorenzo Mary Schultz
Michael Bierwagen Daniel Hadley Diane Love Jim Scott
Melissa Boerst Abeer Hamdan Megan Miller John Stitzer
Joseph Cain IV Sky Jackson Mary Pendleton Hoffer Berkley Tracy
Irene Castillo Michael Jaworski Tony Potucek Kelsey Young
Stephanie Cronk Devin Keating Simon Russell Guang Zhai
Ada Dominquez Mostafa Khoshmanesh Andrea Sanchez Megan Zivic
Arizona Geological Society is grateful to Freeport-McMoRan Copper and Gold for their generous support of our student members!
Freeport-McMoRan is sponsoring student dinners for the 2014 AGS monthly meetings.
2014 AGS MEMBERSHIP APPLICATION OR RENEWAL FORM
Please mail check with membership form to: Arizona Geological Society, PO Box 40952, Tucson, AZ 85717
Dues (check box) □ 1 year: $20; □ 2 years, $35; □ 3 years: $50; □ full-time student (membership is free)
NEW MEMBER or RENEWAL? (circle one) Date of submittal ________________
Name: ____________________________________________________________ Position: _______________________
Company: _______________________________________________________________________________________
Mailing Address: _________________________________________________________________________________
Street: ___________________________ City: ______________ State: ______ Zip Code: ___________
Work Phone: ___________________________ Home Phone: ___________________________
Fax Number: ___________________________ Cellular Phone: ___________________________
E-mail: ___________________________ Check this box if you do not have an email address □
All newsletters will be sent by email. If you do not have an email address, we will mail a hard copy to you, but we cannot guarantee timeliness.
If registered geologist/engineer, indicate registration number and State: ______________________________________
Enclosed is a _________ tax-deductible contribution to the J. Harold Courtright Scholarship Fund.
Abscess of the Tongue: Evolution and Treatment of an Emergency
Bertolini G¹, Bruschi A¹, Meraviglia I¹, Gazzano G², Avigo C³, Luzzago F¹ and Capolunghi B¹*
¹Department of ENT, ASST-Franciacorta, Italy
²Department of Pathology, ASST-Franciacorta, Italy
³Department of Radiology, ASST-Franciacorta, Italy
Abstract
The Authors present a rare case of abscess of the tongue in an 86-year-old man suffering from severe odynophagia and dysphagia, increased snoring and tongue swelling for 15 days. A contrast-enhanced computed tomography (CT) scan of the neck was performed, disclosing a 4.2 x 3.2 x 3.8 cm abscess in the posterior two-thirds of the body of the tongue. Because of the general condition of the patient, needle aspiration through the oral route was performed on three consecutive days. The patient experienced considerable amelioration of the symptoms. A CT scan of the neck on the 5th day after the conservative treatment revealed an initial extension of the abscess into the hypo-pharyngeal space. General anaesthesia and endotracheal intubation were required. Incision and drainage of the abscesses were performed using a diode laser. A tracheotomy was necessary. Complete regression of the symptoms was achieved after 10 days.
Keywords: Abscess; Tongue; CT scan
Introduction
Abscess of the tongue is a rare pathology, reported only once in the English literature [1]. Those Authors described a lingual tonsil abscess, pointing out that the lingual tonsils have a structure similar to that of the palatine tonsils. The progression of infection differs, however, because the lingual tonsils lack a capsule, which prevents the formation of a peritonsillar abscess [1].
Acute lingual abscess is a life-threatening clinical entity, as swelling of the tongue may rapidly occlude the airway [2]. Symptoms are progressive pain, fever, swelling and immobilization of the tongue, and oedema and redness of the tongue. The most common cause of lingual abscess is direct trauma, although an immunocompromised state is a predisposing risk factor [3]. The brisk vascularisation and muscularity of the tongue and the anti-infective properties of saliva protect against the development of an abscess [4].
Diagnosis of abscess of the tongue requires contrast-enhanced Computed Tomography (CT scan) of the neck, Ultrasound (US) through the floor of the mouth [4] or Magnetic Resonance Imaging (MRI). Despite the rarity and complexity of the condition, its management strategy is relatively simple [5]. Intravenous antibiotics are the primary treatment modality, with consideration given to adjunctive surgical drainage [3]. The differential diagnosis of tongue abscess includes haemorrhage, infarction, tumor and edema [5].
The Authors describe a rare case of abscess of the tongue in an 86-year-old man treated initially with a conservative procedure because of his general condition; after initial amelioration of the symptoms, the patient underwent general anesthesia 5 days later for the extension of the abscess into the hypo-pharyngeal space.
Case Presentation
An 86-year-old man was urgently referred to the ENT department with complaints of severe odynophagia and dysphagia to solid and liquid foods for 15 days, which had been treated with two different antibiotics without resolution of the symptoms. Tongue swelling, voice changes and increased snoring were progressively experienced. He was afebrile with normal vital signs. The white blood cell count was 16.49/mm³ and C-reactive protein was 3.15 mg/L. He had no history of smoking or alcohol consumption. The patient was affected by chronic bronchitis, atrial fibrillation treated with Warfarin, and an implant...
of a pacemaker. Progressive pain involving the tongue and stomatolalia were the main symptoms. A flexible endoscopic examination revealed a normal larynx with no signs of airway obstruction. The tonsils were normal. Broad-spectrum intravenous antibiotics and a corticosteroid were initially used. A contrast-enhanced Computed Tomography scan (CT scan) of the neck was performed, disclosing a 4.2 x 3.2 x 3.8 cm abscess in the posterior two-thirds of the body of the tongue (Figure 1). Due to the general condition of the patient, needle aspiration of the pus collection through the oral route was performed on three consecutive days. The patient experienced considerable amelioration of the symptoms. Pathologic examination of the pus revealed only inflammatory cells. Unfortunately, a CT scan of the neck on the 5th day after the conservative treatment revealed an initial extension of the abscess into the hypo-pharyngeal space (Figure 2). The patient was therefore taken to the operating room, where general anaesthesia and endotracheal intubation were required. Incision and drainage of the hypo-pharyngeal abscess and the residual tongue lesion were performed using a diode laser. A tracheotomy was necessary. A follow-up CT scan of the neck performed 10 days after the operation revealed complete disappearance of the two abscesses (Figure 3).
**Discussion**
The tongue is resistant to infection because of many protective mechanisms: its rich vascularisation and lymphatic drainage, the thick keratinized mucosa, the immunological properties of saliva, and its constant mobility, which enhances the cleaning effect of saliva [6]. Tongue abscesses are classified into two groups: abscesses of the anterior tongue and abscesses of the posterior third [7]. The etiologies vary according to localization. Posterior abscesses usually derive from lingual tonsillitis, infected thyroglossal duct cyst remnants, and periodontal infections spreading from the lower molar teeth [6,8]. Recurrent tongue abscesses have been reported in cases of diabetes and tongue laceration [9]. Tajudeen et al. [10] reported a case of glossal abscess as a complication of tongue-base suspension surgery for the treatment of obstructive sleep apnea (2011). On careful retrospective history taking, the symptoms have often dated from an episode of trauma [3,11–13].
The symptoms of tongue abscess are painful swelling of the tongue, pain, fever, dysphagia and dyspnea. The differential diagnosis includes carcinomas, acute epiglottitis, dermoid cyst, lingual artery aneurysm, infarction, haemorrhage, lingual tonsillitis, thyroglossal cysts, tuberculosis and actinomycosis [14]. Laboratory and radiological tests may be helpful. CT scan and MRI are recommended for the differential diagnosis of tongue swelling [7,9].
Approximately 60 cases of tongue abscess have been reported in the English-language literature over the past 30 years [10]. Treatment of abscess of the tongue consists of airway maintenance, abscess drainage and antibiotic treatment [7]. Drainage of the abscess is usually done by needle aspiration of the pus through the inferior surface of the tongue [15]. Gulsum et al. [7] successfully drained an abscess of the base of the tongue through the oral route by needle aspiration on five consecutive days.
In our experience, the abscess of the tongue was drained through the oral cavity by needle aspiration on 3 consecutive days. The patient experienced considerable amelioration of the symptoms. Unfortunately, a CT scan 5 days later showed initial extension of the abscess into the hypo-pharyngeal space. Incision and drainage in the operating room were necessary. Diode laser-assisted drainage of the hypo-pharyngeal abscess and the residual tongue lesion was performed. A tracheotomy was necessary to avoid airway compromise.
**Conclusion**
Tongue abscesses are rare but potentially life-threatening pathologies. The case reported in this paper increases awareness among
head and neck surgeons of the clinical findings of this acute pathology. The Authors consider conservative treatment the first choice, reserving operative surgery under general anaesthesia for cases with complications such as the one described here.
References
1. Coughlin AM, Baugh RF, Pine HS. Lingual tonsil abscess with parapharyngeal extension: a case report. Ear Nose Throat J. 2014; 93: E7-E8.
2. Tikkakoski T, Weitz-Touretnaa A, Kamisnksi T, Rahkonen M. Tongue abscess. Duodecim. 2014; 130: 71-74.
3. Kettaneh N, Williamson K. Spontaneous lingual abscess in an immunocompromised patient. Am J Emerg Med. 2014; 32: 492.
4. Kulkarni CD, Verma AK, Kanaujia R. A rare case of hemilingual abscess in a 17-year-old girl: the ease of ultrasound and the advantage of MRI. Jpn J Radiol. 2013; 31: 491-495.
5. Pallagatti S, Sheikh S, Kaur A, Puri N, Singh R, Arya S. Tongue abscess: a rare clinical entity. J Investig Clin Dent. 2012; 3: 240-243.
6. Antoniades K, Hadjipetrou L, Antoniades V, Antoniades D. Acute tongue abscess. Report of three cases. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2004; 97: 570-573.
7. Gulsum TO, Mehemt VA, Gulhan KU, Huseyin SG. A rare case of Acute Dysphagia: Abscess of the Base of the Tongue. Case rep Gastrointest Med. 2015; 2015: 431738.
8. Vellin JF, Crestani S, Saroul N, Bivahagumye L, Gabrillargues J, Gilain L. Acute abscess of the base of the tongue: a rare but important emergency. J Emerg Med. 2011; 41: e107-e110.
9. Sanchez Barrueco A, Melchor Diaz MA, Jiménez Huerta I, Millan Juncos JM, Almodovar Alvarez C. Recurrent lingual abscess. Acta Otorrinolaringol Esp. 2012; 63: 318-320.
10. Tajudeen BA, Lanson BG, Roehm PC. Glossal abscess as a complication of tongue-base suspension surgery. Ear Nose Throat J. 2011; 90: E15-E17.
11. Kim HJ, Lee BJ, Kim SJ, Shim WY, Baik SK, Sunwoo M. Tongue abscess mimicking neoplasia. AJNR Am J Neuroradiol. 2006; 27: 2202-2203.
12. Westergaard-Nielsen M, Ostvoll E, Wanscher JH. An abscess in the tongue. Ugeskr Laeger. 2013; 175: 1579-1580.
13. Chen PL, Chiang CW, Shiao CC. Tongue abscess induced by embedded remnant fishbone. Acta Clin Belg. 2015; 70: 466-467.
14. Kolb JC, Sanders DY. Lingual abscess mimicking epiglottitis. Am J Emerg Med. 1998; 16: 414-416.
15. Balatsouras DG, Eliopoulos PN, Kaberos AC. Lingual abscess: diagnosis and treatment. Head Neck. 2004; 26: 550-554.
Digital and STEM skills for girls at a glance:
Latin America and the Caribbean
Digital and STEM skills for girls at a glance: Latin America and the Caribbean
Supervision:
Ivonne Urriola Pérez, Gender and Development Officer, and María José Velasquez Flores, Digital Education Specialist at the UNICEF Latin America and Caribbean Regional Office.
Author:
June Pomposo Angulo, Gender Consultant.
Technical collaboration and content development (in alphabetical order):
Lina Beltrán, Head of Education at the Bolivia Country Office; Ileana Cofino, Education Specialist at the Guatemala Country Office; Luisa Martinez Cornejo, Gender and Development Officer at the Peru Country Office; Gabriela Mora, Youth and Adolescent Development Officer and Gender Focal Point at the Brazil Country Office; Inti Tonatihu Rioja Guzman, UNV for Youth Skills, Employability and Innovation at the Bolivia Country Office; Vincenzo Placco, Deputy Representative at the Guatemala Country Office.
Translation from Spanish to English by Yvonne Fisher.
Creative Design by Maria Paz Gonzales and Franco Rucabado.
The contents of this document represent the views of the authors and do not necessarily reflect the policies or views of UNICEF. Any reference to a website other than UNICEF does not imply that UNICEF guarantees the accuracy of the information contained therein or that it agrees with the views expressed therein.
UNICEF does not support any company, brand, product or service.
Full reproduction of the contents of this document is permitted only for research, advocacy and education purposes, provided that it is not altered and appropriate attribution is given to UNICEF. This publication cannot be reproduced for other purposes without prior written permission from UNICEF. Requests for permission must be addressed to the Communication Unit, email@example.com
Suggested citation: United Nations Children’s Fund, Digital and STEM skills for girls at a glance: Latin America and the Caribbean, UNICEF, Panama City, 2023.
© United Nations Children’s Fund (UNICEF)
March 2023
Latin America and the Caribbean Regional Office
Building 102, Alberto Tejada St.
City of Knowledge
Panama City, Republic of Panama
PO Box 0843-03045
Phone: + 507 301 7400
www.unicef.org/lac
Contents
Introduction
Background and context
Educational challenges for girls and adolescent girls in Latin America and the Caribbean
Girls’ potential
UNICEF’s response: The Skills4Girls programme in the Latin America and the Caribbean region
Bolivia
Brazil
Guatemala
Peru
An opportunity to invest in girls
Outcomes so far
Proposals for the Future
Key resources
Bibliography
UNICEF’s global programme Skills4Girls was launched in 2020 with the aim of providing girls and adolescent girls with skills and competencies so they can access opportunities now and in the future, preparing them for the socioeconomic challenges of the 21st century. Activities within the Skills4Girls programme focus on STEM (science, technology, engineering and math) education, digital literacy, social entrepreneurship and life skills.
Skills4Girls is active in 22 countries, with the support of public-private partners, including companies such as Chloé, Clé de Peau Beauté, Dove, Gucci and Pandora that have funded and sponsored ongoing programmes. As of 2022, it is estimated that the programme has had a direct impact on approximately 40,000 girls and adolescent girls.
The core approach to bringing about gender-transformative outcomes that improve the lives of girls in a tangible way is a strong commitment to girls as the leading players in designing and implementing solutions that meet their needs and interests. The programme aims to bridge the gap between the skills girls need to become part of the future workforce and those they have traditionally been able to access.
As one of the five specific priorities for girls’ empowerment within the UNICEF Gender Action Plan 2022-2025, investments in girls’ education and skills are a critical pathway to decent work and empowerment.
In addition, the new Adolescent Girls Programme Strategy 2022-2025\(^1\) describes how UNICEF mainstreams gender equality in all of its operations and programmatic work, committing through this and other programmes to transformative actions that foster, inter alia, adolescent girls’ learning and skills, with the aim of promoting their rights and meeting their many and diverse needs.
---
\(^1\) United Nations Children’s Fund, *Adolescent Girls Programme Strategy*, UNICEF, October 2022
The programme contributes to UNICEF’s overall approach to facilitate gender equality and the empowerment of adolescent girls, especially in the transition towards equal participation in the workforce, contributing to the outcome of the Strategic Plan (SP) and the Gender Action Plan (GAP) of “reaching out to 6.5 million girls with programmes on skills for employability, learning, personal empowerment and/or active citizenship”.
Throughout this document, an analysis is carried out of the impact that the Skills4Girls programme has had on the reality faced by adolescent girls in Latin America and the Caribbean, especially in the four countries in which the programme is currently active (Bolivia, Brazil, Guatemala and Peru) and in other countries, such as Mexico and Colombia, that are currently conducting skills development activities specifically for girls. Likewise, it explores the opportunities the programme offers adolescent girls to develop their full potential and access better employment and social opportunities, providing successful examples of STEM skills acquisition, entrepreneurship, life skills and advocacy initiatives that challenge and do away with gender roles and stereotypes.
Finally, it delves into the benefits of investing in the training of adolescent girls as a key element to overcome the challenges and barriers of gender inequality and discrimination experienced by girls in our region.
For millions of girls and adolescent girls, gender inequality combined with poverty and other disadvantages limits their freedom of choice and their access to the resources needed for a decent life. On a girl’s path to adulthood there are many barriers related to education, sexuality, participation and the social environment. Cultural and social expectations, child marriage, the risk of pregnancy, violence, and lack of access to resources also determine the opportunities girls may have. Promoting and enhancing their empowerment and leadership is part of UNICEF’s mandate to counter inequalities and pave the way for girls and adolescent girls to prosper.
In this regard, it is essential to recognize that the gender gap in skills, especially related to STEM, is linked to historically rooted cultural aspects, in which stereotypes, models and prejudices tend to pigeonhole women and girls in certain roles and occupations. Thus, structural inequalities and gender biases can restrict girls’ choices when deciding on the careers they want to study and the types of jobs they aspire to hold or have access to. In Latin America and the Caribbean, girls and boys are guided to choose areas of study in a segregated manner traditionally considered “female” or “male”. Girls are guided to value and prioritize household chores and family care, as well as to choose highly feminized professions linked especially to the health, education or services sectors and/or to low-paying professions on the labour market. Boys, on the other hand, are expected to be the providers and main economic support of their families in the future, so they are directed towards more physically demanding professions and/or with greater workloads.
This has negative effects on girls’ employment opportunities and on their choice of the technical specializations needed to train adequately for today’s economic landscape, which increasingly values and requires STEM and digital skills. We live in an increasingly digitized world, in which many daily activities, such as employment, socialization and participation in socioeconomic life, call for digital skills and access to technological tools.
Despite this growing demand for digital qualifications and skills in society and the labour market, women lack equal opportunities to be trained in these skills, which means that future technologies will be designed, produced and managed predominantly by men.
To promote the inclusion of more women in technology-related training and careers, it is essential to foster an ecosystem that sparks the interest, presence and participation of girls and young women in STEM and digital fields from an early age, and to promote inclusive educational activities with special emphasis on secondary education and vocational technical training.
---
\(^2\) Economic Commission for Latin America and the Caribbean, *The care society. A horizon for sustainable recovery with gender equality*, ECLAC Chile, November 2022.
\(^3\) Ibid.
Educational challenges for girls in Latin America and the Caribbean
Latin America and the Caribbean is one of the youngest regions in the world: 30 percent of the region’s total population is under 18 years old.\(^4\) This generation has the opportunity to contribute positively to change and promote future scenarios that are innovative, inclusive and respectful of the environment and people.
Education is a key lever to bring about great benefits for the economy, human well-being and the environment. Investing in the human capital of this young generation in Latin America and the Caribbean would result in great benefits for the region and for the future of its societies. Greater attention and investment in today’s girls, boys and adolescents is required to ensure a prosperous region in the future.\(^5\)
Unfortunately, secondary school dropout among adolescents continues to be a major unresolved issue across the region: an estimated four out of every 10 students drop out of secondary education\(^6\) because of factors such as the lack of economic resources, as well as factors affecting girls and adolescent girls to a greater extent, such as child marriage and early unions, child pregnancy and early motherhood. At present, in Latin America, between 10% and 25% of women have become mothers before the age of 18, and 22% of women between the ages of 20 and 24 in the region were married or in a stable union before they turned 18.\(^7\)
---
\(^4\) United Nations Children’s Fund, *Children in Latin America and the Caribbean: 2020 Overview*, UNICEF Latin America and the Caribbean, October 2020.
\(^5\) United Nations Children’s Fund, *Reimagining education “Reimaginar la educación y el desarrollo de habilidades para niños, niñas y adolescentes en América Latina y el Caribe”*, December 2021.
\(^6\) Information center for the improvement of learning.
According to available data from 14 countries in the region,\(^8\) nearly 50% of girls and adolescents who become pregnant or who are mothers under the age of 14 drop out of school, and between 67% and 89% of adolescent mothers do not attend school (CLADEM, 2016).
On the other hand, in recent years, and especially as a result of the pandemic, technological change has accelerated. Digital transformation is a phenomenon that will continue to spread across all walks of life, becoming a necessary condition not only for ensuring the right to education, even in emergency settings, but also for the jobs of the future. This has created opportunities, but it has also opened new paths that deepen inequalities, particularly barriers to the access, use and ownership of new technologies.
\begin{quote}
Inclusive digitization must be a means to achieve sustainable development and gender equality, so it is important to assess the possibility of transforming educational systems to adapt them to digital demands, and make them more flexible, inclusive and innovative, reaching out to all the Region’s girls, boys and adolescents.\textsuperscript{9}
\end{quote}
\textsuperscript{7} Economic Commission for Latin America and the Caribbean, \textit{The care society. A horizon for sustainable recovery with gender equality}, ECLAC Chile, November 2022.
\textsuperscript{8} Argentina, the Pluri-national State of Bolivia, Brazil, Colombia, Dominican Republic, El Salvador, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Puerto Rico, and Uruguay.
\textsuperscript{9} ECLAC, \textit{The care society}
In this regard, the region is one of the most unequal in the world: millions of girls and boys still do not have access to the Internet or to technological devices, especially those living in rural areas or indigenous communities. Data show that 51% of students (over 32 million) between the ages of three and 17 in the region do not have an Internet connection at home (UNICEF, 2020). Although many governments provided distance learning opportunities during the COVID-19 pandemic in 2020-2021, the digital divide left behind girls, boys and adolescents from rural areas, indigenous and migrant communities, as well as persons with disabilities. It is estimated that 40% of students did not have access to distance education and therefore had fewer chances of digital learning and a higher risk of dropping out of school (UNICEF, 2022).
COVID-19 exacerbated these obstacles: many girls and boys lagged behind in their studies, and others abandoned them altogether. At the end of 2020, UNICEF estimated that 3.1 million girls, boys and adolescents in the region were at risk of dropping out of school due to the pandemic\(^{10}\). Girls were especially affected: they continued to shoulder heavy burdens of care and household chores, became pregnant or entered early marriages, and were among those hardest hit by economic constraints.
The overload of care tasks that girls undertake affects their access to opportunities and the exercise of fundamental rights, such as quality education, decent work, enjoyment of leisure time, and the promotion of health and well-being. It also reduces the time available for the physical, social, cognitive and emotional activities that contribute to girls’ comprehensive development and the exercise of their autonomy (ECLAC, 2022).
\(^{10}\) United Nations Children’s Fund, EDUCATION ON HOLD A generation of children in Latin America and the Caribbean are missing out on schooling because of COVID-19, UNICEF Latin America and the Caribbean, December 2020.
Therefore, the cultural patterns that link women and girls to care activities, together with the lack of flexible programmes promoting their education, have an impact on their life choices and their access to opportunities for comprehensive development. According to 2019 ECLAC data, only 34.6% of STEM graduates in the region were women. Women and girls also face gender-based beliefs and expectations at school and in the workplace that limit adolescent girls’ aspirations and influence their participation, performance and progression in STEM-related fields. This clearly limits their access to the in-demand, quality jobs of the future and their ability to adapt to ongoing changes and technological advances at work.
Girls’ potential
Girls are a global example of strength and resilience: they have the ability to drive progress and positively transform their families, schools and communities. If adequate resources are invested in developing their skills and competencies, they will be able to generate positive change in society, now and in the future, and become the leaders who govern the world.
Digital skills and social entrepreneurship are a critical path towards decent work that results in economic independence, and professional and personal empowerment for girls.
Twenty-first-century employment centers on new technologies in science, technology, engineering and mathematics (STEM) and on entrepreneurship, since 90% of jobs around the world require digital skills.
When girls study STEM-related subjects, they are prepared to solve global problems and challenge gender stereotypes; they gain confidence, agency and skills to face everyday challenges. For girls to have the opportunity to learn and succeed in science, technology and engineering, we need to reinvent educational systems.
Through the Skills4Girls programme, UNICEF aims to do away with gender stereotypes, boost girls’ employability and socioeconomic insertion, contribute to building public education policies, and help reduce the gender-based digital divide and the STEM gap.
The Skills4Girls initiative, led by UNICEF since 2020, contributes to developing girls’ skills and empowering them, preparing them to face the challenges of the 21st century, facilitating their transition to the labour market and enabling them to exercise active citizenship.
Secondary education, learning and skills for girls, including STEM skills, are among the outcomes of the Gender Action Plan that UNICEF prioritizes to address the challenges faced by girls.
Investing in education and skills development for girls can transform the vulnerabilities they face regarding their current and future opportunities, with ripple effects for their families, other girls, and future generations.
The programme is currently operational in 22 countries, with a girl-centered approach: developing skills through innovative initiatives and methodologies, carrying out advocacy activities to break down stereotypes and give visibility to girls’ successful experiences, and generating evidence through studies and consultations that demonstrate the importance of continued investment in girls’ skills.
The programme intentionally focuses on girls’ education based on two strategies:
a) Advancing girls’ education and learning, particularly in STEM (science, technology, engineering and mathematics), and developing digital skills, including digital literacy and safety and security.
b) Overcoming gender stereotypes in learning, access to learning opportunities, participation, empowerment through girl-centered approaches, transferable skills, problem solving, entrepreneurship, networking and mentoring.
It adopts a methodology that compiles successful approaches and models to promote girls’ abilities through empowerment and through flexible, meaningful learning that is targeted at girls and adapted to their realities and contexts.
**The objectives of the programme focus on:**
1. Placing girls and adolescent girls at the center through their meaningful participation in design, implementation, monitoring and learning.
2. Promoting and developing skills that enable them to participate on equal grounds and transition towards employment, including STEM skills, social entrepreneurship and transferable skills.
3. Tailoring approaches to their needs: flexible learning, safe spaces, mentoring, internships, access to technology, soft skills development, and leadership.
4. Sharing successful learning cases and encouraging networking to expand global dialogue and collective impact.
The Skills4Girls programme’s advocacy and public-private partnerships strategy has focused on investing in the development of girls’ skills and on working collaboratively to disseminate messages and raise awareness concerning the power of girls as change agents and leaders in proposing solutions to current challenges.
In the coming months, the aim is to continue expanding and replicating the programme in other communities and countries across the region, using successful experiences as a precedent for scaling up. In addition, through communication and advocacy campaigns, UNICEF aims to give ongoing visibility to the programme’s success and progress, opening new spaces for dialogue and collaboration with decision makers. Finally, the short-term purpose is to continue strengthening partnerships with the public and private sectors in order to keep moving forward and generating positive outcomes.
**Where is it implemented?**
At present, the Skills4Girls programme is operational in four countries in Latin America and the Caribbean, namely, Bolivia, Brazil, Guatemala and Peru, with the support of the following private donors: Chloé, Pandora, Shiseido and Dove, and partnerships with the private and public sector of the countries in which the programme is being implemented.
Other countries in the region conduct activities to promote girls’ skills in the different fields of education, adolescence and gender and may be proposed to become a part of the S4G programme in the future, as is the case of Mexico and Colombia (characteristics of their programmes are described below).
BOLIVIA
Develop the scientific and technological skills of adolescent girls.
BRAZIL
Mental health care of adolescents through social networks.
GUATEMALA
Secondary education delivered under a flexible modality for adolescents from rural and indigenous areas.
PERU
Programming and web design skills, social-emotional skills, and employability for adolescent girls in vulnerable situations.
Note: This map is stylized, and it is not to scale. It does not reflect a position by UNICEF on the legal status of any country or territory or the delimitation of any frontiers.
Bolivia
Eduardo Ruiz, UNICEF Bolivia. 2021.
Programme
Since 2019, UNICEF Bolivia has been conducting multilevel interventions to reduce the gender gap in STEM-related areas, ensuring that girls can develop the skills they need for the future, working in partnership with actors from different public and private sectors. The country office’s programmatic efforts include: (i) activities to develop girls’ innovative capacities, such as Technovation, a technological entrepreneurship contest in which girls solve social problems in their communities by developing a mobile application or digital project; the Chicas Waikiris Bootcamp, a training programme to promote the development of scientific and technological skills; and the RoboTICas programme for building robot prototypes, among others; (ii) advocacy activities to break down stereotypes and give visibility to girls’ successful experiences through high-level events, thematic conversations and communication campaigns; and (iii) partnership-building with key institutions and stakeholders, together with research to determine the causes and magnitude of the gender gap in access to technology in the country.
Programme quick facts
- Target audiences: 7 to 18-year-old girls and adolescent girls
- Current scope: 6,690 girls and adolescent girls have developed their robotics, digital and computer skills.
- Programme interventions: Technovation technology contest, Bootcamp Chicas Waskiris training programme, RoboTICas robot-building programme (the name includes the Spanish acronym for ICT), science camps, communication campaigns
- Skills component: scientific and technological skills
Brazil
Manuela Cavadas, UNICEF Brasil, 2019
Programme
Since 2019, in partnership with Dove, a personal care products company, UNICEF Brazil has been strengthening the empowerment of adolescents by developing, disseminating and managing a chatbot script on body confidence and self-esteem. The project reinforces adolescents’ self-esteem and body confidence through online conversations with a chatbot on the Facebook Messenger platform. Over 2,000 adolescents have participated in workshops to develop and validate the chatbot script, sharing their opinions and the concerns of Brazilian youth about self-esteem and body image. To date, over two million adolescents have interacted with the chatbot, acquiring skills related to the positive management of their emotions and gaining greater self-confidence and self-esteem, particularly adolescent girls.
\footnote{Total number of adolescents who have downloaded and used the Topity chatbot.}
Guatemala
Programme
Through the Dreamcatcher (atrapasueños) initiative, UNICEF Guatemala, with funds from the jewelry company Pandora, aims to empower adolescent girls from rural and indigenous areas so that they become innovators and social entrepreneurs.
The initiative is targeted to adolescents between the ages of 15 and 19 who live in the mountainous region of the country, whose families depend on agriculture for their livelihood and where employment opportunities are especially limited.
UNICEF Guatemala’s actions aim at offering alternative and flexible education programmes, thereby reducing the risk of absenteeism and school dropout. To promote educational continuity, the programme includes activities to develop skills and abilities through innovative, alternative educational methodologies that meet the individual needs and contexts of adolescents. These include ICT skills, psychosocial support, recreational and artistic activities, the promotion of healthy habits and lifestyles, leadership training and citizen participation to address community problems, as well as conflict resolution and resilience.
Advocacy activities with the Ministry of Education, together with communication campaigns within the communities, are yielding good outcomes in motivating families to enroll their children in the flexible education programme and in combating school dropout.
---
\(^{12}\) Number of girls who have directly participated in programme activities such as training sessions and workshops since the programme started up in 2021.
Peru
Programme
With the support of the Japanese cosmetics multinational Shiseido, UNICEF Peru has been implementing a programme to empower girls in digital and STEM skills since January 2021. The programme’s objective is to strengthen and promote a set of educational policies with and for adolescent girls aged 16 to 17 who have experienced early motherhood, and for vulnerable adolescents interested in STEM-related professional careers, in order to improve their current and future social and labour inclusion.
In partnership with Laboratoria, a center for technological studies, UNICEF developed and implemented a training programme in STEM and digital skills, new technologies and life skills. This training programme is a part of an expanded offer of the Technical-Productive Education Centers of Peru’s Ministry of Education (CETPRO). Digital skills include programming and software development, as well as the bootcamp programme that teaches adolescent girls JavaScript. So far, 935 adolescent girls have benefited from the programme.
---
\(^{13}\) Number of adolescent girls who have directly participated in programme activities (Bootcamp, Hackaton, +Chicastec) since the programme started up in 2020.
An opportunity to invest in girls
If the conditions and means are created for girls to explore and make the most of their talent, they will be given the opportunity to make their dreams come true and live a full life as an active part of society.
Investing in girls and adolescents has a ripple effect that benefits their families, their communities and society. There is no limit to what girls can achieve if they are given the chance to develop their inherent skills and if the discrimination and inequality that stand in their way are ended. They have shown leadership in many fields, bringing about great changes with a positive impact at all levels: social, environmental, cultural, and in areas such as sports and technology.
UNICEF thus works to transform the lives of girls and adolescents together with them, listening to their needs and demands, investing in their education, supporting them in their transition to the labour market, promoting and developing their inherent skills, providing visibility to their leadership and giving them the opportunity to participate and voice their opinions.
Outcomes so far
Since the S4G programme started up in Bolivia, Brazil, Guatemala and Peru, it has brought progress and new opportunities, especially for girls and adolescents in vulnerable situations and disadvantaged socioeconomic contexts, those from rural and indigenous areas, those who have dropped out of school, and those who are pregnant or have become mothers:
1. Development of skills for girls:
- Specific skills have been developed among girls through innovative initiatives in fields such as science, technology, engineering, mathematics, robotics and digitization. (See: Statements in the next section)
- Life skills were also developed with an intercultural approach, boosting girls’ capacity to solve complex problems through creativity and innovation, leadership and participation, self-esteem, resilience, healthy habits and lifestyles, decision making and entrepreneurship.
- Training programmes were developed using flexible and innovative learning methodologies adapted to the specific educational needs of girls, respecting their pace, their social and cultural context and their specific characteristics.
- Collaboration took place with young professional women who work in STEM-related fields who provided guidance and mentoring, and served as inspiration and motivation for girls and adolescents.
- Teachers were trained in innovative educational methodologies and in the use of digital and technological tools, as well as being sensitized about how important it is to have education free of stereotypes and gender violence.
- Technological tools and platforms were also provided so that both girls and teachers were able to continue learning online.
- Platforms were designed and developed in collaboration with girls so that they can communicate and share information in a safe and accessible manner. Through these platforms, girls have gained greater self-esteem, confidence in their bodies, and an emotional support network.
With all this, girls with fewer opportunities were able to go back to school to continue and complete their studies, specialize in professional areas and discover new vocations.
2. Advocacy for the elimination of gender stereotypes
**Communication campaigns**
- **Commemoration of the Day of Girls in Science**
- **Partnerships with national media**
Advocacy activities carried out to eliminate stereotypes and give visibility to successful experiences allowed girls and adolescents to express their interests, demands and achievements. Social organizations, civil society, public and private institutions, and the media have become involved, allowing for greater impact and media coverage. (See: Delivery of Technovation awards, Bolivia, and webinar in collaboration with the newspaper *El Comercio*, Peru.)
In addition, activities were carried out with teachers and families to raise awareness of the power of girls as agents of change and provide visibility to their leadership in proposing solutions to current challenges.
3. Evidence generation:
- Studies and consultations with the population were conducted to find out the status of the gender-based digital divide, its causes and consequences. (See: Exploratory study on the gender-based digital divide among adolescents in Peru and U-Report Brazil on self-esteem)
- Data obtained through the consultations were analyzed to draw relevant conclusions that could influence different scenarios and key stakeholders.
- The outcomes and findings of the studies and consultations were disseminated in the media and on communication platforms to raise awareness of the gender-based digital divide. They also served to maintain dialogue with key stakeholders, positioning the topic on public agendas and giving rise to commitments to action by government entities. (See: Press release on access to technologies. Bolivia)
Partnerships were set up with public institutions to influence their policies and to design learning methodologies adapted to girls and adolescents, ensuring the scalability and sustainability of educational programmes.
| Country | Girls and adolescent girls: Direct | Girls and adolescent girls: Indirect* | Teachers, families and/or communities: Direct | Teachers, families and/or communities: Indirect* |
|---------|------------------------------------|----------------------------------------|-----------------------------------------------|--------------------------------------------------|
| Bolivia | 4,458 | 36,500 | 77 | 877,092 |
| Brazil | 29,962 | | 19,976 | |
| Guatemala | 1,494 | 4,482 | 2,521 | 3,481 |
| Peru | 935 | | | |
*Achieved through advertising, communication campaigns, social media posts
---
\(^{14}\) Mainly users of the app Topity.
MIRANDA
7 years old, Bolivia
I would like to tell other girls around the world that they are very strong and could do much greater things than other people think. May they never give up.
MARTA
15 years old, Guatemala
Thanks to the programme I will be able to finish studying and work as a teacher in my community.
I encourage other young women to study because learning has changed my life.
FERNANDA
15 years old, Peru
I was unsure about what I wanted to do in the future; now, thanks to the Bootcamp for girls, I know that I want to become a web developer.
GABRIELLE
18 years old, Brazil
I felt very good participating in this experience; very real issues that matter to young people were discussed. Now we must be multipliers of this experience to reach more people.
Proposals for the future
To advance the S4G portfolio in the region, it will be important to continue working at different levels and based on several strategies:
a) Manage information and internal knowledge
- Give visibility to the programme’s strengths and benefits through internal UNICEF communication platforms, publicizing the actions carried out in countries across the region so that activities can be replicated elsewhere and outcomes maximized for the region’s girls and adolescents.
- Exchange experiences between countries of the region and between regions through audiovisual dissemination activities (webinars, blogs, podcasts, infographics) with a view to sharing good practices, challenges and lessons learned.
- Explore and map strategies and programmes being developed at the global level on the acquisition of STEM, entrepreneurship and life skills for girls and adolescents, which can serve as examples for designing new activities.
- Use the platforms created to record information, and use the data obtained to analyze the needs and perceptions of youth and guide UNICEF’s next steps.
b) Raise girls’ voices
- Listen to and consider girls’ demands and priorities in the design and execution of programmes, promoting their participation in consultations and conversations with UNICEF programme teams and their partners. This will make it possible to adapt activities so that they respond adequately and effectively to girls’ needs.
- Generate spaces (online and in-person) to provide visibility to and disseminate the voices of girls, emphasizing their achievements and positioning them as role models.
- Promote and strengthen partnerships between national and regional organizations of girls and adolescents focused on STEM, in order to join forces, create spaces for exchanging information, and share mentoring and good practices.
c) Raise awareness of societies and families
- Give visibility to the programme’s strengths and benefits to raise awareness of its importance for the development and well-being of girls and, consequently, for the socioeconomic development of society.
- Continue to advocate publicly for the importance of girls’ education and their access to the professional labour market.
d) Promote partnership with private and public institutions
- Set up partnerships with the public sector, especially the education sector, to design flexible and innovative educational plans and increase the allocation of financial and human resources to ensure the programme’s scaling up and sustainability.
- Continue investing in evidence generation to present to government institutions, proposing public policies targeted at reducing the gender-based digital divide.
- Publish and widely disseminate the outcomes of the studies, and present data on the return on investment in girls to attract new private funders.
- Set up partnerships with technology companies to invest in infrastructure and provide free, accessible wireless connections in the most remote places, so that everyone living there can have access.
- Continue working in collaboration with public and private institutions to promote competitions for technological ideas and research projects, financing transformative, innovation-based ideas with social impact (support for female youth entrepreneurship).
- Promote partnerships with leading technology companies to offer mentoring and internship programmes for young professional women.
- Establish collaboration with private companies to develop the professional skills needed to access the labour market (career mentoring, curriculum development, skills training to pass selection processes, access to exclusive employment channels).
e) Build on digital media work
- Establish partnerships with digital media and social media influencers to design and publish creative and innovative communication campaigns that challenge gender stereotypes, make girls’ agency and leadership visible, and promote their access to STEM-related professions.
- Given the boom in the use of social media and technological platforms, especially among young audiences, harness these spaces to share relevant information of interest to youth and to create safe, accessible, violence-free spaces for exchange and psycho-emotional support.
f) Strengthen capacities of teachers and schools
- Invest in training education professionals to raise awareness of gender stereotypes and the gender-based digital divide.
- Promote non-formal education workshops for girls on STEM and digital-related topics at schools.
- Through UNICEF’s GIGA programme, continue working so that all state-run schools in the region have access to the Internet and digital tools.
Key resources
Global
- UNICEF website: Skills4Girls Girl-centered solutions for unlocking the potential of adolescent girls
https://wcmsprod.unicef.org/gender-equality/skills4girls?auHash=EkfOwj016t6hHzpyg3B1ZS9IG_Sfp-07wQeFkQDLwQU
- Brief document: Skills4Girls Portfolio Girl-Centered, Generational Impact:
https://www.unicef.org/documents/skills4girls-portfolio-girl-centered-generational-impact-brief
- UNICEF Internal sharepoint: Skills4Girls Portfolio
https://unicef.sharepoint.com/sites/PD-Gender/SitePages/Copy-Skills-for-girls.aspx
- ESAR Sharepoint: Skills4Girls Portfolio in ESAR
https://unicef.sharepoint.com/teams/ESAR-Education/SitePages/Skills4Girls.aspx
- EAPR Sharepoint: Skills4Girls Portfolio in EAPR
https://unicef.sharepoint.com/sites/PD-Gender/SitePages/East-Asia-and-Pacific.aspx
Regional
Initiatives for the development of STEM skills of adolescent girls in the LAC region. UNICEF Regional Office for Latin America and the Caribbean and UNICEF Argentina
Link
Peru
Exploratory study on digital gender gaps in adolescent population in Peru. February 2022 (only available in Spanish)
Virtual contest: Laboratoria Talent Fest powered by UNICEF - Demo Night (only available in Spanish)
MAS CHICAS TECH virtual webinar in collaboration with the newspaper El Comercio.
News: Fernanda and the "yes" of girls to the technological world (only available in Spanish)
https://www.unicef.org/peru/historias/fernanda-y-el-si-de-las-chicas-al-mundo-tecnologico
News: Women can be great web programmers (only available in Spanish)
https://www.unicef.org/peru/historias/las-mujeres-podemos-ser-grandes-programadoras-web
Colombia
Video of the programme in Quibdó (Chocó) (only available in Spanish)
Guatemala
Press release: Pandora launches new charm to support UNICEF’s work and provide educational opportunities for adolescent girls in Guatemala (only available in Spanish)
Link
Video: Life story of Telma Castro (only available in Spanish)
Link
Video: #CharmsforChange of Pandora (English subtitles)
Link
Bolivia
Press Release: International Day of the Girl Child 2019 states that the strength of girls is surprising and unstoppable (only available in Spanish).
https://www.unicef.org/bolivia/comunicados-prensa/el-día-internacional-de-la-niña-2019-afirma-que-la-fuerza-de-las-niñas-es
Press release: Technology education should include girls and adolescent girls (only available in Spanish).
https://www.unicef.org/bolivia/comunicados-prensa/la-educación-en-tecnología-debe-incluir-las-niñas-y-adolescentes-mujeres
News: UNICEF and AGETIC launch robotics course with scholarships for 800 girls (only available in Spanish).
https://www.unicef.org/bolivia/comunicados-prensa/unicef-y-agetic-lanzan-curso-de-robotica-con-becas-para-800-niñas-y-adolescentes
Memory VII CONVERSATORIO. Designing the future: science and technology in the hands of girls. June 4, 2019 Cochabamba, Bolivia.
https://docplayer.es/169170340-Memoria-vii-conversatorio-disenando-el-futuro-la-ciencia-y-la-tecnologia-en-manos-de-las-ninas-4-de-junio-de-2019-cochabamba-bolivia.html
Executive summary: Mapping and analysis of the ecosiSTEM of girls in STEM in Bolivia (internal document, only available in Spanish)
Link
Concept note: Empowering girls in STEM (internal document, only available in Spanish)
Link
Textbook for primary education: Exploring. The world of science (internal document, only available in Spanish)
Link
Website: Technovation - Innovation Bootcamp (only available in Spanish)
Link
Video Chicas Waskiris – 2020 (only available in Spanish)
Link
Video: VII Talk #niñez360: Designing the future: science and technology in the hands of girls – 2019 (in Spanish)
Video: Talk of girls in STEM with the vice president of Bolivia – 2019 (only available in Spanish)
Video: Technovation Awards, ChicasTech – 2019 (only available in Spanish)
Life story: Teresa – Bootcamp Programming 2022 (only available in Spanish)
Life story: Miranda – RoboTICas 2022 (only available in Spanish)
Life story: Yessica created a prototype to monitor irrigation of crops in her community – 2022 (only available in Spanish)
Press release, Topity 2022: Topity+: UNICEF launches guide to empower the use of Topity with adolescents and young people (only available in Portuguese)
https://www.unicef.org/brazil/comunicados-de-imprensa/unicef-lanca-guia-para-potencializar-o-uso-do-topity-com-adolescentes-e-jovens
Press release on mental health and Topity, 2022: UNICEF supports the mental health of more than 50,000 adolescents and young people with Pode Falar and Topity (only available in Portuguese)
https://www.unicef.org/brazil/comunicados-de-imprensa/unicef-apoia-saude-mental-de-mais-de-50-mil-adolescentes-e-jovens
Website with Topity Materials (only available in Portuguese)
https://www.unicef.org/brazil/topity-um-chatbot-para-melhorar-sua-autoestima
Topity Guide – 2022 (only available in Portuguese)
Link
Webinar "How to develop self-esteem and self-confidence" – 2021 (only available in Portuguese)
Link
Leaflets distributed to teachers and adolescents presenting UNICEF mental health content. (only available in Portuguese)
Link
© United Nations Children’s Fund (UNICEF)
March 2023
Latin America and the Caribbean Regional Office
Building 102, Alberto Tejada
City of Knowledge
Panama City, Republic of Panama
PO Box 0843-03045
Phone: + 507 301 7400
www.unicef.org/lac
NUMERICAL INVESTIGATION OF SHORT ELLIPTICAL TWISTED TUBE FOR REDUCED FOULING RATE IN STEAM CRACKING FURNACES
B. Indurain¹,², F. Beaubert¹, D. Uystepruyst¹, S. Lalot¹ and M. Couvrat²
¹ LAMIH UMR CNRS 8201, Polytechnic University Hauts-de-France, 59300, Valenciennes, France. email@example.com (corresponding author)
² Manoir Industries, 27108, Val de Reuil Cedex, Pitres, France
ABSTRACT
In the steam cracking of natural gas or naphtha, fouling of tubular heat exchangers by coke is one of the biggest issues affecting both the yield of valuable products and the life span of the tubes composing the furnace. Coke builds up on the tube walls, and this growing carbonaceous layer has two major negative effects: 1) it increases the pressure drop and 2) it reduces heat transfer from the tube wall to the processed fluid. Increasing the wall shear stress yields higher friction forces at the wall of the tube, which could reduce the coking rate. Previous studies have shown that minor changes to a tube's cross-section can enhance both wall shear stress and heat transfer by generating a decaying swirling flow. Using the open-source CFD software OpenFOAM, this study numerically investigates the wall shear stress and pressure drop performance of the decaying swirling flow generated by different elliptical cross-section twisted tubes. One of the objectives is to determine whether minor modifications of the tube geometry can generate a swirling flow that enhances wall shear stress at a reduced pressure drop penalty. For Reynolds numbers ranging from 10,000 to 100,000, it is shown that the investigated geometries can enhance heat transfer by 90% at an increased pressure drop of 128%, which yields a Performance Evaluation Criterion (PEC) of 1.44. The comparison between the performances of the different geometries is carried out using a newly defined PEC based on the bulk temperature, along with the usual PEC.
INTRODUCTION
Steam cracking of naphtha and ethane produces about 85% of the olefins made in the world, such as light olefins (ethylene, propylene, butene...) and aromatics [1]. The cracking reaction takes place within the tubes of the steam cracking furnaces at very high temperatures (above 1000 K) and produces the aforementioned products but also coke on the wall of the tubes [2, 3]. This growing carbonaceous layer has several negative effects. First, coke build-up decreases the cross-sectional area of the gas flow, resulting in higher pressure drop and a loss of ethylene selectivity [4]. Secondly, the low thermal conductivity of coke weakens the heat transfer from the tube wall to the process gas. Consequently, the heat input is raised to counteract the increased heat transfer resistance, leading to a higher tube metal temperature (TMT) and a still higher coking rate. Eventually, either due to an excessive pressure drop over the reactor or due to metallurgical constraints of the reactor tube alloy, production needs to be halted to decoke the reactor [5]. For obvious economic reasons, the coking rate must be slowed down. To that end, metallurgical developments of the tubes [6, 7] or three-dimensional reactor designs are used to enhance heat transfer, resulting in lower wall temperatures and/or higher wall shear stress, and thus in reduced coking rates, as deduced from the well-known Ebert and Panchal model (see e.g. [8]). Designs can be divided into two classes based on the physical mechanism of heat transfer enhancement: increased internal surface area or enhanced mixing.
Van Goethem et al. [9] numerically studied the heat transfer and pressure drop of an air flow in several heat transfer enhancers, among them straight and helical internally finned tubes. For Reynolds numbers from 80,000 to 350,000, they reported that these increased-surface technologies enable average increases in heat transfer of 51% and 66% respectively, compared with a straight tube, yet at the expense of average pressure drop increases of 67% and 92%. The better heat transfer performance of the helically finned tube is linked to its ability to generate a swirling flow, which improves the mixing of the gas and leads to a more effective and more homogeneous heating of the process gas.
Swirling flow increases mixing in the fluid core section and results in increased shear stress at the wall. The studies conducted by Torigoe et al. [10] and Györffy et al. [11] focused on the heat transfer and pressure drop performances of a single start internally ribbed tube called Mixing Element Radiant Tube (MERT) patented
by Kubota in 1995 (see www.kubotamaterials.com/products/mert.html for a brief description). They found with the latest version of the MERT that the heat transfer is improved by up to 40% while the pressure drop is increased by up to 210%.
Although heat transfer is enhanced with the previous technologies, this comes at the cost of a tremendous pressure drop increase, mainly due to the material added at the tube surface. However, swirling flows can be generated by other means, such as deforming the tube shape, as with the Swirl Flow Tube (SFT) developed by Technip [12, 13]. Van Goethem et al. [9] studied this tube design experimentally and numerically. The increased mixing is obtained by changing the shape of the tube from a straight to a small-amplitude helical tube. Their results showed that, for Reynolds numbers ranging from 30,000 to 120,000, the SFT achieves a good balance between heat transfer enhancement and pressure drop penalty, with a 33% increase in both. These results show that a mere modification of the tube geometry can lead to a power-efficient swirling flow.
Tubes with elliptical cross-sections have been widely studied, and some of this research has focused on the heat transfer and pressure drop performance of twisted elliptical tubes. Tan et al. [14] conducted a parametric study of a twisted elliptical tube and reported that this kind of geometry offers an excellent Performance Evaluation Criterion (PEC), as defined by Webb and Eckert [15], within the studied Reynolds number range, with the highest PEC reaching 1.725. It can also be concluded from their study that the greater the aspect ratio of the ellipse, the higher the PEC; the same holds for the twist pitch of the tube, but only up to a given value. This last result shows that a continuous swirling flow can become less efficient if it is maintained over too long a distance. Thus, after reaching a fully developed state, the swirling flow should be allowed to decay rather than increase the pressure drop further.
This paper presents some results of a numerical investigation on the heat transfer and pressure drop performance of a developing swirling flow generated by a short length twisted tube with elliptical cross-section (SETET) and decaying downstream of the SETET in a tube with a circular cross-section. Several configurations of the SETET are studied in order to find the configuration which provides the highest heat transfer enhancement at the lowest pressure drop increase.
2. SHORT ELEMENT OF TWISTED ELLIPTICAL TUBE
The numeric test bench for the simulations consists of a tube composed of different elements. First of all, there is a twisted elliptical tube (TET) whose hydraulic diameter $D_h$ is defined as:
$$D_h = \frac{4A}{E}$$ \hspace{1cm} (1)

where $A$ is the cross-sectional area and $E$ the perimeter of the ellipse.
The twist operation of the elliptical cross-section consists of both a translation over a distance $P$ and a $2\pi$ rotation about the tube axis. The length of the TET is $L_{TET} = 20D_h$ and its inlet is taken as the origin of the axial coordinate ($z=0$). Upstream of the TET there are two elements: a transition tube and a tube with a circular cross-section. The latter has the same hydraulic diameter as the TET and a length $L_{up} = 40D_h$; its purpose is to provide a developed flow before the TET. The transition tube ensures a smooth transition between the circular and elliptical cross-sections over a length $L_{tr} = 4D_h$. Downstream of the TET, the same elements are used, but the length of the tube with a circular cross-section is $L_{down} = 32D_h$, so that the total length of the test bench is $L = 100D_h$. Since $L_{TET}$ is only one fifth of $L$, the TET is renamed here a short length twisted tube with elliptical cross-section (SETET). The computational domain is shown in figure 1.

Fig. 1. Sketch of the numeric test bench with $P = 10D_h$.
The geometric parameters of this study are the twist pitch $P$ and the aspect ratio $c$ of the ellipse. As shown in table 1, several twist pitches are tested for a given aspect ratio.

Table 1. Geometric configurations of the SETET.
| Cases | P | c |
|-------|-------|---|
| Case 1-1 | 20$D_h$ | 0.6 |
| Case 1-2 | 10$D_h$ | 0.6 |
| Case 1-3 | 5$D_h$ | 0.6 |
For every case, five different Reynolds numbers (Re) are tested, ranging from 10,000 to 100,000, where Re is defined as:
$$Re = \frac{\rho U_b D_h}{\mu}$$ \hspace{1cm} (2)

where $\rho$ is the fluid density, $U_b$ the bulk velocity and $\mu$ the dynamic viscosity.
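Eqs. (1) and (2) amount to a short calculation. The ellipse perimeter has no closed form, so the sketch below uses Ramanujan's approximation, and the semi-axis values are illustrative; both are assumptions, not taken from the paper.

```python
import math

def ellipse_hydraulic_diameter(a: float, b: float) -> float:
    """Eq. (1): D_h = 4A/E for an ellipse with semi-axes a and b.

    The perimeter E is estimated with Ramanujan's approximation
    (an assumption; the paper does not state which formula it used)."""
    area = math.pi * a * b
    h = ((a - b) / (a + b)) ** 2
    perimeter = math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))
    return 4.0 * area / perimeter

def reynolds(rho: float, u_b: float, d_h: float, mu: float) -> float:
    """Eq. (2): Re = rho * U_b * D_h / mu."""
    return rho * u_b * d_h / mu

# Sanity check: for a circle (a == b == r) the hydraulic diameter is exactly 2r.
d_circle = ellipse_hydraulic_diameter(0.01, 0.01)

# Illustrative elliptical section (hypothetical semi-axes) with the air
# properties of Table 3 and a hypothetical bulk velocity of 10 m/s.
d_h = ellipse_hydraulic_diameter(0.012, 0.0075)
re = reynolds(1.205, 10.0, d_h, 1.91e-5)
```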
3. SET UP OF THE NUMERICAL SIMULATIONS
The simulations are performed using OpenFOAM, an open-source, object-oriented numerical simulation toolkit developed in C++ and released under the GPL license by the OpenFOAM Foundation [16]. As no experimental data are yet available for the investigated tube configuration, part of the numerical study conducted by Tang et al. [17] on the heat transfer and pressure drop performance of the flow in a twisted tube with elliptical cross-section is reproduced here.
3.1 Test case of Tang et al. [17]
The parameters of the elliptical cross-section are the major and minor axes of the ellipse, respectively $a=0.024$ m and $b=0.015$ m. The numerical test bench, shown in figure 2, is composed of a TET and two straight tubes with elliptical cross-sections upstream and downstream of it. The hydraulic diameter of the TET is given by Tang et al. as $D_h=0.02$ m, and the twist pitch is $P=10D_h$. Finally, the length of the TET is $L=4P$.
![Fig. 2. Numeric test bench used by Tang et al. [17]](image)
3.2. Boundary conditions and numerical schemes
The flow is considered steady, incompressible, turbulent with heat transfer and the flowing fluid is water. Considering the boundary conditions, a constant bulk velocity $U_b$ based on the desired Reynolds number is imposed at the inlet along with a constant temperature $T_0=300$ K. A constant wall temperature $T_w=350$ K is imposed, and a no-slip condition is applied for the velocities. At the outlet, a constant pressure is imposed while null fluxes for the velocity and the temperature are set.
The governing equations (continuity, momentum and energy) are discretized using a second-order bounded linear-upwind scheme. The pressure-velocity coupling is handled by the SIMPLE algorithm. A k-$\omega$ SST turbulence model with a low-Re approach is used, as in the numerical work of Tang et al. [17]. At the inlet, the turbulent kinetic energy $k$ is set from the turbulent intensity, and the specific dissipation rate $\omega$ is determined from the calculated value of $k$ and the turbulent mixing length of the case. The corresponding formulas, along with the governing equations, can be found in Robertson et al. [18]. At the outlet, a null-flux boundary condition is imposed for both turbulent quantities, and fixed values are imposed at the wall.
3.3. Validation of the numerical procedure
To quantify the pressure drop $\Delta p$ along the TET, the friction factor coefficient $f$ is used and is defined as:
$$f = 2 \frac{\Delta p D_h}{\rho U_b^2 L}$$ \hspace{1cm} (3)
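Rearranging Eq. (3) gives the pressure drop corresponding to a given friction factor; the numbers below are purely illustrative.

```python
def pressure_drop(f: float, rho: float, u_b: float, length: float, d_h: float) -> float:
    """Invert Eq. (3): dp = f * rho * U_b**2 * L / (2 * D_h)."""
    return f * rho * u_b ** 2 * length / (2.0 * d_h)

# Illustrative values (not from the paper): water-like density, 1 m of tube.
dp = pressure_drop(f=0.02, rho=1000.0, u_b=1.0, length=1.0, d_h=0.02)
```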
The heat transfer along the TET is quantified using the Nusselt number (Nu) defined as:
$$Nu = \frac{h D_h}{\lambda}$$ \hspace{1cm} (4)
Heat transfer is calculated using a thermal energy balance between the inlet and outlet of the TET and the Nusselt number can be rewritten as:
$$Nu = \frac{\dot{m} c_p (\bar{T}_o - \bar{T}_i)}{\pi \lambda L \, \Delta T_{LMTD}}$$ \hspace{1cm} (5)
where:
$$\Delta T_{LMTD} = \frac{\bar{T}_o - \bar{T}_i}{\ln \left( \frac{T_w - \bar{T}_i}{T_w - \bar{T}_o} \right)}$$ \hspace{1cm} (6)
with $\bar{T}_i$ and $\bar{T}_o$ the bulk temperatures at the inlet and outlet of the TET and $T_w$ the wall temperature.
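The energy-balance form of the Nusselt number can be sketched in a few lines. The snippet uses the standard constant-wall-temperature form of the log-mean temperature difference; the mass flow rate, fluid properties and tube length below are illustrative assumptions, not the paper's values.

```python
import math

def lmtd(t_w: float, t_in: float, t_out: float) -> float:
    """Log-mean temperature difference for a constant wall temperature t_w."""
    return (t_out - t_in) / math.log((t_w - t_in) / (t_w - t_out))

def nusselt(m_dot: float, c_p: float, t_in: float, t_out: float,
            lam: float, length: float, t_w: float) -> float:
    """Nu from an energy balance between inlet and outlet (Eq. (5) form)."""
    return m_dot * c_p * (t_out - t_in) / (math.pi * lam * length * lmtd(t_w, t_in, t_out))

# Illustrative case: T_w = 350 K as in the boundary conditions; the flow
# values (mass flow, water-like c_p and lambda, 1 m length) are hypothetical.
dt_lm = lmtd(350.0, 300.0, 320.0)
nu = nusselt(m_dot=0.05, c_p=4180.0, t_in=300.0, t_out=320.0,
             lam=0.6, length=1.0, t_w=350.0)
```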
The results of the comparison between the simulations of the present study and the experimental data of Tang et al. [17] are shown in table 2. It can be observed that the maximum differences between the results of the simulations and the results from Tang et al. for Nu and f are respectively 6.3% and 5.2%. Those differences are sufficiently small to consider that the adopted numerical procedure is suitable to correctly predict heat transfer and pressure drop of a swirling flow generated by a twisted tube with elliptical cross-section. Therefore, this numerical procedure is adopted to study the different geometric configurations of SETET presented in table 1.
Table 2. Comparison between the results from the simulations of the present study and the experimental data from Tang et al [17].
| Re | 20,000 | 18,000 | 16,000 | 14,000 |
|------|--------|--------|--------|--------|
| $f_{Tang}$ [17] | 0.0282 | 0.0294 | 0.0305 | 0.0322 |
| f | 0.0268 | 0.0279 | 0.0298 | 0.0311 |
| Rel. dev | 5.0% | 5.2% | 2.2% | 3.2% |
| $Nu_{Tang}$ [17] | 102.2 | 93.7 | 85.9 | 77.4 |
| Nu | 95.7 | 90.8 | 83.0 | 78.3 |
| Rel. dev | 6.3% | 3.1% | 3.4% | 1.1% |
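The relative deviations of Table 2 can be recomputed directly from the tabulated values. Since the exact rounding convention used in the table is not stated, the sketch below only checks that the deviations stay within the few-percent range reported in the text.

```python
# Values copied from Table 2 (reference data of Tang et al. [17] vs simulation).
f_tang = [0.0282, 0.0294, 0.0305, 0.0322]
f_sim = [0.0268, 0.0279, 0.0298, 0.0311]
nu_tang = [102.2, 93.7, 85.9, 77.4]
nu_sim = [95.7, 90.8, 83.0, 78.3]

def rel_dev(ref, sim):
    """Relative deviation of each simulated value from the reference value."""
    return [abs(r - s) / r for r, s in zip(ref, sim)]

max_f_dev = max(rel_dev(f_tang, f_sim))
max_nu_dev = max(rel_dev(nu_tang, nu_sim))
```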
4. NUMERICAL SIMULATIONS OF THE SETET
The same numerical configuration as in the validation process of section 3 is kept here. However, the working fluid is now air, whose thermodynamic properties are summarized in table 3. Within the range of temperatures encountered, these properties are assumed constant.
Table 3. Thermodynamic properties of air at T=300 K [19]
| Property | Value |
|----------------|-----------|
| $c_p$ (J/kgK) | 1006 |
| $\lambda$ (W/mK) | 0.024 |
| $\mu$ (Pa.s) | 1.91e-5 |
| $\rho$ (kg/m³) | 1.205 |
Although the numerical procedure has been validated, a mesh independence study is undertaken with case 1-1 to ensure that the results of the simulations are not mesh sensitive.
4.1. Mesh independence test
The meshing process uses cfMesh, an open-source meshing software developed by Dr. Franjo Juretic [20]. Three Cartesian unstructured meshes were tested, each with a refined near-wall region consisting of 11 additional mesh layers. From finest to coarsest, they are denoted mesh 1, mesh 2 and mesh 3, with 10,890,272, 7,632,400 and 5,330,459 cells respectively. Mesh 2 is obtained with the same meshing parameters as the validation case.
The Grid Convergence Index (GCI) method developed by Roache [21] is adopted here to conduct the mesh independence test. It is a generalization of Richardson extrapolation and provides a uniform measure of convergence for grid refinement studies. The GCI value indicates how closely the solution approaches its asymptotic value and is defined as:
$$GCI_i = \frac{e_{i+1} - e_i}{e_i (r^\alpha - 1)} \quad (7)$$
where the grid refinement ratio $r$ in this study is $r=1.43$ and $\alpha$ is computed as follows [21]:
$$\alpha = \frac{\ln \left( \frac{e_1 - e_2}{e_2 - e_3} \right)}{\ln (r)} \quad (8)$$
For the sake of brevity, the GCI method is not fully described here but can be found in [21, 22]. Both global and local quantities are investigated with the GCI method. The global quantities are the friction factor $f$ and the Nusselt number Nu. The local quantities are averaged values in a plane located at $z^*=34$: the skin friction coefficient $C_f$ and the bulk temperature $T_b$, respectively defined as:
$$C_f = 2 \frac{\tau_w}{\rho U_b^2} \quad (9)$$
and
$$T_b = \frac{1}{U_b A} \int_A T u_z dA \quad (10)$$
The results of the grid refinement study are summarized in table 4. The GCI between meshes 1 and 2 is lower than the GCI between meshes 2 and 3, except for the Nusselt number, for which it nevertheless remains low. The results of the simulations are therefore less prone to change between mesh 1 and mesh 2. Furthermore, it is computationally less expensive to run simulations with mesh 2, so this mesh was chosen for all the other simulations of the parametric study.
Table 4. Order of accuracy and Grid Convergence Index (GCI) for several flow quantities and for the three meshes.
| Quantity | $\alpha$ | GCI$_2$ (%) | GCI$_1$ (%) |
|----------|----------|-------------|-------------|
| f | 7.5 | 3.1 | 0.21 |
| Nu | 2.4 | 0.28 | 1.1 |
| $C_f$ | 2.95 | 1.44 | 0.5 |
| $T_b$ | 2.6 | 0.35 | 0.14 |
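The Richardson-extrapolation machinery behind Eqs. (7) and (8) can be sketched as follows. The snippet uses the standard formulation (finest mesh indexed 1, observed order from the ratio of successive solution differences, safety factor $F_s = 1.25$); the paper's exact indexing convention and safety factor are not stated, so these are assumptions. The input solutions are synthetic, constructed to converge at exactly second order.

```python
import math

def observed_order(f1: float, f2: float, f3: float, r: float) -> float:
    """Observed order of accuracy from three solutions (f1 on the finest mesh),
    standard Richardson form: p = ln((f3 - f2) / (f2 - f1)) / ln(r)."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1: float, f2: float, r: float, p: float, fs: float = 1.25) -> float:
    """Grid Convergence Index on the fine mesh, with safety factor fs."""
    eps = abs((f2 - f1) / f1)
    return fs * eps / (r ** p - 1.0)

# Synthetic solutions converging exactly at order 2: f(h) = 1 + 0.5 * h**2,
# with the study's refinement ratio r = 1.43.
r = 1.43
h1 = 0.1
f1, f2, f3 = (1.0 + 0.5 * (h1 * r ** i) ** 2 for i in range(3))

p = observed_order(f1, f2, f3, r)   # recovers the constructed order of 2
gci = gci_fine(f1, f2, r, p)        # fine-mesh GCI, here well below 1%
```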
5. RESULTS AND DISCUSSION
5.1. Flow field
For the three cases, the swirling flow is generated in the same way: as the flow progresses through the SETET, it acquires a tangential velocity component due to the geometry curvature. The lower the twist pitch of the SETET, the greater the curvature and therefore the higher the intensity of the swirling flow, as can be seen in figure 3. In a cross-section of the SETET located at $z^*=10$, which corresponds to half the length of the twisted tube, the maximum dimensionless azimuthal velocity $U_\theta^*$ is 0.10, 0.28 and 0.53 for cases 1-1, 1-2 and 1-3 respectively. As the twist pitch decreases, the maximum dimensionless tangential velocity increases significantly, which leads to greater mixing of the fluid, a longer flow path and an improved heat exchange between the swirling flow and the wall of the tube. In those same cross-sections, the mean deviation angle of the flow, calculated between the tube axial direction and the flow velocity vector, is 9.2°, 16.9° and 24.5° for the three cases respectively. Again, the lower the twist pitch, the higher the mean deviation angle, meaning that the wall of the twisted tube deflects the axial flow more efficiently, yielding an intense swirling motion.
Fig. 3. Swirling flow in the different SETET. Case 1-1 (top), case 1-2 (middle) and case 1-3 (bottom) are all at Re=100,000.
In order to quantify the intensity of the swirling flow, a specific quantity is usually adopted and defined by Kitoh [23] as the swirl number $S$:
$$S = \frac{\int_0^R r^2 u_\theta u_z \, dr}{R \int_0^R ru_z^2 \, dr} \quad (11)$$
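Eq. (11) can be evaluated numerically from radial profiles of $u_\theta$ and $u_z$. The sketch below uses a simple trapezoidal rule and a solid-body-rotation profile ($u_\theta = \Omega r$, uniform $u_z$), for which $S$ reduces analytically to $\Omega R / (2 U_z)$; both the profile and the discretisation are illustrative assumptions, not the paper's post-processing.

```python
def trapz(y, x):
    """Trapezoidal integration of samples y over abscissae x."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0 for i in range(len(x) - 1))

def swirl_number(r, u_theta, u_z):
    """Eq. (11): S = int(r^2 u_theta u_z dr) / (R * int(r u_z^2 dr))."""
    num = trapz([ri ** 2 * ut * uz for ri, ut, uz in zip(r, u_theta, u_z)], r)
    den = r[-1] * trapz([ri * uz ** 2 for ri, uz in zip(r, u_z)], r)
    return num / den

# Solid-body rotation test profile: u_theta = Omega * r, uniform axial velocity U.
n, R, omega, U = 2001, 1.0, 2.0, 1.0
radii = [R * i / (n - 1) for i in range(n)]
s = swirl_number(radii, [omega * ri for ri in radii], [U] * n)
# Analytical value for this profile: Omega * R / (2 * U) = 1.0
```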
The evolution of $S$ from $z^*=0$ to $z^*=51$ for the three cases at the highest Reynolds number is shown in figure 4, where the vertical bars delimit the different phases of the swirling flow, discussed in the following. Three major phases can be noticed: 1) from $z^*=0$ to $z^*=20$, the generation and development of the swirling flow; 2) at $z^*=20$, the transition between the SETET and the exit tube with circular cross-section, where there is a large drop of the swirl number; and 3) from $z^*=24$ to $z^*=51$, the decay of the swirling flow. In phase 1, there are large differences in swirl number between the three cases, and the slopes also differ greatly from one another. The sharpest slope is achieved with case 1-3, meaning that the swirling flow develops rapidly inside this SETET. For every case, there is a dramatic drop of the swirl number at $z^*=20$, caused by the end of the curved geometry and the rather sharp transition between the elliptical and circular cross-sections. Moreover, the higher the swirl number, the larger the drop of $S$.
Besides, it can also be observed that however long the SETET of case 1-1 might be, it will never generate a swirling flow whose $S$ is as high as that of the SETET of case 1-2 after $10D_h$ or of case 1-3 after $5D_h$. In addition, considering only one twist pitch of each SETET, the shorter the pitch $P$, the greater the value of $S$.
Fig. 4. Evolution of the swirl number along the TET and downstream for the cases 1-1, 1-2 and 1-3 at Re=100,000.
Ultimately, thanks to the logarithmic scale, it is readily observable that all three swirling flows decay at the same rate in phase 3: the calculated decay rates of cases 1-1 to 1-3 are 0.04, 0.041 and 0.042 respectively. This behaviour is shown in figure 5, which is built as follows. The evolution of $S$ from case 1-3 is used and its last value, at $z^*=51$, is stored. Then the value of $S$ from case 1-2 closest to the stored value is found and added to the graph, along with the subsequent values of $S$ from case 1-2, and the last value is stored again. The operation is then repeated between cases 1-1 and 1-2.
Fig. 5. Reconstruction of the Swirl number from the three cases at Re=100,000.
Although the pitch of the SETET has a tremendous influence on the development of the swirling flow, it has very little influence on its decay. Figure 5 also illustrates that, with case 1-3, the swirling flow could last over almost $70D_h$ downstream of the SETET.
5.2. Pressure Drop
The swirling flows generated by the three SETET differ greatly, as previously seen in figures 3 and 4. Hence it is important to study the evolution of the friction factor with the Reynolds number for each SETET. Figure 6 shows the friction factor ratio between the SETET ($f$) and a straight tube ($f_p$), where $f_p$ is calculated with the Petukhov correlation:

$$f_p = \left(0.79 \ln(Re) - 1.64\right)^{-2}$$ \hspace{1cm} (12)

while $f$ is computed using Eq. (3) between $z^*=0$ and $z^*=51$.

**Fig. 6.** Evolution of f/f_p with Re for cases 1-1 to 1-3.
The transition between the elliptical and circular cross-sections might cause a large pressure drop due to a sudden change of the flow topology, as depicted by the sharp variation of the swirl number in phase 2 in figure 4. The pressure losses of case 1-2 are higher than those of case 1-1 but remain in the same range, especially compared with case 1-3, where the pressure drop is tremendously higher.
### 5.3. Heat Transfer
The amount of heat exchanged between the fluid and the wall also depends greatly on the swirling flow. The bulk temperature $T_b$, calculated with Eq. (10), and its evolution from $z^*=0$ to $z^*=51$ for every case, shown in figure 7, indicate how effectively the swirling flow transports heat from the wall to the bulk flow.

**Fig. 7.** Evolution of T_b along the TET and downstream for cases 1-1 to 1-3 and for the straight tube at Re=100,000.
The swirling flow generated by the SETET of case 1-3 is more effective at increasing the bulk temperature than the two other SETET. Although this increase is not very large, it means that the swirling flow enables a more effective heat transfer between the wall of the tube and the flow. As a consequence, for a given heat flux at the wall of the tube, the increased heat transfer has two major effects in steam cracking furnaces. First, it results in a diminution of the wall temperature, which, according to the coking model of Plehiers [24], yields a lower coking rate. Secondly, because of the imparted swirling motion, it leads to a more uniform radial temperature distribution, which in turn decreases the secondary reactions that also participate in the coke formation [25, 26]. Furthermore, more heat is transported by the rotating fluid, as shown in figure 8, which displays the ratio between the global heat transfer obtained with an enhanced geometry (Nu) and the heat transfer obtained in a straight tube (Nu_p), where Nu_p is calculated with the Gnielinski correlation:
$$Nu_p = \frac{(f_p/8)(Re - 1000)\,Pr}{1 + 12.7\,(f_p/8)^{0.5}\left(Pr^{2/3} - 1\right)}$$ \hspace{1cm} (13)
where Pr is the Prandtl number, defined as:

$$Pr = \frac{\mu c_p}{\lambda}$$ \hspace{1cm} (14)
The Nusselt number is calculated from Eq. (5) and the heat balance is done between z*=0 and z*=51.

**Fig. 8.** Evolution of Nu/Nu_p with Re for cases 1-1 to 1-3.
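The two plain-tube correlations (Petukhov for $f_p$, Gnielinski for $Nu_p$) can be evaluated together. A minimal sketch using the Table 3 air properties; the Reynolds number below is simply the upper bound of the studied range:

```python
import math

def petukhov_f(re: float) -> float:
    """Petukhov friction factor for a smooth straight tube."""
    return (0.79 * math.log(re) - 1.64) ** -2

def gnielinski_nu(re: float, pr: float) -> float:
    """Gnielinski correlation for the plain-tube Nusselt number."""
    fp = petukhov_f(re)
    return (fp / 8.0) * (re - 1000.0) * pr / (
        1.0 + 12.7 * math.sqrt(fp / 8.0) * (pr ** (2.0 / 3.0) - 1.0)
    )

# Air at 300 K (Table 3): Pr = mu * c_p / lambda, close to 0.8.
pr_air = 1.91e-5 * 1006.0 / 0.024
f_p = petukhov_f(1.0e5)
nu_p = gnielinski_nu(1.0e5, pr_air)
```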
It can be observed that cases 1-1 and 1-2 provide approximately the same heat transfer enhancement, both below that of case 1-3. All the SETET are efficient at enhancing heat transfer, and the improvement grows with the Reynolds number, except for case 1-1 at Re=100,000, where there is a sudden drop of Nu/Nu_p. The improved heat transfer and the relatively low pressure drop increase of the twisted tube are of particular interest for the steam cracking industry because, again, they lead to a lower TMT and, provided that coke deposition occurs at the wall where the temperatures are highest, to a lower coking rate [13, 26, 27]. Similarly, considering the Ebert and Panchal model, the enhanced heat transfer of the SETET reduces the deposition term by increasing the heat transfer coefficient in the thermal boundary layer and by reducing the wall temperature, thus leading to a diminution of the fouling rate [8].
Even though every investigated SETET generates a swirling flow which increases the pressure drop, it also significantly enhances the heat transferred between the wall and the flowing fluid.
To quantify the energetic efficiency of the SETET, the Performance Evaluation Criterion (PEC), proposed by Webb et al. [15] is used here and is defined as:
\[
PEC = \frac{Nu}{Nu_p} \cdot \frac{\sqrt[3]{f}}{f_p}
\]
(14)
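Eq. (14) compares the heat-transfer gain against the pressure-drop penalty at equal pumping power. A minimal sketch, using illustrative Nu and f values rather than results from the paper:

```python
def pec(nu, nu_p, f, f_p):
    """Performance Evaluation Criterion, Eq. (14):
    PEC = (Nu / Nu_p) / (f / f_p) ** (1/3).
    PEC > 1 means the heat-transfer enhancement outweighs the
    pressure-drop increase at constant pumping power."""
    return (nu / nu_p) / (f / f_p) ** (1.0 / 3.0)

# Illustrative: a 30% Nusselt gain at twice the friction factor
# still yields PEC > 1 because f enters only with exponent 1/3.
example = pec(130.0, 100.0, 0.04, 0.02)
```

The cube root on the friction-factor ratio is what allows the strongly swirling case 1-3, despite its higher pressure drop, to reach the same PEC as the milder geometries.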
The evolution of the PEC for the different cases is shown on figure 9.
**Fig. 9.** Evolution of PEC with Re for cases 1-1 to 1-3.
Although the SETET of case 1-3 provides better heat transfer performance at the cost of a higher pressure drop than the two other SETET, it can be observed that all the studied geometries have approximately the same PEC, with slightly higher values for the SETET with the greatest twist pitch. Considering that the SETET of case 1-3 generates the most intense swirling flow, which increases both the wall shear stress and the heat transfer, at the same PEC as the two other SETET, it is concluded that this SETET is the most interesting configuration studied.
6. THE BULK TEMPERATURE RELATED PEC
In the steam cracking industry, the temperature of the processed gas is a prime parameter for achieving a better selectivity toward highly valuable products (ethylene, propylene, etc.). The higher the bulk temperature, the better the selectivity; moreover, the lower the radial temperature gradient, the lower the coking rate [25].
It was seen in section 5.3, on figure 7, that every SETET configuration generates a swirling flow which yields a higher \( T_b \) along \( z^* \) than a turbulent flow in a straight tube. Thus, the tube length required in cases 1-1 to 1-3 to reach the same bulk temperature as at the end of a straight tube, here at \( z^*=51 \), is shorter. This length is denoted \( L_{eq} \), and new calculations of the friction factor and the Nusselt number with Eqs. (3) and (4), respectively, between \( z^*=0 \) and \( z^*=L_{eq} \) are performed to determine the PEC associated with this reduction of tube material needed to achieve the same \( T_b \) as in a straight tube. This new number is denoted \( PEC_b \).
The values of \( L_{eq} \) for every case are shown in table 5, from which it is clear that the SETET of case 1-3 achieves the same \( T_b \) as a straight tube within a dramatically shorter distance. The total tube length with the SETET of case 1-3 could thus be reduced by a maximum of nearly 65% at \( Re=80{,}000 \) and by a minimum of 17% at \( Re=100{,}000 \). It is also surprising to observe such an increase of \( L_{eq} \) between these two Reynolds numbers for case 1-3. It must also be noted that the two other SETET are quite inefficient at reducing the tube length required to obtain the same \( T_b \) as in a straight tube.
Table 5. \( L_{eq} \) obtained with cases 1-1 to 1-3 at every Reynolds number.
| Re | \( 10^5 \) | \( 8\times10^4 \) | \( 5\times10^4 \) | \( 3\times10^4 \) | \( 1\times10^4 \) |
|------|------------|--------------|--------------|--------------|--------------|
| Case 1-1, \( L_{eq} \) | 47.5 \( D_h \) | 48 \( D_h \) | 48 \( D_h \) | 48 \( D_h \) | 34.5 \( D_h \) |
| Case 1-2, \( L_{eq} \) | 46.5 \( D_h \) | 47 \( D_h \) | 47 \( D_h \) | 46 \( D_h \) | 35 \( D_h \) |
| Case 1-3, \( L_{eq} \) | 42.5 \( D_h \) | 18 \( D_h \) | 19 \( D_h \) | 20.5 \( D_h \) | 29.5 \( D_h \) |
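The length savings quoted in the text can be reproduced directly from Table 5. The sketch below takes the straight-tube reference length \( z^*=51 \) and the case 1-3 \( L_{eq} \) values from the table:

```python
L_REF = 51.0  # straight-tube reference length, in hydraulic diameters (z* = 51)

# L_eq (in D_h) for case 1-3, keyed by Reynolds number (Table 5)
L_EQ_CASE_1_3 = {100_000: 42.5, 80_000: 18.0, 50_000: 19.0,
                 30_000: 20.5, 10_000: 29.5}

def length_reduction_pct(l_eq, l_ref=L_REF):
    """Percentage of tube length saved while reaching the same bulk
    temperature as at the end of the straight reference tube."""
    return 100.0 * (1.0 - l_eq / l_ref)

savings = {re: round(length_reduction_pct(l), 1)
           for re, l in L_EQ_CASE_1_3.items()}
# About 64.7% at Re = 80,000 and 16.7% at Re = 100,000, matching the
# "nearly 65%" and "17%" figures quoted in the text.
```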
The evolution of the \( PEC_b \) for the different cases is shown on figure 10.
**Fig. 10.** Evolution of \( PEC_b \) with Re for cases 1-1 to 1-3.
From this figure it can be observed that the \( PEC_b \) of case 1-2 is below 1 at \( Re=10{,}000 \) and that of case 1-3 is below 1 at the two lowest Reynolds numbers, meaning that these solutions cause more pressure drop than heat transfer enhancement. At higher Reynolds numbers, however, the \( PEC_b \) becomes greater than 1, and the generated swirling flows are thus energetically efficient.
CONCLUSION
This work presents a numerical study of different geometries of short-length twisted elliptical tubes (SETET) used to induce a swirling flow, each followed by a transition tube and a tube with a circular cross-section. Three SETET configurations of identical length are studied,
the tested parameter being the twist pitch $P$ of the SETET. The Reynolds number ranges from 10,000 to 100,000, and from the results of the study the following conclusions can be drawn:
- With this kind of configuration, the swirling flow can be decomposed into three phases: developing, transitioning, and decaying.
- The SETET with the shortest twist pitch features the highest swirl number $S$ and the best heat transfer enhancement at a given Re but also the highest pressure loss.
- The better heat transfer, which leads to a higher bulk temperature, is expected to yield a lower coking rate according to the coking model of Plehiers [24].
- The PEC values of the three cases lie in the same range and are all above 1.
According to this last result, it can be concluded that a minor modification of tube geometry or shape can generate a swirling flow which enhances heat transfer at a relatively low pressure drop penalty and which might also significantly reduce the coking rate.
It was also shown in the last section of this study that the swirling flow generated by the SETET with the lowest twist pitch can reach the same bulk temperature as at the end of a straight tube within a shorter distance. Tube manufacturers could therefore save money by using this SETET configuration and shortening their tubes. Furthermore, more heat is transferred by the generated swirling flow, meaning that the wall temperature is lower; the coking rate could therefore be reduced, extending run lengths and requiring less frequent shutdowns for decoking operations.
Nevertheless, the large drop of swirl intensity in phase 2 is detrimental to the swirling flow, and future work aims at designing another transition that could reduce this swirl intensity gap. Furthermore, to reduce the pressure drop, another parametric study on the aspect ratio $c$ is planned for the most interesting SETET configuration investigated, namely case 1-3.
ACKNOWLEDGMENT
The authors are extremely grateful for the financial support provided by Manoir Industries within the scope of the CIFRE convention 2017/1437.
NOMENCLATURE
| Symbol | Description |
|--------|-------------|
| $A$ | area of tube cross-section, m$^2$ |
| $a$ | major axis of the ellipse, m |
| $b$ | minor axis of the ellipse, m |
| $C_f$ | skin friction coefficient, dimensionless |
| $c$ | aspect ratio of the ellipse, $(b/a)$, dimensionless |
| $c_p$ | specific heat capacity, J/kgK |
| $D_h$ | hydraulic diameter, m |
| $E$ | circumference of tube cross-section, m |
| $e$ | quantity to evaluate for the GCI |
| $f$ | friction factor coefficient, dimensionless |
| $h$ | heat transfer coefficient, W/m$^2$K |
| $k$ | turbulent kinetic energy, J/kg |
| $L$ | total length of the tested tube, m |
| $\dot{m}$ | mass flow rate, kg/s |
| Nu | Nusselt number, dimensionless |
| $P$ | twist pitch, m |
| Pr | Prandtl number, dimensionless |
| Re | Reynolds number, dimensionless |
| $R$ | hydraulic radius, m |
| $r$ | refinement ratio, dimensionless |
| $S$ | Swirl number, dimensionless |
| $T$ | temperature, K |
| TMT | tube metal temperature |
| $U$ | fluid velocity, m/s |
| $U_\theta^*$ | dimensionless tangential velocity, $(U_\theta/U_b)$ |
| $z^*$ | dimensionless axial position $(z/D_h)$ |
| $\alpha$ | order of accuracy for the GCI, dimensionless |
| $\Delta p$ | pressure drop, Pa |
| $\lambda$ | thermal conductivity, W/mK |
| $\mu$ | dynamic viscosity, Pa.s |
| $\rho$ | fluid density, kg/m$^3$ |
| $\omega$ | specific dissipation rate, 1/s |
| $\tau_w$ | wall shear stress, Pa |
Subscripts
| Subscript | Description |
|-----------|-------------|
| 0 | inlet value |
| $b$ | bulk |
| down | downstream tube |
| eq | equivalent |
| LMTD | Logarithmic Mean Temperature Difference |
| i | inlet of SETET |
| o | outlet of SETET |
| p | straight tube |
| TET | twisted elliptical tube |
| tr | transition tube |
| up | upstream tube |
| w | wall |
| z | axial component |
| $\theta$ | tangential component |
Superscripts
| Superscript | Description |
|-------------|-------------|
| $\bar{}$ | area averaged quantity |
REFERENCES
[1] T. Ren, M.K. Patel and K. Blok, Steam cracking and methane to olefins: Energy use, CO$_2$ emissions and production costs, Energy, 33 (2008): 817-833
[2] K. Sundaram and G. Froment, Kinetics of coke deposition in the thermal cracking of propane, Chemical Engineering Science, 34, (1979): 635-644
[3] P. Plehiers, G. Reyniers and G. Froment, Simulation of the Run Length of an Ethane Cracking Furnace, Industrial & Engineering Chemistry Research, 29 (1990): 636-641
[4] K. Kolmetz, J. Kivlen, J. Gray, C. Sim and C. Soyza, Advances in cracking Furnace Technology, Refining Technology Conference, Dubai Crown Plaza Hotel, 2002
[5] L. Vandewalle, D. Van Cauwenberge, J. Dedeyne, K. Van Geem and G. Marin, Dynamic simulation of fouling in steam cracking reactors using CFD, Chemical Engineering Journal, 329 (2017): 77-87
[6] S. Symoens et al., State-of-the-art of Coke Formation during Steam Cracking: Anti-Coking Surface Technologies, Industrial & Engineering Chemistry Research, 57, 48 (2018): 16117-16136
[7] N. Olahová et al., CoatAlloy Barrier Coating for Reduced Coke Formation in Steam Cracking Reactors: Experimental Validation and Simulations, Industrial & Engineering Chemistry Research, 57, 3 (2018): 897-907
[8] I. Wilson, E. Ishiyama and G. Polley, Twenty Years of Ebert and Panchal-What Next?, Heat Transfer Engineering, 38:7-8 (2017): 669-680
[9] M. Van Goethem and E. Jelsma, Numerical and experimental study of enhanced heat transfer and pressure drop for high temperature applications, Chemical Engineering Research and Design, 92 (2014): 663-671
[10] T. Torigoe, K. Hamada, M. Furuta, M. Sakashita, K. Otsubo and M. Tomita, Mixing Element Radiant Tube (MERT) improves cracking furnace performance, In: Proceedings of the 11th Ethylene Producers Conference (1999)
[11] M. Györfy, M. Hineno, K. Hashimoto, S.H. Park, M.S. You, MERT performance and technology update, AIChE Spring Meeting: Ethylene Producers Conference, 20 (2009), Tampa Bay, USA
[12] D. Van Cauwenberge, C. Schietekat, J. Floré, K. Van Geem and G. Marin, CFD-based design of 3D pyrolysis reactors: RANS vs. LES, Chemical Engineering Journal, 282 (2015): 66-76
[13] C. Schietekat, M. Van Goethem, K. Van Geem and G. Marin, Swirl flow tube reactor technology: An experimental and computational fluid dynamics study, Chemical Engineering Journal, 238 (2014): 56-65
[14] X. Tan, D. Zhu, G. Zhou and L. Zeng, Experimental and numerical study of convective heat transfer and fluid flow in twisted oval tubes, International Journal of Heat and Mass Transfer, 55 (2012): 4701-4710
[15] R. Webb and E. Eckert, Application of rough surfaces to heat exchanger design, International Journal of heat and Mass Transfer, 15 (1972): 1647-1658
[16] C.J. Greenshields, CFD Direct Ltd, OpenFOAM User Guide version 4.0, ©2011-2016, OpenFOAM Foundation Ltd
[17] X. Tang, X. Dai and D. Zhu, Experimental and numerical investigation of convective heat transfer and fluid flow in twisted spiral tube, International Journal of Heat and Mass Transfer, 90 (2015): 523-541
[18] E. Robertson, V. Choudhury, S. Bhushan and D. Walters, Validation of OpenFOAM numerical methods and turbulence models for incompressible bluff body flows, Computers and Fluids 123 (2015): 122-145
[19] Y. Çengel and J. Cimbala, Fluid mechanics: fundamental and applications, New York, NY 10020, McGraw-Hill (2006)
[20] Juretic F, cfMesh User Guide (v1.1), 2015
[21] P.J Roache, Perspective: A Method for Uniform Reporting of Grid Refinement Studies, Journal of Fluids Engineering, 116 (1994): 405-413
[22] M. Ali, C. Doolan and V. Wheatley, Grid convergence study for two-dimensional simulation of flow around a square cylinder at a low Reynolds number, 7th International Conference on CFD in the Minerals and Process Industries (2009)
[23] O. Kitoh, Experimental study of turbulent swirling flow in a straight pipe, Journal of Fluid Mechanics, 225 (1991): 445-479
[24] P. Plehiers, D. Froment, Firebox simulation of olefin units, Chemical Engineering Communications, 80 (1989): 81-99
[25] K. Van Geem, G. Heynderickx and G. Marin, Effect of Radial Temperature Profiles on Yields in Steam Cracking, AIChE Journal, 50, 1 (2004): 173-183
[26] C. Schietekat, D. Van Cauwenberge, K. Van Geem and G. Marin, Computational Fluid Dynamics-Based Design of Finned Steam Cracking Reactors, AIChE Journal, 60 (2014), No. 2: 794-808
[27] L. Vandewalle, D. Van Cauwenberge, J. Dedeyne, K. Van Geem and G. Marin, Dynamic simulation of fouling in steam cracking reactors using CFD, Chemical Engineering Journal, 329 (2017): 77-87 |
Design and microalgae. Sustainable systems for cities / Peruccio, Pier Paolo; Vrenna, Maurizio. - In: AGATHÓN. - ISSN 2464-9309. - 2019:6 (2019), pp. 218-227. Palermo University Press. DOI: 10.19229/2464-9309/6212019
DESIGN AND MICROALGAE
Sustainable systems for cities
Pier Paolo Peruccio, Maurizio Vrenna
ABSTRACT
New practices linked to the biological sciences are emerging in the world of design and architecture. In recent years, various interventions have involved the use of living organisms and biomaterials, even in urban contexts. This essay analyzes the projects that have entailed the use of microalgae, tracing their limits and possibilities. Guidelines for the implementation of similar projects at the level of products or small installations are also defined. From the perspective of designing for the benefit of citizens, and given the countless properties of microalgae, solutions of this kind and innovative integrated services could be a way to mitigate the environmental, as well as the social and economic, problems of present and future cities.
KEYWORDS
microalgae, urban production, limits and possibilities, product/service/system, sustainability
Pier Paolo Peruccio, Architect and PhD, is an Associate Professor at the Department of Architecture and Design of Politecnico di Torino (Italy). He is Vice Head of the Design School, Director of the SYDORE (Systemic Design Research and Education) Center in Lyon (France), and Coordinator of the II Level Specializing Master in Design for Arts. He is currently working on several research projects concerning the history of sustainable design, systems thinking, and innovation in design education. Tel. +39 (0)11/030.65.40 | E-mail: email@example.com
Maurizio Vrenna is a PhD candidate in Management, Production, and Design at the Department of Architecture and Design of Politecnico di Torino (Italy). During the professional and academic career in Europe and Asia, he has been involved in the development of sustainable products and services. His current research revolves around the topics of air pollution and microalgae production in urban areas. Mob. +39 340/06.67.133 | E-mail: firstname.lastname@example.org
Cities are places where cultures are born, ideas develop, and economic-productive systems flourish (Braudel, 1984). At the same time, it is in these contexts that the great problems arising from unsustainable growth models, climate change, migrations, and financial crises become apparent (Jacobs, 1961). Urban agglomerations, which will undergo strong demographic and dimensional growth in the coming decades (United Nations, 2018), have a considerable negative impact on natural ecosystems. Working on their sustainability is therefore essential and presents difficulties, but also unique opportunities (Rees and Wackernagel, 1996). Among the many challenges, resilient cities should be able to cope with air pollution, plan water and food supply to compensate for lost agricultural areas, find new uses for abandoned or degraded zones, and support the weakest segments of the population. This is not only with a view to mitigating damage but, above all, to adapting new design proposals to a radical and irreversible change.
Design, today, is called upon to create products, systems, services, and experiences that can lead to a higher quality of life\(^1\). Design should abandon its anthropocentric vision, and «its methods should be directed [...] at reintegrating our relationship with the environment and with all species» (Antonelli, 2019, p. 38). For the first time, a radical approach to design linked to the biological sciences is emerging, incorporating the use of living matter within products, structures, and processes (Myers, 2018), and several designers have presented cutting-edge alternatives, even on an urban scale, involving the use of animals, plants, algae, mosses, fungi, bacteria, and other organic materials\(^2\). This contribution analyzes the characteristics of projects that have made use of microalgae, providing researchers and professionals in the field of design with guidelines for introducing their use in urban experiments. It also suggests possible cross-fertilizations with other fields of study, identifying potentially interested actors. Although it will take time for our culture to change its perspective on the potential of such unusual proposals, these could be the key to adopting long-term visions that can promote sustainable, inclusive, and equitable urban development.
**The potential of microalgae** | Microalgae are photosynthetic organisms of various kinds, characterized by the absence of roots, stems, or leaves, typically found in fresh or salt water. In phycology, the term 'microalga' refers to microscopic algae sensu stricto, also including cyanobacteria (Tomaselli, 2004). Among the most widely cultivated microalgae are Chlorella and Spirulina (Fig. 1). The latter, a blue-green cyanobacterium, has nowadays become a very popular food supplement owing to its high protein content. The properties of algae have been known to humankind since antiquity, and in Mexico Spirulina was used as food by the Aztecs. Locals harvested the fresh alga from Lake Texcoco, dried it in the sun, and sold it in markets in the form of small cakes (Sánchez et alii, 2003). A very similar harvesting method was adopted by the indigenous people of the Kanembu tribe, who ate a food called 'dihé', the result of drying algal biomass on the shores of Lake Chad (Ciferri, 1983; Fig. 2). This traditional practice is still common in the region, and its trade makes an important contribution to the local economy (Abdulqader et alii, 2000).
In developing countries, the cultivation of Spirulina is, moreover, an effective remedy against chronic malnutrition and allows the creation of numerous jobs. In France, too, there is a vast network of producers cultivating Spirulina with artisanal practices\(^3\). Starting in the 1950s, and following the emergence of strong doubts about the effectiveness of conventional agriculture in coping with rampant world hunger (Belasco, 1997), a growing number of experts were attracted by the many possible uses of microalgae. Ambitious studies were funded by major research institutes, and cultivations under controlled conditions were started to investigate their potential in different fields (Garrido-Cardenas et alii, 2018; Fig. 3).
The most recent research has shown that microalgae can be used for human and animal nutrition and for the extraction of components rich in active ingredients, but also for phytodepuration and the production of biofuels and organic fertilizers (Khan et alii, 2018). Owing to their photosynthetic efficiency, microalgae also constitute a promising tool for the mitigation of carbon dioxide in the atmosphere (Singh and Ahluwalia, 2013). Considering the various fields of application and the relative ease of cultivation, microalgae could therefore play an important role within pioneering solutions on multiple levels.\(^4\)
**Experiments, projects, realizations** | In the sphere of design and architecture, experiments using microalgae are rather limited. This section reviews some of the most significant projects launched in the last decade. These works operate from the product level up to the architectural one and were, in many cases, the result of collaboration with biologists and engineers. As far as experiments with pigments are concerned, the Algaeprint Bioprinter prototype is an innovative device that enables digital printing by means of a microalgae-based bio-ink (Sawa, 2016). Similarly, the research group at Living Ink Technologies is working on the commercialization of a disappearing ink (Fig. 4) with which dynamic illustrations can be created\(^5\). The Berlin-based design studio Blonde & Bieber has instead opted to work with textile printing: abstract shapes are impressed on fabric to create unique patterns in variegated colors.\(^6\)
Several systems for the domestic production of Spirulina have also been devised. Worthy of note is Farma (Fig. 5), the work of William Patrick of the MIT Media Lab: a desktop photobioreactor able to produce and filter Spirulina, creating a powder that can be put into capsules. The instructions for building the device were made available online so that it can be replicated independently\(^7\). Living Things, finally, is a 2015 installation by Jacob Douenias and Ethan Frier showcasing futuristic furniture elements that celebrate a symbiotic relationship between people and microorganisms. In this case, Spirulina is cultivated in glass bioreactors embedded in furniture for the kitchen, dining room, and living room (Fig. 6).
In urban contexts, microalgae can be used to build green building envelopes and façades capable of purifying the wastewater of the buildings on which they are installed (Marino and Giordano, 2015). «The unique advantages of these bio-façades, which combine technical and biological cycles, inaugurate an innovative approach to sustainability by integrating environmental, energy, and iconic values»
---
**Fig. 1** Filaments of *Arthrospira Platensis*, also known as Spirulina, under microscopic view. The name derives from its unique form (credit: www.illarassaco.com).
**Fig. 2** A woman harvests sun-dried Spirulina on the sandy shores of Lake Chad. The technique is handed down from mother to daughter (credit: M. Marzot, 2010).
**Fig. 3** First microalgae mass culture experiments on a rooftop at MIT, Massachusetts (credit: J. S. Burlew, 1953).
Another relevant project is Floating Fields in Shenzhen (Fig. 9), which saw the transformation of a disused flour factory into «a productive landscape of water basins, demonstrating how architectural design can integrate themes such as aquaponics, algae cultivation, the water purification cycle, and sustainable food production» (Chung, 2016, p. 35). This area was conceived as a research laboratory on regenerative design, as well as a meeting and recreation point for the community. The architectural and infrastructural integration of microalgae cultivation thus opens new dimensions in the field of sustainability for designers and architects (Proksch, 2013).
Several other structures and pavilions have been designed in recent years. Among them, Algaeavator by Tyler Stevermer and Jie Zhang, Urban Algae Folly (Fig. 10), BIOtech-HUT and Photo.Synth.Etica by ecoLogicStudio, and Algae Dome by SPACE10 (Fig. 11) deserve mention. Equipped with a dome structure that houses a flexible tube inside which Spirulina grows, Algae Dome was exhibited in Copenhagen at the 2017 CHART Art Fair. The aim of the installation was to prompt reflection on the potential of microalgae to counter malnutrition and mitigate global climate change, in an attempt to create a productive space available to citizens. With the cultivated biomass, visionary recipes were created, including Spirulina chips and the Dogless Hotdog (Fig. 12). As for community photobioreactors, the work of Cesare Griffa, who built WaterLilly 3.17 to engage the community, is undoubtedly equally admirable. An economic scenario comprising a set of complementary activities has also been hypothesized to ensure the sustainability of the project (Griffa and Vissio, 2018).
The cases illustrated are extremely innovative and have received fair media attention. Nevertheless, their actual sustainability, in both environmental and economic terms, has yet to be demonstrated. Following this success, many other designers and architects are presenting futuristic concepts that, however, often prove difficult, if not impossible, to implement even with today's technologies (Fig. 13). Therefore, in order to realize projects capable of fully exploiting the properties of microalgae, it is first necessary to have a good knowledge of them. In addition, it is advisable to think systemically in order to grasp potential connections with other domains.
**Analysis methodology** | The literature reveals no relevant comparative studies on projects involving the production of microalgae in urban scenarios. The case-study analysis illustrated below highlights the potential and critical issues of the various realizations, the design trends, and the respective periods of operational duration. In order to obtain as complete an overview as possible, the study considered exemplary cases of different types, including installations, architectural and infrastructural integrations, as well as social projects. The cases examined number 18 and span from 2011 to today.
The production of microalgae in cities has, in fact, much in common with the better-known practices of urban and peri-urban agriculture. Because of these similarities, the case studies were evaluated with a methodology similar to that employed by MADRE (2018) for a selection of good practices in city gardens. This involved determining a number of parameters, compared by means of a composite radar chart. The analysis identified the contribution of each project to six different challenges: the parameters are qualitative and are displayed on a scale from 1 to 3, where 1 expresses a minimal contribution and 3 a significant one. The challenges and the details of the evaluation for each of them follow.
1) Job creation: new work activities related to the maintenance of production plants, cultivation, processing, marketing, promotion, and distribution of products (primary and processed), but also the training of specialized personnel. This is useful for combating poverty in the most degraded areas.
2) Social inclusion: initiatives that directly involve local communities without distinction of gender, age, or ethnicity, aimed at integrating the weakest segments of the population.
3) Educational/outreach support: transmission of principles relating to environmental sustainability, healthy food, and food security to a broad audience. This refers to the education of children, adults, and the elderly through traditional or alternative pedagogical methods. The outreach aspect also takes into account the media resonance of the project or the influx of visitors in the case of exhibitions and fairs.
4) Environmental impact: to qualify the contribution to this challenge, only the quantity of biomass produced, the sustainability of the techniques applied for production and harvesting, and the measures adopted for any distribution were considered.
5) Value creation: retail sales and the creation of a brand are ways to consolidate product quality, allowing consumers to recognize its added value.
6) Activation of synergies in the territory: collaboration with third parties favors the development and maintenance of projects. This generates fruitful interactions with citizens and consumer associations, public authorities at the local and/or regional level, small and medium-sized private enterprises, professionals in the sector, schools, universities, and research centers.
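The scoring scheme above can be sketched as a small aggregation routine. The sketch below is illustrative only: the challenge labels are English paraphrases of the six parameters, and the scorecards in the example are hypothetical, not the 18 cases actually analyzed.

```python
# Six challenges of the evaluation grid; each case study is scored
# qualitatively from 1 (minimal contribution) to 3 (significant).
CHALLENGES = ("job creation", "social inclusion", "education/outreach",
              "environmental impact", "value creation", "local synergies")

def validate(scorecard):
    """Reject scorecards that miss a challenge or leave the 1-3 scale."""
    if set(scorecard) != set(CHALLENGES):
        raise ValueError("scorecard must cover all six challenges")
    if any(v not in (1, 2, 3) for v in scorecard.values()):
        raise ValueError("scores must be 1, 2 or 3")

def overlay(projects):
    """Mean score per challenge across case studies -- the aggregate
    contribution that the composite radar chart visualizes."""
    for p in projects:
        validate(p)
    return {c: sum(p[c] for p in projects) / len(projects)
            for c in CHALLENGES}

# Two hypothetical scorecards (illustrative values only)
a = dict(zip(CHALLENGES, (1, 1, 3, 2, 1, 2)))
b = dict(zip(CHALLENGES, (2, 1, 3, 2, 1, 3)))
mean_scores = overlay([a, b])
```

Overlaying the per-project polygons on one radar chart then amounts to plotting each scorecard on the same polar axes; darker overlap regions correspond to higher aggregate scores.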
**Limits and possibilities of the projects**
The mapping of the case studies also shows their spatial and temporal placement (Fig. 14). It can be noted that the average operational duration of most projects amounts to a few months (in some cases even days), and is therefore relatively limited. Those that stand out for longer operation, even of years, are mainly architectural integrations or activities with a solid business model, where the multidisciplinary contribution of several profiles was fundamental. A longer duration is a significant factor that allows a series of virtuous collaborations to be planned and put into practice. The radar chart (Fig. 15), the result of overlaying the diagrams obtained from the analysis of each individual case study, shows the overall contribution of the projects to each challenge. Darker areas indicate a higher contribution. Experiments so new and outside ordinary canons need to be explained and narrated as much as the properties of microalgae: the educational value of the projects is therefore on average high. Environmental sustainability is a further very important element. However, technical-productive performance often appears inadequate, resulting in aesthetically attractive but inefficient projects. In some cases, moreover, the biomass produced is not even used, as it is not certified for food use.
It is interesting to note that, regardless of duration, there is a tendency to place projects within broader contexts, such as city events and multipurpose areas, almost underlining the need to connect them to the local fabric. With the exception of a few exemplary cases, most do not contemplate the possibility of using microalgae as a vector of economic growth and social integration. These are undoubtedly critical work areas that deserve further investigation. A particularly compelling but scarcely investigated field of research and practice is, finally, that of promotion, with special attention to communicating the values linked to the product and to the supply chain in its broadest sense. The case studies analyzed present important limits but at the same time open up new possibilities. With a view to designing for the benefit of communities, addressing urban problems with resilient thinking, solutions of this kind should operate with a broader spectrum of action and over the long term, favoring their scalability. Such an approach would promote new visions linked not only to the realization of a product but also to economic and social infrastructures in their entirety (Peruccio et alii, 2018).
**Linee guida progettuali**
Di seguito vengono tracciate le linee guida per la progettazione di prodotti, servizi e sistemi adattivi integrati, attuabili non solo in aree urbane e tali da utilizzare le microalghe come forza trainante per incentivare la redditività economica, la sostenibilità ambientale e l’inclusione sociale. Queste indicazioni sono prevalentemente per la progettazione di prodotti o di installazioni urbane dalle dimensioni contenute, sebbene possano essere valide – con le dovute considerazioni – anche per interventi su scala architettonica. In primo luogo è necessario analizzare criticamente le cornici uniche in cui si opera, al fine di identificare le leve per far fronte ai problemi con cognizione di causa e rigore scientifico. La produzione di microalghe non deve essere tuttavia un’opzione calata dall’alto, bensì una risposta adeguata al contesto. Senza entrare a fondo in domini di competenza tipici di altre materie, lo studio delle microalghe è basilare per capirne il funzionamento.
Sulla base delle necessità progettuali è d’uopo identificare le specie più indicate per il caso particolare, preventivando che se usate a scopo alimentare devono essere coltivate in acque incontaminate. Il designer dovrebbe distogliere l’attenzione dalla sola componente materiale del progetto, focalizzandosi sulla definizione di servizi annessi, esperienze e sugli aspetti educativi. Data la scarsa conoscenza del tema a un pubblico di non esperti, è bene fornire nozioni di carattere generale che ne per-
---
**Fig. 6** | *Living Things* is an installation by J. Douenias and E. Frier at the Mattress Factory Museum in Pittsburgh, Pennsylvania, 2015 (credit: E. Frier, 2015).
mettano una maggiore comprensione. La consapevolezza permetterebbe di incoraggiare l’adozione, che a oggi è ancora limitata. La collaborazione con altri professionisti è fortemente consigliata per colmare le lacune disciplinari. Così come è importante il coinvolgimento dei cittadini e delle attività locali quali ristoranti, mense, negozi, scuole e palestre. Anche le istituzioni e le Università potrebbero partecipare in maniera più o meno diretta, fornendo contributi finanziari, scientifici e culturali.
Oltre all’impatto ambientale, bisogna verificare attentamente la sostenibilità economica dei progetti. Per fare ciò è consigliabile rifarsi a modelli dimostratisi vincenti, richiedendo il supporto di specialisti e migliorando i periodi di recupero dell’investimento. Per agevolare la replicabilità, si potrebbe, infine, prevedere di rendere i progetti ‘open source’, ovvero modificabili, adattabili e migliorabili da parte di chiunque. Si consideri, inoltre, che la scala fisica della realizzazione non è proporzionalmente correlata al suo impatto: piccoli prodotti possono cambiare radicalmente la qualità della vita di intere comunità, mentre installazioni più grandi potrebbero richiedere ingenti sforzi economici e gestionali. Per concludere, è bene tenere a mente che lo scopo finale del progetto non deve essere la sola coltivazione delle microalghe a uso commerciale – come d’altronde avviene già in enormi impianti produttivi situati fuori dai centri abitati – ma l’utilizzo di queste per la creazione di valore e l’avvio di un cambiamento urbano positivo per i residenti.
Considerazioni finali | La dimensione architettonica e quella del prodotto, così come l’autoproduzione e la coltivazione urbana, operano su diverse scale e necessitano pertanto di approcci progettuali ben distinti. Nonostante l’implementazione di progetti che prevedono la produzione microalgale non sia immediata a causa delle complessità tecniche, operative e gestionali, ve ne sono diversi in fase di avviamento anche in Italia, come quello di TNE (Torino Nuova Economia) presso gli ex stabilimenti Fiat di Mirafiori (Luise, 2019). Esempi virtuosi come i precedentemente citati Algae Dome e Skyline Spirulina (Fig. 16) dovrebbero essere presi come modello. Skyline Spirulina è il progetto di una start up tailandese che coltiva microalghe sul tetto di un hotel nel centro di Bangkok, per distribuirla sia a clienti alto spendenti che ai più bisognosi (Ortolani, 2016). I presenti progetti che di base posseggono molte differenze, sono in realtà efficaci perché accomunati dal coinvolgimento di più parti (aziende private, istituzioni, Centri di ricerca, ecc.). Per quanto concerne i modelli di business, si potrebbe, inoltre, adattare all’ambiente urbano quelli degli Spiruliners francesi e dei villaggi rurali nei Paesi in via di sviluppo che, a fronte di bassi investimenti e tecnologie elementari, consentono di generare occupazione e di produrre cibo sano e sostenibile in abbondanza.
Coltivazioni microalgali in città sui tetti piani, in aree inutilizzate, in spazi comuni ma anche in ambienti indoor, renderebbero l’aria più pulita e creerebbero nuove zone per l’approvvigionamento alimentare, non in competizione con quelle dedicate alle produzioni agricole tradizionali. La biomassa potrebbe essere utilizzata anche come fertilizzante per orti e giardini. Alla stregua di qualsiasi altra pratica di agricoltura urbana, prodotti, servizi e sistemi analoghi permetterebbero di coinvolgere la popolazione «migliorando l’identità sociale e culturale comune dei cittadini» (Ackerman et alii, 2014, p. 190). In un futuro prossimo le coltivazioni di microalghe potrebbero vedere una rapida ascesa. Se «alcuni prevedono grandi impianti centralizzati che producono cibo ed energia su vasta scala […] altri vedono invece produzioni più piccole connesse in rete» (Henrikson, 2013, p. 11). Si ipotizzano, ad esempio, dispositivi a uso comunitario che possano fungere anche da luogo di aggregazione. Questi potrebbero essere inseriti all’interno di quartieri, scuole o centri commerciali e mostrare in tempo reale le quantità di biomassa prodotta e di CO₂ sottratta. Il servizio di distribuzione potrebbe avvenire per opera di volontari, che otterrebbero dei crediti spendibili all’interno di una rete di attività commerciali partner.
Soluzioni simili potrebbero essere alcune fra le vie perseguibili per immaginare un futuro sostenibile per le città e di conseguenza pianificare prospetti resilienti caratterizzati da nuovi equilibri. Il maggior ostacolo, per il momento, sembra essere di tipo culturale poiché le microalghe non sono un alimento dal gusto e dall’aspetto familiari. La coltivazione locale in città potrebbe, però, cambiare la percezione e quindi fornire una motivazione in più per l’adozione, immaginando anche nuovi usi della biomassa fresca.
Cities, which will see a strong demographic and dimensional development in the next decades (United Nations, 2018), have a considerable negative impact on natural ecosystems. Therefore, operating on their sustainability is imperative, entailing difficulties but also unique opportunities (Rees and Wackernagel, 1996). Among the many challenges, resilient cities should be able to cope with air pollution, to plan water and food supply systems in the event of scarcity of agricultural areas, and to find new uses for abandoned or degraded zones, while supporting disadvantaged groups of the population. This is not only to mitigate damages but, primarily, to adapt new project proposals to a radical and irreversible change.
Nowadays, design means conceiving products, systems, services, and experiences that lead to a better quality of life\(^1\). Design should no longer have an anthropocentric vision, and «its methods should be aimed at [...] reintegrating our relationship with the environment and with all the species» (Antonelli, 2019, p. 38). For the first time, a radical approach to design is emerging: it draws on the biological sciences and incorporates living matter into products, structures, and processes (Myers, 2018). Several designers have already showcased progressive solutions – even to urban-scale problems – involving the use of animals, plants, algae, mosses, fungi, bacteria, and other organic materials\(^2\). This paper analyzes the characteristics of projects that included the use of microalgae, providing design researchers and professionals with guidelines for introducing them in urban experiments. Moreover, possible disciplinary influences are suggested and potential stakeholders identified. Although our culture will need time to appreciate the potential of such unusual proposals, these could be the key to the adoption of long-term visions that foster sustainable, inclusive, and fair urban development.
**The potential of microalgae** | Microalgae are photosynthetic organisms of different natures, characterized by the absence of roots, stems, or leaves, and typically found in fresh or salt waters. In phycology, the term ‘microalga’ refers to the microscopic algae sensu stricto and to the cyanobacteria (Tomaselli, 2004). Some of the most widely cultivated microalgae are Chlorella and Spirulina (Fig. 1). The latter, a blue-green cyanobacterium, has become a popular food supplement due to its high protein content. The properties of algae have been known since ancient times: in Mexico, the Aztecs used them as food. Fresh Spirulina was harvested from Lake Texcoco, dried in the sunshine, and sold in markets in the form of small cakes (Sánchez et alii, 2003). A similar harvesting method was adopted by the indigenous Kanembu tribe, who ate a substance called ‘dilhé’, obtained by sun-drying the algal biomass on the shores of Lake Chad (Ciferri, 1983; Fig. 2). This traditional practice is still common in the region, and its trading represents an important contribution to the local economy (Abdulqader et alii, 2000).
In developing countries, the cultivation of Spirulina also provides an effective remedy against chronic malnutrition and permits the creation of numerous jobs. There is also a vast network of producers in France who grow Spirulina with artisanal practices\(^3\). Since the 1950s, significant doubts have emerged about the ability of conventional agriculture to feed the exponentially growing world population (Belasco, 1997), and many experts have been attracted by the numerous possibilities of using microalgae. Major research institutes have funded ambitious studies, and cultivation under controlled conditions has started, aimed at investigating their potential in various fields (Garrido-Cardenas et alii, 2018; Fig. 3).
Recent research has shown that microalgae can be used as human food and animal feed, to extract added value components, but also for phyto-purification, the production of biofuels, and organic fertilizers (Khan et alii, 2018). Besides, because of their photosynthetic efficiency, microalgae represent a promising tool for mitigating carbon dioxide in the atmosphere (Singh and Ahluwalia, 2013). Therefore, considering the different fields of application and the relative ease of cultivation, microalgae could play an important role in pioneering solutions in different contexts.\(^4\)
**Experimentations, projects, installations** | In the sphere of design and architecture, experimentations with microalgae are still rather limited. This section reviews some of the most significant projects of the last decade. These developments range from products to architectural installations and, in multiple cases, were the result of collaboration with biologists and engineers. As regards experimentation with pigments, the prototype of Algaearium Bio-printer is an innovative device that allows digital printing utilizing an algal bio-ink (Sawa, 2016). Similarly, the research group of Living Ink Technologies is working on the marketing of a time-lapse ink (Fig. 4), with which dynamic illustrations can be created\(^5\). The Berlin-based design studio
Blonde & Bieber has instead opted to work with textile printing: abstract shapes are imprinted on fabric to create unique patterns with mottled colours\(^6\).
Different systems have also been designed for the domestic production of Spirulina. Worthy of note is Farma (Fig. 5), the work of William Patrick from the MIT Media Lab: a table photobioreactor capable of producing and filtering Spirulina, creating a powder to be inserted in capsules. The instructions for building the device have been made available online so that users can replicate it independently\(^7\). Lastly, Living Things is a 2015 installation by Jacob Douenias and Ethan Frier containing futuristic furnishings that celebrate a symbiotic relationship between human beings and microorganisms. In this case, Spirulina is cultivated in glass bioreactors incorporated within the furniture of the kitchen, the dining room, and the living room (Fig. 6).
In urban contexts, microalgae can be used for the cladding of structures and for green facades capable of purifying the wastewater of the buildings on which they are installed (Marino and Giordano, 2015). «The unique benefits of the bio-facades through the combination of the technical and biological cycles within buildings inaugurate an innovative approach to sustainability by integrating environmental, energetic, and iconic values» (Elrayies, 2018, p. 1175). One example is the BIQ House in Hamburg (Fig. 7), the first building in the world to use microalgae to produce the biomass and thermal energy necessary for its needs. Microalgal production can also be integrated with
**Fig. 11** | People gathering and chatting under the Algae Dome in Copenhagen, 2017 (credit: N. A. Vindelev, 2017).

**Fig. 12** | Spirulina is commonly added to the dough for several recipes. The Dogless Hotdog, developed by SPACE10’s chef-in-residence S. Perez, replaces meat with mushrooms and has a high protein content (credit: K. Kristoffersen, 2017).

**Fig. 13** | The utopian Eco-Pod concept by Höweler + Yoon and Squared Design Lab. The modules are located in brownfield sites and continuously reconfigured to ensure optimal algae growth conditions, to be used for the production of biofuels (credit: Squared Design Lab, 2009).
other metropolitan infrastructures, as in the case of Culture Urbaine in Geneva (Fig. 8). The project, which included the installation of a closed transparent pipe system on a viaduct, was carried out near a busy road to exploit the sunlight and CO₂ abundantly present onsite. Culture Urbaine is of particular interest as it succeeded in combining food production in an urban environment with the reinterpretation of existing infrastructure and the maintenance of green spaces.
Another relevant project is Floating Fields in Shenzhen (Fig. 9), the transformation of an abandoned flour factory into «a productive and leisure pond-scape, demonstrating how architectural design can integrate concepts of aquaponics, floating plots, algae cultivation, self-cleansing water cycle and sustainable food production» (Chung, 2016, p. 36). This area is conceived as a research laboratory on regenerative design, as well as a recreational location for the community. Thus, the architectural and infrastructural integration of microalgae cultivations opens up new dimensions in the field of sustainability for designers and architects (Proksch, 2013).
Several other structures and pavilions have been designed in recent years, among them Algaeavator by Tyler Stevermer and Jie Zhang, Urban Algae Folly (Fig. 10), BIOtechHUT and Photo.Synth.Etica by ecoLogicStudio, and Algae Dome by SPACE10 (Fig. 11). Characterized by a hemispherical structure that houses a flexible tube in which Spirulina grows, Algae Dome was presented in Copenhagen during the 2017 CHART Art Fair. The installation aimed to stimulate reflection on the potential of microalgae for preventing malnutrition and mitigating climate change, in an attempt to create a productive space available to citizens. With the cultivated biomass, visionary recipes were created, including Spirulina chips and the Dogless Hotdog (Fig. 12)\(^8\). Regarding photobioreactors for community use, the work of Cesare Griffa is undoubtedly admirable: Griffa built WaterLily 3.17 to engage the community, and an economic scenario was also hypothesised, including a corollary of activities to ensure the sustainability of the project (Griffa and Vissio, 2018)\(^9\).
All these designs are truly innovative and have gained considerable media coverage. Nevertheless, their sustainability, in both environmental and economic terms, has yet to be demonstrated. Following this success, many other designers and architects have presented futuristic concepts which, however, are often difficult – if not impossible – to implement with present-day technologies (Fig. 13). It is therefore essential to have a good knowledge of microalgae in order to develop projects that take full advantage of their countless properties. In addition, systemic reasoning helps to grasp potential connections with other domains.
**Analysis methodology** | The literature does not provide relevant comparative studies on projects involving the production of microalgae in urban scenarios. The analysis of the case studies illustrated below highlights the potentials and critical aspects of the different developments, the design trends, and the relative periods of operation. To obtain a more comprehensive view, this study considered exemplary cases of diverse types, including installations, architectural and infrastructural integrations, and social projects; 18 cases from 2011 to the present are examined.
The production of microalgae in cities has multiple points in common with the better-known urban and peri-urban farming practices. Because of these similarities, the case studies were analyzed adopting a methodology similar to the one used by MADRE (2018) for the assessment of selected urban farming activities. This involved the determination of several parameters, later compared through a composite radar chart. The analysis identified the contribution of each project to six different challenges. The parameters have a qualitative connotation and are displayed on a scale of 1–3, in which 1 expresses a minimum contribution and 3 a significant contribution. The description of the challenges and the evaluation details follow.
1) Job creation: new activities related to maintenance, production, harvesting, processing, marketing, promotion, and distribution of products (primary and processed). This also includes the training of specialized personnel and is useful for fighting poverty in the most degraded areas.
2) Social inclusion: initiatives that directly involve local communities without distinction of sex, age or ethnicity and are aimed at empowering people from disadvantaged neighbourhoods.
3) Education/divulgation support: transmission to a wider public of principles linked to environmental sustainability, healthy diet, and food security. This encompasses the education of children, adults and the elderly through traditional or alternative pedagogical methods. The divulgation takes into account the media coverage of the project or the number of visitors in the case of exhibitions and fairs.
4) Environmental impact: to evaluate the contribution to this challenge, the quantity of biomass produced, the sustainability of the techniques applied for the production and collection, and the measures adopted for the eventual distribution have been taken into account.
5) Value creation: retailing and brand building are ways to consolidate the quality of the products, permitting consumers to recognize the added value.
6) Generation of synergies on the territory: collaboration with third parties encourages the development and maintenance of the projects. This generates fruitful interactions with citizens and consumer associations, public authorities at local and/or regional level, small and medium private companies, professionals, schools, Universities, and Research centers.
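As a minimal sketch of how this composite assessment could be reproduced, the snippet below aggregates 1–3 scores across the six challenges and ranks them by mean contribution, the same quantity the overlapped radar diagrams visualize. The scores are illustrative placeholders, not the values from the study.

```python
# MADRE-style scoring aggregation: each case study receives a 1-3
# score per challenge; the composite view is the per-challenge mean.
# All score values below are hypothetical, for illustration only.

CHALLENGES = [
    "job creation",
    "social inclusion",
    "education/divulgation",
    "environmental impact",
    "value creation",
    "territorial synergies",
]

# hypothetical scores for three case studies (one dict per project)
scores = [
    {"job creation": 1, "social inclusion": 1, "education/divulgation": 3,
     "environmental impact": 2, "value creation": 1, "territorial synergies": 2},
    {"job creation": 2, "social inclusion": 1, "education/divulgation": 3,
     "environmental impact": 3, "value creation": 2, "territorial synergies": 3},
    {"job creation": 1, "social inclusion": 2, "education/divulgation": 2,
     "environmental impact": 2, "value creation": 1, "territorial synergies": 1},
]

def aggregate(scores):
    """Mean contribution per challenge, as in the composite radar chart."""
    return {c: sum(s[c] for s in scores) / len(scores) for c in CHALLENGES}

means = aggregate(scores)
# challenges ranked from strongest to weakest overall contribution
ranking = sorted(means, key=means.get, reverse=True)
print(ranking[0])  # education/divulgation scores highest in this toy data
```

Plotting the `means` dictionary on a polar axis would reproduce the radar chart itself; the ranking alone already mirrors the study's qualitative reading (educational value generally high).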
**Limits and possibilities of the projects** | The analysis of the case studies also examined their spatial and temporal location (Fig. 14). Interestingly, the average duration of most projects is a few months (in certain cases only some days), therefore relatively limited. Those that stand out for a longer operational time – even years – are mainly architectural integrations or activities with a solid business model, where the multidisciplinary contribution of several experts has been crucial. A longer duration is a factor that allows a series of fruitful collaborations to be planned and put into practice. The radar chart (Fig. 15), resulting from the overlapping of the diagrams obtained from the analysis of each case study, shows the global contribution of the projects to each challenge. The darker areas indicate greater contribution. Such new experimentations, unfettered by the established canons, need to be explained and narrated as much as the properties of microalgae: the educational value of the projects is therefore generally high. Environmental sustainability is another very important element. However, technical performances often appear unsuitable, resulting in aesthetically attractive but inefficient projects. In some cases, moreover, the biomass produced is not even utilized as it is not certified for food use.
Regardless of duration, there is a tendency
---
**Fig. 14** | The most representative projects involving the production of microalgae in urban areas since 2011 (data as at September 2019).
| Project name | Designer(s) | Place | Starting period | Duration |
|-----------------------|------------------------------|------------------------------|-------------------|----------------|
| 1. Algaearden | Ring, Parker & Fredericks | Grand-Métis, Canada | 2011, June | 15 months |
| 2. Skyline Spirulina | EnerGaia | Bangkok, Thailand | 2013, January | 6 years and 8 months |
| 3. BIQ House | ARUP | Hamburg, Germany | 2013, April | 6 years and 5 months |
| 4. Algaeavator | Stevermer & Zhang | Cambridge, Massachusetts | 2013, December | 3 months |
| 5. Urban Algae Canopy | ecoLogicStudio | Milan, Italy | 2014, April | 12 days |
| 6. Urban Algae Facade | Cesare Griffa | Milan, Italy | 2014, April | 12 days |
| 7. Culture Urbaine | The Cloud Collective | Geneva, Switzerland | 2014, June | 5 months |
| 8. The Third Paradise | Michelangelo Pistoletto | Copenhagen, Denmark | 2014, October | 2 months |
| 9. Urban Algae Folly | ecoLogicStudio | Milan, Italy | 2015, May | 6 months |
| 10. Facade System | HINT Engineering GmbH | Berlin, Germany | 2015, November | 3 years and 10 months |
| 11. Floating Fields | Thomas Chung | Shenzhen, China | 2016, March | 10 months |
| 12. BIOtechHUT | ecoLogicStudio | Astana, Kazakhstan | 2017, June | 3 months |
| 13. The Carbon Sink | Fermentalg & SUEZ | Paris, France | 2017, July | 2 years and 2 months |
| 14. Algae Dome | SPACE10 | Copenhagen, Denmark | 2017, September | 3 days |
| 15. Living Solar Modules | Solaga | Berlin, Germany | 2017, October | 1 year and 11 months |
| 16. WaterLily 3.17 | Cesare Griffa | Turin, Italy | 2018, February | 5 months |
| 17. BioIbarb 2.0 | BiomTech | Puebla, Mexico | 2018, June | 15 months |
| 18. Photo.Synth.Etica | ecoLogicStudio | Dublin, Ireland | 2018, November | 3 days |
to include projects within broader contexts, such as city events or multi-purpose areas, as if to emphasize the need to connect them to their surroundings. Apart from some exemplary cases, most do not consider the possibility of using microalgae as a vector of economic growth and social integration. These are undoubtedly two critical areas of practice and research that deserve to be explored. Finally, the communication and promotion of the values of these projects is a particularly compelling but poorly investigated area. The case studies analyzed have significant limitations but at the same time present new possibilities. To design for the benefit of the community, dealing with the problems of cities with resilient thinking, projects like these should operate with a larger range of action and in the long term, with the intent of favouring their scalability. This approach would promote new visions linked not only to the creation of a product but also to the economic and social infrastructures in their entirety (Peruccio et alii, 2018).
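The duration column of the table in Fig. 14 can be checked mechanically. The sketch below (plain Python, with the duration strings copied from the table) converts each entry to months and confirms that the median operating time is on the order of a few months, as noted in the analysis.

```python
import re
import statistics

# Durations copied from the table in Fig. 14 (18 case studies).
durations = [
    "15 months", "6 years and 8 months", "6 years and 5 months",
    "3 months", "12 days", "12 days", "5 months", "2 months",
    "6 months", "3 years and 10 months", "10 months", "3 months",
    "2 years and 2 months", "3 days", "1 year and 11 months",
    "5 months", "15 months", "3 days",
]

def to_months(text):
    """Convert 'N years and M months' / 'N months' / 'N days' to months."""
    years = months = days = 0
    m = re.search(r"(\d+)\s+years?", text)
    if m:
        years = int(m.group(1))
    m = re.search(r"(\d+)\s+months?", text)
    if m:
        months = int(m.group(1))
    m = re.search(r"(\d+)\s+days?", text)
    if m:
        days = int(m.group(1))
    return years * 12 + months + days / 30  # 30-day month approximation

months = [to_months(d) for d in durations]
print(round(statistics.median(months), 1))  # prints 5.5: a few months
```

The handful of multi-year outliers (BIQ House, Skyline Spirulina, the Berlin facade system) are exactly the architectural integrations and solid-business-model activities singled out in the text.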
**Design guidelines** | This section outlines guidelines for the design of integrated adaptive products, services, and systems. These are useful for implementing projects that use microalgae as a driving force for fostering economic profitability, environmental sustainability, and social inclusion, primarily but not exclusively in urban areas. These indications mainly concern the design of products or small-scale urban installations, although they can also be valid – with due consideration – for interventions on an architectural scale. Firstly, it is necessary to critically examine the unique context of operation, in order to identify the key factors for facing its problems with full knowledge of the facts and scientific rigour. The production of microalgae must not be an imposed option, but rather an adequate response to the circumstances. The study of microalgae is fundamental to understanding how they function, without the need to enter deeply into the domains of competence of other disciplines.
Based on project needs, it is necessary to identify the most suitable algal species for the particular case, bearing in mind that if they are to be used as food they must be cultivated in clean waters. Designers should look beyond the sole material component of the project, focusing on the definition of related services, experiences, and educational aspects. Given non-experts’ limited knowledge of the topic, it is advisable to provide general notions that allow a better understanding. Increased awareness would encourage adoption, which is still limited. Collaboration with other professionals is strongly recommended to fill disciplinary gaps. It is also important to involve citizens and local businesses such as restaurants, canteens, shops, schools, and gyms. Institutions and Universities could also participate, providing financial, scientific, and cultural contributions.
In addition to the environmental impact, the economic sustainability of the projects must be thoroughly checked. To do this, it is advisable to refer to models that have proved successful and, if necessary, to ask for the support of specialists to shorten payback periods. To facilitate replicability, the project can be made ‘open’, therefore modifiable, adaptable, and improvable by anyone. The physical scale of the outcome is not proportionally related to its impact: small products can radically improve the quality of life of entire communities, while larger installations may require substantial economic and management efforts.
To conclude, it is important to bear in mind that the final aim of the projects must not be the mere cultivation of microalgae for commercial use – as indeed already happens in huge production plants located in the countryside – but the use of these for the creation of value leading towards positive urban change for residents.
**Final considerations** | The architectural dimension and that of the product, as well as self-production and urban farming, operate on different scales and therefore require distinct approaches. Although the implementation of projects envisaging microalgal production is not straightforward, due to technical, operational, and management complexities, various projects are in the start-up phase, including in Italy, such as the one at Mirafiori (the former Fiat plant) by TNE – Torino Nuova Economia (Luise, 2019). Worthy examples, such as the aforementioned Algae Dome and Skyline Spirulina (Fig. 16), should be taken as references. Skyline Spirulina is the project of a Thai start-up that cultivates microalgae on the rooftop of a hotel in the center of Bangkok and distributes the product to both high-spending customers and people in need (Ortolani, 2016). These two projects, considerably different from each other, are effective because they involve several stakeholders (private companies, Institutions, Research centers, etc.). As far as business models are concerned, those of the French Spiruliniers and of the rural villages in developing countries could also be adapted to the urban environment: with low investments and rudimentary technologies, these models generate employment and produce healthy and sustainable food in abundance.
Urban microalgae farms on flat roofs, in brownfield sites, in common spaces, but also indoors, would make the air cleaner and create new food supply areas without competing with those dedicated to traditional agriculture. The biomass could also be used as fertilizer for vegetable gardens and parks. Like any other urban farming practice, the relative products, services, and systems would allow the population to be involved, «enhancing the common social and cultural identity for city residents» (Ackerman et alii, 2014, p. 190). In the near future, microalgae production could experience a rapid rise. While «some envision huge centralized algae farms producing food and energy on a vast scale [...] others see networks of smaller farms» (Henrikson, 2013, p. 11). One example would be devices for community use that can also serve as places for social aggregation. These could be located in neighbourhoods, schools, or shopping centers and show in real time the quantities of biomass produced and of CO₂ removed. Distribution could be handled by volunteers, who would earn credits to be spent within a network of partner businesses.

Solutions of this kind could be among the viable paths for imagining a sustainable future for cities and, consequently, for planning resilient scenarios characterized by new equilibria. For the moment, the biggest obstacle appears to be cultural, since microalgae are not a food with a familiar taste and appearance. Local cultivation in cities could, however, change this perception and thus provide a further motivation for adoption, also suggesting new uses for the fresh biomass.
**Acknowledgements**
The contribution is the result of a common reflection of the Authors. However, the introductory paragraph is to be attributed to P. P. Peruccio, while the paragraphs ‘The potential of microalgae’, ‘Experimentations, projects and installations’, ‘Analysis methodology’, ‘Limits and possibilities of the projects’, ‘Design guidelines’ and ‘Final considerations’ to M. Vrenna.
**Notes**
1) A definition of industrial design can be found on the World Design Organization’s website at: wdo.org/about/definition/ [Accessed 10 August 2019].
2) Examples are Oyster-tecture, which aims to block wave motion and purify water through oyster colonies, and Pigeon d’Or, a series of installations that allow feeding the pigeons with a special yogurt that gives cleansing properties to their faeces. Visit the websites: www.scapestudio.com/projects/oyster-tecture/ and www.cohenvanbalen.com/pigeon-dor/ [Accessed 13 August 2019].
3) Antenna Foundation has contributed to charitable activities in Africa, Asia and South America. The model has been adopted by the farmers belonging to the Fédération des Spiruliniers de France. Visit the websites: www.antenna.ch/en/ and www.spiruliniersdefrance.fr/ [Accessed 19 August 2019].
4) Microalgae need water, light, and nutrients to grow. Even though the cultivation in open ponds is productive, photobioreactors (aquarium-like controlled closed systems) guarantee higher yields by using less land and extending the growing season.
5) More information on the website: www.kickstarter.com/projects/livingink/living-ink-time-lapse-ink [Accessed 14 September 2019].
6) More information on the website: www.domusweb.it/design/2014/07/16/blond_bieber_algaemy.html [Accessed 15 September 2019].
7) Cfr. Patrick, W. (2015), *Farma – A Home Bioreactor for Pharmaceutical Drugs*. [Online] Available at: www.iamwillpatrick.com/FARMA [Accessed 18 September 2019].
8) SPACE10 works on projects related to sustainable living, and some of these focus on food production in urban environments. It is suggested to read: SPACE10 (2019), *Future food today*, Frame, Amsterdam.
9) The study noted that the system would be economically viable if the labour is considered as part of the family/community activities, or if it is shared with other activities (e.g., building management).
Effect of planning policies on land use dynamics and livelihood opportunities under global environmental change: Evidence from the Mekong Delta
Tristan Berchoux\textsuperscript{a,b,*}, Craig W. Hutton\textsuperscript{c,**}, Oliver Hensengerth\textsuperscript{d}, Hal E. Voepel\textsuperscript{e}, Van P.D. Tri\textsuperscript{e}, Pham T. Vu\textsuperscript{f}, Nghia N. Hung\textsuperscript{g}, Dan Parsons\textsuperscript{h}, Stephen E. Darby\textsuperscript{c}
\textsuperscript{a} TETIS, University of Montpellier, Maison de la Télédétection, 500 Rue Jean-François Breton, F-34090 Montpellier, France
\textsuperscript{b} Mediterranean Agronomic Institute of Montpellier - CIRIEM-IAMM, 3191 Route de Mende, F-34090 Montpellier, France
\textsuperscript{c} School of Geography and Environmental Sciences, University of Southampton, Highfield Road, Southampton SO17 1BJ, UK
\textsuperscript{d} Department of Geography and Environmental Sciences, Northumbria University, Newcastle NE1 8ST, UK
\textsuperscript{e} Research Institute for Climate Change (DRAGON-Mekong Institute), Can Tho University, 3/2 Street, Ninh Kieu, Can Tho, Viet Nam
\textsuperscript{f} College of Environment and Natural Resources, Can Tho University, 3/2 Street, Ninh Kieu, Can Tho, Viet Nam
\textsuperscript{g} Southern Institute of Water Resource Research (SIWRR), 658 Vo Van Kiet Str., Ward 1, Dist. 5, Ho Chi Minh City, Viet Nam
\textsuperscript{h} Loughborough University, Loughborough LE11 3TU, UK
**Keywords:**
Rural livelihoods
Rural employment
Land-change model
Land-use planning policies
Land systems
Environmental Change
Future scenarios
Mekong Delta
**Abstract**
The Mekong Delta faces significant challenges in supplying Vietnam and its export market countries with agricultural commodities, while ensuring livelihoods and providing living space to its growing population in the context of climate change and the country’s agrarian transition. Anthropogenic factors, such as the construction of dykes to control river flooding, river sand mining, the further development of triple-cropping rice production, and infrastructure development, together with climate change impacts on sediment and water availability, are all combining to threaten agricultural production. One of the key challenges in sustainable development is the need to identify plausible future states of agricultural-based socio-ecological systems which draw upon differing strategies of land management, and to characterise the impacts of these systems on both the landscape and employment. It was hypothesised from the literature and rapid rural appraisals that each land system can only provide a certain number of jobs, which was further demonstrated using binomial regressions. We show that the odds of being employed are lower for intensive agricultural systems (OR = 0.78 for triple rice; OR = 0.91 for intensive aquaculture) than for diversified systems (OR = 1.16 for rice-aquaculture; OR = 1.63 for mixed fruit trees). Drawing from workshops with local and national stakeholders, we then used Earth observation and national census data in a spatially explicit land use systems dynamics framework to simulate two alternative Mekong Delta futures based upon the climate pathway RCP 4.5 in combination with two existing policies: (i) Decision No. 124 (Specialisation), which promotes triple crop rice and aquaculture intensification, and (ii) Decision No. 639 (Diversification), which promotes the development of sustainable rice-aquaculture and crop diversification.
Based on the quantitative objectives of each policy, we estimated the likely changes in services provided by land use systems if either policy were to dominate. The estimated impacts of each future scenario on the provision of employment ultimately indicate that policies with a diversification development paradigm preserve more employment (a 0.9% decline) than policies with a specialisation paradigm (a 46% decline), and that current policies have potentially conflicting consequences. Decisions driving towards intensive farming risk triggering rural unemployment and out-migration, potentially exacerbating urban poverty in major cities such as Can Tho and Ho Chi Minh City. On the other hand, decisions aiming at increasing diversified agricultural systems can help secure more job opportunities. Our results indicate that spatial planning policies should rely on a broad-based assessment of land system services, including employment and environmental impacts, to ensure a just transition towards resilient and environmentally sustainable rural territories.
**Abbreviations:** RCP, Representative Concentration Pathway; OR, Odds Ratio; NDVI, Normalised Difference Vegetation Index; GLM, Generalised Linear Model.
* Corresponding author at: TETIS, University of Montpellier, Maison de la Télédétection, 500 Rue Jean-François Breton, F-34090 Montpellier, France.
** Corresponding author.
**E-mail addresses:** email@example.com, firstname.lastname@example.org (T. Berchoux), email@example.com (C.W. Hutton).
https://doi.org/10.1016/j.landusepol.2023.106752
Received 1 October 2022; Received in revised form 14 March 2023; Accepted 17 May 2023
Available online 29 May 2023
0264-8377/© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
1. Introduction
Major economic and demographic shifts are driving human societies towards increased food demand, with a projected rise of 59–98% by 2050 under the Shared Socio-economic Pathway “Middle of the Road” (SSP2) (Valin et al., 2014; Tilman et al., 2011). This increased demand leads to greater competition between land uses, as land provides services such as food, housing, transportation, energy, employment, and water (Deichmann et al., 2019). There are significant concerns that the demand for land use services cannot be met without causing irreparable damage to the environment (Giller et al., 2021). Current approaches to land management in developing countries are often targeted to maximise economic returns via extractive models of land use, which exacerbate environmental degradation.
Managing agricultural social-ecological systems requires greater understanding of existing linkages between land, governance and people (Kremen and Merenlender, 2018; Hutton et al., 2021). Land use management, for example, through intensification or extensification of farming systems, can lead to substantial changes in terms of employment. High intensity or extractive models of land use may contribute to a substantial reduction in both quantity and quality of livelihood opportunities for local communities, which often underpins increased migration and unskilled labour (Berchoux et al., 2019; Garnett et al., 2013). As a consequence, policymakers need to balance trade-offs between high intensity production and sustainability (Hutton et al., 2021), hence shaping land management strategies through regulatory, voluntary, financial tools and spatial planning (Kremen and Merenlender, 2018).
The Vietnamese Mekong Delta, covering an extensive area of approximately 41,000 km$^2$, is home to a significant portion of the country’s population. As of the 2019 census, Vietnam’s total population was estimated at 96.2 million, with 17.3 million residing in the Mekong Delta, of which 12.9 million live in rural areas, underscoring the predominantly agrarian nature of the region’s livelihoods (General Statistics Office, 2020). However, despite its economic significance, the Mekong Delta experiences the highest out-migration and the most negative net migration rate in Vietnam. In terms of employment, the Statistical Yearbook reports that the Mekong Delta’s unemployment rate was estimated at 4.1% in 2021, higher than the national average of 3.2% and a significant increase from 2.8% in 2020.
This highly complex and dynamic delta has undergone rapid changes over the past decades as Vietnam has developed economically (Smajgl et al., 2015). This development has been closely tied to the expansion of agriculture, primarily rice production, which has been the driving force behind many technological, economic, and environmental changes. However, this reliance on rice monoculture has also caused social unrest and created environmental problems, challenging the local social-ecological system’s ability to adapt to changing circumstances (Chapman and Darby, 2016). Moreover, the Vietnamese Mekong Delta is projected to undergo significant environmental changes, primarily with respect to salinity intrusion (Eslami et al., 2021a), reduced sediment fluxes (Bussi et al., 2021; Vasilopoulos et al., 2021), more frequent climate extremes (floods, droughts) and decreased precipitation, leading to declining groundwater recharge (Shrestha et al., 2016). Simultaneously, demographic changes (Szabo et al., 2016) and economic orientations (industrial and infrastructure development, export-oriented commercial agriculture) pose additional challenges to land use systems by increasing demand for some land services.
To address these challenges, the Socialist Republic of Vietnam has passed Resolution 120 on Sustainable and Climate-Resilient Development in the Mekong Delta and the Resolution’s 2019 Action Programme (Socialist Republic of Vietnam, 2017, 2019). Both resolutions are designed to move from a focus on food security towards high quality food production using a combination of high-tech, large-scale, and organic agriculture by developing: industry associated with agricultural production, water resource protection through increased production efficiency, markets and value chains, delta infrastructure for better regional connectivity as well as with neighbouring countries, and improved vocational training with the aim of preventing out-migration. Regarding aquaculture, the aim is to transform fisheries into a highly competitive, large-scale commodity production sector and to construct large fishing centres associated with major fishing grounds, concentrated material production zones, industrial parks and consumption markets. These measures, which oscillate between intensification and environmental protection, are designed to implement the Mekong Delta Plan - a strategic spatial plan the general principles of which were first set out in 2013 (Hutton et al., 2021) and formalised in 2022 with the Mekong Delta Master Plan. Apart from the aforementioned general orientations, plan implementation is supported by a number of sectoral policies. Of these, decisions 124/939 (Socialist Republic of Vietnam, 2012a, 2012b) and 639/816 (Socialist Republic of Vietnam, 2014, 2018) are central policies governing land use. However, according to local stakeholders, the objectives of decision 124 conflict with decision 639 and with the 2017 Vietnamese Law on Planning.
This law clearly stipulates that planning and development should “minimise the negative impacts due to the economy, society and environment on community livelihoods” while promoting “the development of the disadvantaged and slowly-developing areas and sustainable livelihoods for people therein” (Socialist Republic of Vietnam, 2017a; Hutton et al., 2021).
While the impacts of land use system changes on the Vietnamese Mekong Delta’s environment and society are evident, there is a lack of understanding of how these processes impact livelihoods on the ground. A better understanding of how policies influence land use systems and livelihoods is necessary. To address these knowledge gaps, it is important to characterise the associations between land use systems and employment, and to model the impact of spatial planning policies on future land use systems. In this paper, we combined Earth observation and national census data in a spatially explicit and dynamic land use system change model to explore the effects of the two main Vietnamese land use planning policies on employment and land use systems.
2. General approach
We followed a four-step approach (Fig. 1): (1) characterise associations between employment and land use systems; (2) identify current trends of land use system change; (3) model projections of land use systems under the combined effects of current policies and environmental change; and (4) predict employment for each policy scenario based on projected land use systems. Our working hypothesis is that local-scale variations in employment are at least partly explained by changes in land use systems at a higher level, such as the municipality, and thus by land use planning policies. Overall, the findings of this paper show that current policies have conflicting objectives, with each aiming to produce differing future land use systems in the Mekong Delta. In particular, scenarios that favour food production will lead to an increase in the area of land use systems that support fewer jobs, while scenarios that promote agricultural diversification will ensure greater employment opportunities.
3. Methods
3.1. Conceptualising the links between land use systems and employment
Fieldwork on the Mekong River Delta was conducted between April and May 2018 to understand components of rural livelihoods and land use systems, from a household perspective, across a range of diverse socio-ecological contexts. This fieldwork enabled us to characterise agricultural land use systems and to identify the main associations between land use, household livelihood strategies, and employment in each land use system. We used Rapid Rural Appraisal as the principal field method to collect data and to highlight the perceptions and opinions of representative stakeholders and local residents (see
This method enables local people to share their knowledge, and to discuss and analyse their situation using their own terms (Mukherjee, 2005).
In total, ten villages were selected using a stratified sampling design of the main types of land use systems present in the community. Two villages for each main land use system (triple rice cropping, double rice cropping, aquaculture, orchards) were sampled to provide input from a variety of cases, based on the social-ecological characteristics of the community and on the main livelihood strategies conducted by households (Fig. 2). Rapid Rural Appraisals were conducted with 10–15 participants per community covering a range of livelihood strategies, socio-economic, and gender backgrounds. Different appraisal activities with communities were used to cross-check acquired data and to cover all aspects of land use systems and livelihoods. First, a participatory workshop was held as a focus group, where general information about the village, the land use systems and their evolution was discussed. Differences within the community regarding livelihood assets and employment strategies were investigated. Once different livelihood categories were identified by participants, they quantified the proportion of households falling into each category.
3.2. Characterising land use systems
Land use systems reflect a multi-functionality of landscapes, by taking into account land use, land management, and water management through irrigation infrastructures and practices (Malek et al., 2018). At the time of conducting the study, there was no delta-wide land use system map that included non-rice agricultural systems available for both 2010 and 2020. Some studies had created maps of a portion of the Delta (Truong et al., 2022), some for the whole delta but only for years 2010 and 2014 (Nguyen et al., 2015), while others only categorised rice agricultural systems (Vu et al., 2022). As a consequence, we generated our own land use systems maps by using supervised classification to ensure comparability between 2010 and 2020.
We used MODIS (Moderate Resolution Imaging Spectroradiometer) data to derive the NDVI (Normalised Difference Vegetation Index) from a total of 46 MOD09Q1 cloud-free images at 8-day intervals for each year, at a resolution of 250 m. We used maximum likelihood classification on the NDVI time series (Fig. 3) to derive 9 land use system classes representing the main agricultural systems identified during the rapid rural appraisals (triple rice, double rice, rice-aquaculture, cash crops, fruit trees, aquaculture, forest-aquaculture, forest, urban), as specified in Tran et al. (2015). We used the Natural Resources and Environment land planning maps collected from the local governments during fieldwork to train the model (stratified training sample, $R^2 = 0.85$). Finally, we characterised the links between agricultural land use systems, livelihoods and main types of employment using the qualitative data collected during the rapid rural appraisals (Supplementary Material).
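The classification step can be illustrated with a minimal sketch: a maximum likelihood classifier that models each class's 46-date NDVI profile as a Gaussian with diagonal covariance and assigns each pixel to the class with the highest log-likelihood. This is a simplification of the paper's actual pipeline; the phenological profiles, noise level, and class set below are invented for illustration.

```python
import numpy as np

def fit_ml_classifier(X_train, y_train):
    """Estimate per-class mean and (diagonal) variance of NDVI time series.

    X_train: (n_pixels, n_dates) NDVI values; y_train: integer class labels.
    """
    params = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
    return params

def classify(X, params):
    """Assign each pixel to the class maximising the Gaussian log-likelihood."""
    classes = sorted(params)
    log_liks = []
    for c in classes:
        mu, var = params[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        log_liks.append(ll)
    return np.array(classes)[np.argmax(np.stack(log_liks), axis=0)]

# Toy example: two invented phenological profiles over 46 eight-day composites,
# mimicking two vs three green-up peaks per year.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 46)
double_rice = 0.5 + 0.3 * np.sin(2 * t)
triple_rice = 0.5 + 0.3 * np.sin(3 * t)
X = np.vstack([double_rice + rng.normal(0, 0.05, (200, 46)),
               triple_rice + rng.normal(0, 0.05, (200, 46))])
y = np.repeat([0, 1], 200)

params = fit_ml_classifier(X, y)
pred = classify(X, params)
print((pred == y).mean())  # → 1.0 for this well-separated toy
```

In practice the training samples would come from the land planning maps mentioned above rather than simulated profiles, and the covariance structure may be richer than the diagonal one assumed here.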
3.3. Associations between land use systems and employment
Logistic regression was used to investigate the effects of land use systems on the probability of being employed in a specific sector. Eight response variables (extracted from the 2010 Census on Population and Housing in Vietnam) were considered, each derived as the ratio of the number of people salaried or self-employed in one of the following sectors to the total active population: (i) agriculture; (ii) forestry; (iii) aquaculture; (iv) industry; (v) construction; (vi) commercial; (vii) transportation; and (viii) inactive. The proportions of the response variables of interest varied continuously over a bounded range of [0,1]. Thus, an ordinary least squares regression model would be a model misspecification, as it requires a response range over all real numbers. In this regard, a generalised linear model (GLM) with a logit link is a correct model specification, as the logit function transforms the bounded proportion range from [0,1] to all real numbers as required. As contextual factors, such as socio-political and market contexts, strongly impact employment opportunities, outcomes and the ability of households to implement coping strategies (Berchoux et al., 2020), we used the proportion of ethnic minorities and travel duration to market as confounders to control for such factors.
$$\text{logit}(\pi_i) = \log \left( \frac{\pi_i}{1 - \pi_i} \right) = \beta_0 + \beta_1 \, \text{EthnicMinorities}_i + \beta_2 \, \text{TDMarket}_i + \sum_j \beta_j \, \frac{\text{AreaLS}_j}{\text{TotalArea}_i}$$
where $\pi_i$ refers to the probability of working in one of the sectors listed above in community $i$, and $\frac{\text{AreaLS}_j}{\text{TotalArea}_i}$ refers to the share of land use system $j$ in community $i$.
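A binomial GLM with a logit link of this form can be fitted by Newton-Raphson (iteratively reweighted least squares). The sketch below does this with numpy on synthetic community-level data — the land-share covariates, coefficients, and population sizes are invented — and shows how fitted coefficients translate into the odds ratios reported in the Results.

```python
import numpy as np

def fit_binomial_logit(X, successes, trials, n_iter=25):
    """Newton-Raphson fit of a binomial GLM with a logit link.

    X: (n, p) design matrix (first column = intercept);
    successes / trials: employed count and active population per community.
    Returns the coefficient vector beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        W = trials * pi * (1.0 - pi)             # IRLS weights
        score = X.T @ (successes - trials * pi)  # gradient of the log-likelihood
        hess = X.T @ (X * W[:, None])            # observed information
        beta = beta + np.linalg.solve(hess, score)
    return beta

# Invented data: employment odds rise with orchard share, fall with triple-rice share.
rng = np.random.default_rng(1)
n = 500
orchard = rng.uniform(0, 1, n)
triple = rng.uniform(0, 1 - orchard)  # the two shares cannot jointly exceed 1
X = np.column_stack([np.ones(n), orchard, triple])
true_beta = np.array([0.2, 0.5, -0.4])
pi = 1 / (1 + np.exp(-(X @ true_beta)))
trials = rng.integers(200, 1000, n).astype(float)
successes = rng.binomial(trials.astype(int), pi).astype(float)

beta = fit_binomial_logit(X, successes, trials)
odds_ratios = np.exp(beta[1:])
print(odds_ratios)  # roughly [1.65, 0.67], i.e. e^0.5 and e^-0.4
```

An OR above 1 (orchards here) raises the odds of employment as the share increases; an OR below 1 (triple rice here) lowers them, mirroring the interpretation given in Section 4.1.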
3.4. Projections of land use system change
Plausible land use systems for 2030 were developed using CLUMondo, a model that simulates land use system change as a function of exogenously derived demands for commodities and services while accounting for local suitability and competition between land use systems (Van Asselen and Verburg, 2012). While other approaches to predict future land use like Cellular Automata (Rahaman et al., 2022) use past dynamics to derive future dynamics, CLUMondo can simulate future projections based on different scenario parameters that can be informed by current policies (Supplementary Material S3). The 2020 land use system map of the Vietnamese Mekong Delta was used as a baseline. For each year, a suitability map was created using logistic regression between the distribution of each land use system and a set of explanatory factors (Diep et al., 2022) (Supplementary Material S3). A total of 21 biophysical and socioeconomic explanatory variables were used. Explanatory variables included static factors that were assumed to not change over the ten years modelling span, such as soil properties, water logging, population density (we accounted for population change through demand, which drove the model), and access to markets. We also added dynamic factors that updated yearly to represent climate change under RCP4.5, including climate variables from CMIP6 models (Fick and Hijmans, 2017) and salinity projections (Eslami et al., 2021b).
Land use systems in 2030 were simulated under two different scenarios (Table 1), which were based on sets of demands for commodities and services from existing policy decisions (built-up, agricultural products, aquatic products). Policy-based scenarios were designed according to the main land use orientations of the development plans for the Mekong Delta: (i) a scenario “Specialisation”, based on the pair of decisions No. 124 and No. 939 (Socialist Republic of Vietnam, 2012a, 2012b), which promote triple crop rice and aquaculture intensification; and (ii) a scenario “Diversification”, based on the pair of decisions No. 639 and No. 816 (Socialist Republic of Vietnam, 2014, 2018), which state that there should be a development of sustainable rice aquaculture and crop diversification (Supplementary Material S2).
For all scenarios, stakeholder input was key in setting model parameters during workshops, which were based on an analysis of policy decisions affecting the development orientations of the Mekong Delta (Hutton et al., 2021). Each scenario was modelled using different provisions of services (ability of each land use system to provide a service), demands for services (policy objectives) and spatial constraints. The latter represents transitions from one land use system to another, which were strictly prohibited or only allowed by predefined changes. For example, conversions from single-season rice to intensive aquaculture and later to triple rice were facilitated in the “Specialisation” scenario based on decision No. 124 to reflect the policy orientations found in the decision: “convert some one-rice crop in water-logged areas into fish or shrimp areas” and “to increase the area of triple rice and to do intensive farming”. On the contrary, transitions to single rice were encouraged in the Decision No. 639 scenario, based on the policy statement “to increase the area of rice-aquaculture in both fresh and brackish water, rotationally lush flood into fields”, while aquaculture and orchards were favoured compared to intensive freshwater rice area.
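To make the allocation logic concrete, the toy loop below is written in the spirit of CLUMondo but is not its actual algorithm: each cell adopts the allowed land use system with the highest suitability plus a per-system competitiveness term, and competitiveness is nudged until the aggregate supply of each service approaches the scenario demands. All systems, service coefficients, transition constraints, and demand figures are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
systems = ["double_rice", "triple_rice", "aquaculture"]
# Services supplied per cell of each system (columns: rice, fish) — invented values.
supply_per_cell = np.array([[1.0, 0.0],   # double rice: rice only
                            [1.6, 0.0],   # triple rice: more rice
                            [0.0, 1.0]])  # aquaculture: fish only
n_cells = 2000
suitability = rng.uniform(0, 1, (n_cells, len(systems)))  # stands in for the regression maps
current = rng.integers(0, len(systems), n_cells)          # baseline land use map
# Transition constraints: e.g. forbid aquaculture reverting to triple rice.
allowed = np.ones((len(systems), len(systems)), dtype=bool)
allowed[2, 1] = False

demand = np.array([1500.0, 900.0])  # scenario targets for rice and fish
comp = np.zeros(len(systems))       # per-system competitiveness adjustment

for _ in range(200):
    score = suitability + comp
    score[~allowed[current]] = -np.inf      # disallowed transitions never win
    alloc = np.argmax(score, axis=1)        # each cell takes its best allowed system
    supply = supply_per_cell[alloc].sum(axis=0)
    gap = demand - supply
    if np.all(np.abs(gap) < 0.01 * demand): # stop once supply is within 1% of demand
        break
    # Raise competitiveness of systems that supply the under-met services.
    comp += 0.01 * (supply_per_cell @ (gap / demand))

print(dict(zip(systems, np.bincount(alloc, minlength=3))), supply)
```

In the study's setup, the suitability surface would come from the yearly logistic regressions with dynamic climate and salinity factors described above, and the demands and transition matrix would encode the Specialisation or Diversification policy objectives rather than these invented numbers.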
4. Results
4.1. Employment opportunities in complex land use systems
Odds ratios (OR) were used to quantify the relationships between the response variable (employment or labour type) and the explanatory variables (land use systems), controlling for travel time to the closest district capital and the effects of ethnic minorities. An odds ratio above one indicates that, as the explanatory variable increases, the odds of being employed in a specific sector or labour type also increase.
Concerning the effects of land use systems on main employment sectors (Table 2), agricultural land use systems (triple rice, double rice, cash crops, fruits) had a significant ($p \leq 0.001$) positive effect on the odds of engaging in agricultural employment, while other land use systems (forest, aquaculture, mixed rice-aquaculture, urban) had a significant negative effect. Fruits/orchards had the greatest positive effect on providing agricultural employment (OR = 2.01, 95% CI = [1.92, 2.11]), cash crops had the lowest, and double rice had a greater effect (OR = 1.85, 95% CI = [1.77, 1.94]) compared to triple rice. Moreover, ethnic minority inclusion increased the odds of engaging in an agricultural job (OR = 1.57, 95% CI = [1.55, 1.59]).
Aquaculture, rice-aquaculture, and forest land use systems had a statistically significant positive effect on the odds of engaging in fishery-related employment, while agricultural and urban land use systems had a negative effect. Aquaculture had the greatest positive effect on providing fishery-related employment (OR = 7.12, 95% CI = [6.71, 7.57]), followed by mixed land use systems: rice-aquaculture and forest-aquaculture. Moreover, ethnic minority inclusion reduced the odds of engaging in a fishery-related job (OR = 0.48, 95% CI = [0.47, 0.49]).
For non-farming employment, land use systems with a greater share of urban areas increased the odds of engaging in commercial activities, construction activities, industrial activities and transport activities.
Four fitted models were used to analyse the effects of land use systems on labour types (Table 3). Agricultural and urban land use systems had a statistically significant positive effect on the odds of being salaried, urban land use systems having the greatest effect (OR = 5.75, 95% CI = [5.42, 6.10]), followed by cash crops, double rice, triple rice and fruits. On the contrary, the share of forest (OR = 0.68, 95% CI = [0.63, 0.73]), rice-aquaculture and aquaculture decreased the odds of being salaried compared to being self-employed. Finally, ethnic minority inclusion increased the odds of being salaried compared to self-employment (OR = 1.32, 95% CI = [1.30, 1.34]). In terms of
employment, a greater share of forest (OR = 1.97, 95% CI = [1.79, 2.17]), fruits, urban, cash crops or rice-aquaculture increased the odds of being active compared to inactive. On the contrary, a greater share of triple rice (OR = 1.28, 95% CI = [1.20, 1.38]), double rice or aquaculture increased the odds of being inactive compared to being active. Similarly, ethnic minority inclusion also increased the odds of being inactive compared to being active (OR = 1.09, 95% CI = [1.07, 1.12]).
As shown in the model summary (Fig. 4), triple rice, double rice and intensive aquaculture systems increase the likelihood of being unemployed. In these land use systems, the main source of employment is agriculture or aquaculture. As most of the work is mechanised, only petty tasks provide employment to landless households. Although there is a peak in labour demand during harvest, farmers tend to hire large organised groups of labourers from other provinces, which drives landless locals into migration to the industrial zones, especially in triple rice cropping systems. In the double rice cropping system, flooded fields provide fishing for home consumption (for better-off households) or to generate income by selling fish on the road side (for landless households), enabling the poorest households the opportunity to generate a livelihood.
One model was fitted to analyse the effects of the type of employment on the odds of falling under the poverty line for people within the legal working ages of 15–64 (Table 4). It was apparent that ethnic minority inclusion had the greatest positive effect on the odds of being poor (OR = 3.16, 95% CI = [2.98, 3.35]). Being unemployed (OR = 1.35, 95% CI = [1.16, 1.57]), being employed in the fishery sector, or in agriculture increased the odds of falling into poverty. On the contrary, engaging in non-farming activities decreased the odds of being poor (OR = 0.64, 95% CI = [0.59, 0.69]).
### 4.2. Current trends of land use systems
In 2010, double rice was the land use system with the largest coverage in the Vietnamese Mekong Delta (35%), followed by triple rice (19%), fruits (14%), aquaculture (12%), forest-aquaculture (9%), rice-aquaculture (6%), cash crops (4%) and urban (1%). By 2020, triple rice became the land use system with the largest coverage (32%), followed by double rice (24%), aquaculture (15%), fruits (10%), forest-aquaculture (7%), rice-aquaculture (6%), cash crops (5%) and urban (1%). The biggest changes that occurred between 2010 and 2020 were
### Table 1
Summary of main storyline elements of the two scenarios. Scenario “specialisation” is based on the pair of decisions No. 124 and No. 939, which promote triple crop rice and aquaculture intensification. Scenario “diversification” is based on the pair of decisions No. 639 and No. 816, which state that there should be a development of sustainable rice aquaculture and crop diversification.
| | Specialisation | Diversification |
|--------------------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
| **Population and livelihoods** | Population in 2030: 9% increase (SSP2) | 9% increase (SSP2) |
| | Demand for livelihoods: N.A. | N.A. |
| | Demand for built-up: Matching annual population growth rate | Undershoot annual population growth rate |
| | Spatial pattern: Urban sprawl allowed, urban land has priority over all other uses | Compact and denser urban areas promoted |
| **Agriculture and aquaculture** | Demand for products: Agriculture 4.5% yearly increase; aquaculture 3% yearly increase (No.124); maintain rice export volume of 6–7 million tons (No.939) | Agriculture 3% yearly increase; aquaculture 7% increase (No.639) |
| | Structure of overall value: Agriculture 55%; aquaculture 43.5% (No.939) | Agriculture 51.9%; aquaculture 40% (No.639) |
| | Yields: Increase average value of land production by 50% to reach 110Mđr/ha (No.124) | Agriculture 180Mđr/ha; aquaculture 400Mđr/ha (No.639) |
| | Land-use planning: Convert one-rice crop to aquaculture (No.939); increase the area of triple rice and promote large-scale farming (No.124) | Reduce freshwater and ineffective rice systems, increase aquaculture and orchards (No.816); increase rice-aquaculture area (No.639) |
| **Climate change and salinity** | Climate change scenario: RCP4.5 | RCP4.5 |
| | Salinity projection scenario: Subsidence B2; riverbed level incision of 0.05 m/yr | Subsidence B2; riverbed level incision of 0.05 m/yr |
| | Water management: Develop hard infrastructure for water management with high dykes and sluice gates (No.124) | Increase of lush flood in fields; encourage small dykes system with agricultural rotation (No.816) |
### Table 2
Results of the logistic models for the three main employment sectors. The dependent variable represents the odds of engaging in certain employment for people who are within the legal working age. The explanatory variables represent the share of area of land use systems found in the Vietnamese Mekong Delta. Statistical modelling based on the 2010 Census on Population and Housing in Vietnam and Authors’ land use systems map derived from 2010 MODIS data.
| | AGRICULTURAL | | | | FISHERY | | | | INDUSTRY | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Odds | LB | UB | pval | Odds | LB | UB | pval | Odds | LB | UB | pval |
| (Intercept) | 0.52 | 0.50 | 0.55 | 0.00*** | 0.33 | 0.31 | 0.35 | 0.00*** | 0.07 | 0.06 | 0.08 | 0.00*** |
| Ethnic minorities | 1.57 | 1.55 | 1.59 | 0.00*** | 0.48 | 0.47 | 0.49 | 0.00*** | 0.72 | 0.70 | 0.73 | 0.00*** |
| Travel duration | 1.00 | 1.00 | 1.00 | 0.00*** | 1.00 | 1.00 | 1.00 | 0.00*** | 0.99 | 0.99 | 0.99 | 0.00*** |
| Triple rice | 1.70 | 1.62 | 1.78 | 0.00*** | 0.02 | 0.02 | 0.02 | 0.00*** | 2.95 | 2.66 | 3.26 | 0.00*** |
| Double rice | 1.85 | 1.77 | 1.94 | 0.00*** | 0.12 | 0.11 | 0.12 | 0.00*** | 2.45 | 2.22 | 2.71 | 0.00*** |
| Cash crops | 1.43 | 1.35 | 1.50 | 0.00*** | 0.11 | 0.10 | 0.12 | 0.00*** | 8.44 | 7.60 | 9.37 | 0.00*** |
| Fruits | 2.01 | 1.92 | 2.11 | 0.00*** | 0.04 | 0.03 | 0.04 | 0.00*** | 3.09 | 2.79 | 3.42 | 0.00*** |
| Forest | 0.57 | 0.54 | 0.61 | 0.00*** | 3.72 | 3.45 | 4.01 | 0.00*** | 0.91 | 0.80 | 1.04 | 0.17 |
| Aquaculture | 0.11 | 0.11 | 0.12 | 0.00*** | 7.12 | 6.71 | 7.57 | 0.00*** | 1.65 | 1.48 | 1.83 | 0.00*** |
| Rice shrimp | 0.32 | 0.30 | 0.33 | 0.00*** | 5.07 | 4.77 | 5.37 | 0.00*** | 1.87 | 1.69 | 2.08 | 0.00*** |
| Urban | 0.50 | 0.47 | 0.52 | 0.00*** | 0.02 | 0.02 | 0.03 | 0.00*** | 14.57 | 13.13 | 16.19 | 0.00*** |
| AIC | 433939 | | | | 216026 | | | | 239561 | | | |
### Table 3
Results of the logistic models for labour types. The dependent variable represents the odds of engaging in certain labour types for people who are within the legal working age. The explanatory variables represent the share of area of land use systems found in the Vietnamese Mekong Delta. Statistical modelling based on the 2010 Census on Population and Housing in Vietnam and Authors’ land use systems map derived from 2010 MODIS data.
| | ACTIVE | | | | SELF-EMPLOYED | | | | SALARIED | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Odds | LB | UB | pval | Odds | LB | UB | pval | Odds | LB | UB | pval |
| (Intercept) | 9.60 | 8.93 | 10.32 | 0.00*** | 1.85 | 1.76 | 1.94 | 0.00*** | 0.35 | 0.33 | 0.37 | 0.00*** |
| Ethnic minorities | 0.92 | 0.90 | 0.94 | 0.00*** | 0.75 | 0.74 | 0.76 | 0.00*** | 1.32 | 1.30 | 1.34 | 0.00*** |
| Travel duration | 1.00 | 1.00 | 1.00 | 0.00*** | 1.00 | 1.00 | 1.00 | 0.00*** | 1.00 | 1.00 | 1.00 | 0.00*** |
| Triple rice | 0.78 | 0.73 | 0.84 | 0.00*** | 0.70 | 0.67 | 0.74 | 0.00*** | 1.34 | 1.27 | 1.41 | 0.00*** |
| Double rice | 0.88 | 0.82 | 0.94 | 0.00*** | 0.65 | 0.62 | 0.68 | 0.00*** | 1.54 | 1.46 | 1.63 | 0.00*** |
| Cash crops | 1.17 | 1.08 | 1.26 | 0.00*** | 0.44 | 0.42 | 0.47 | 0.00*** | 2.70 | 2.55 | 2.86 | 0.00*** |
| Fruits | 1.63 | 1.52 | 1.76 | 0.00*** | 0.94 | 0.89 | 0.98 | 0.01** | 1.29 | 1.22 | 1.36 | 0.00*** |
| Forest | 1.97 | 1.79 | 2.17 | 0.00*** | 1.76 | 1.65 | 1.88 | 0.00*** | 0.68 | 0.63 | 0.73 | 0.00*** |
| Aquaculture | 0.91 | 0.85 | 0.98 | 0.02* | 1.16 | 1.11 | 1.22 | 0.00*** | 0.78 | 0.74 | 0.82 | 0.00*** |
| Rice shrimp | 1.16 | 1.07 | 1.25 | 0.00*** | 1.33 | 1.26 | 1.40 | 0.00*** | 0.75 | 0.71 | 0.80 | 0.00*** |
| Urban | 1.32 | 1.22 | 1.43 | 0.00*** | 0.22 | 0.21 | 0.23 | 0.00*** | 5.75 | 5.42 | 6.10 | 0.00*** |
| AIC | 78800 | | | | 196516 | | | | 165204 | | | |
Fig. 4. Effect of land use systems on the likelihood of engaging in the four main types of rural employment. The dependent variables represent the odds of working in a certain sector for people who are within legal working age. The explanatory variables represent the share of area of land use systems found in the Vietnamese Mekong Delta. Statistical modelling based on the 2010 Census on Population and Housing in Vietnam and Authors’ land use systems map derived from 2010 MODIS data. Non-significant entries are drawn as hollow points.
The biggest changes between 2010 and 2020 were the conversion of 34% of the 2010 double rice area and 23% of the 2010 fruits area into triple rice (Fig. 5), which is in line with national statistics and estimations (Hui et al., 2022; Van Kien et al., 2020). The increase in aquaculture area was mostly due to the conversion of 35% of the 2010 rice-aquaculture area to aquaculture, while the increase in cash crops was mostly due to the conversion of 4% of the 2010 double rice area. Overall, we found that 58% of the Mekong area did not undergo land system change between 2010 and 2020. In particular, the two most specialised land use systems saw little conversion of their areas to other land use systems (84% persistence for aquaculture and 83% for triple rice), whereas only around half of the area of the other land use systems persisted: 53% for double rice and 48% for both fruit trees and rice-aquaculture.
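Conversion and persistence figures of this kind come from cross-tabulating two categorical land-use maps: each diagonal cell of the transition matrix counts area that stayed in its class, and dividing it by the class's initial total gives persistence. A minimal sketch on toy data (the class codes and tiny arrays are illustrative, standing in for the full 2010 and 2020 delta rasters):

```python
import numpy as np

# Toy categorical land-use maps for two dates
# (codes: 0 = double rice, 1 = triple rice, 2 = aquaculture).
lu_2010 = np.array([0, 0, 0, 1, 1, 2, 2, 0, 1, 2])
lu_2020 = np.array([1, 0, 1, 1, 1, 2, 2, 0, 1, 2])

n_classes = 3
# Cross-tabulate: rows = 2010 class, columns = 2020 class.
transition = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(transition, (lu_2010, lu_2020), 1)

# Persistence of a class = share of its 2010 area still in that class in 2020.
persistence = np.diag(transition) / transition.sum(axis=1)
print(transition)
print(persistence)
```

Here half of the toy double rice area converts to triple rice while the other two classes persist fully; on the real maps the same matrix yields the 84%, 83%, 53% and 48% persistence shares reported above.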
Based on the "Active" model described above, we predicted employment opportunities in 2020 from the land use systems distribution while controlling for population change (+0.9% annual increase). We found that the land use system transitions observed between 2010 and 2020 led to a 3.2% decrease in employment opportunities.
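Mechanically, such a prediction plugs the new land-use shares into the fitted logistic model and converts the log-odds back to a probability. A hedged sketch of that step, using the log of the odds ratios reported for the "Active" model as coefficients (the variable names, the subset of predictors and the commune shares are illustrative, not the authors' actual pipeline):

```python
import numpy as np

# Coefficients on the log-odds scale, taken as the log of the odds ratios
# reported for the "Active" model (intercept 9.60, triple rice 0.78,
# double rice 0.88). Other predictors are omitted for brevity.
coefs = {"intercept": np.log(9.60),
         "triple_rice": np.log(0.78),
         "double_rice": np.log(0.88)}

def p_active(shares: dict) -> float:
    """Predicted probability of being economically active for a location,
    given its land-use shares (values in [0, 1])."""
    logit = coefs["intercept"] + sum(coefs[k] * v for k, v in shares.items())
    return 1.0 / (1.0 + np.exp(-logit))

# A hypothetical commune shifting from double to triple rice over the decade.
p_2010 = p_active({"double_rice": 0.8, "triple_rice": 0.1})
p_2020 = p_active({"double_rice": 0.3, "triple_rice": 0.6})
print(p_2020 < p_2010)  # intensification lowers the predicted activity rate
```

Aggregating such per-location predictions over the whole delta, while scaling the working-age population forward, is what produces the overall employment change estimate.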
### 4.3. Projected changes of land use systems and their impacts on employment
The largest expansions of triple rice areas were observed under the specialisation scenario in the West (Trans-Bassac depression), North-West (Long Xuyen-Ha Tien quadrangle) and North-East (Plain of Reeds), following the intensification of double rice cropping systems into triple rice, while this transition remained moderate in the diversification scenario (Fig. 5). As a consequence, double rice systems covered only 9% of the total area under the specialisation scenario,
### Table 4
Results of the logistic model for poverty. The dependent variable represents the odds of falling under the national poverty line. The explanatory variables represent main employment types. Statistical modelling based on the 2010 Census on Population and Housing in Vietnam.
| | POVERTY | | | |
|---|---|---|---|---|
| | Odds | LB | UB | pval |
| (Intercept) | 0.00 | 0.00 | 0.00 | 0.00*** |
| Ethnic minorities | 3.16 | 2.98 | 3.35 | 0.00*** |
| Agricultural employment | 1.17 | 1.08 | 1.27 | 0.00*** |
| Fishery employment | 1.26 | 1.17 | 1.36 | 0.00*** |
| Non-agricultural employment | 0.64 | 0.59 | 0.69 | 0.00*** |
| Inactive | 1.35 | 1.16 | 1.57 | 0.00*** |
| AIC | 10274 | | | |
### Table 5
Spatial extent of the Vietnamese Mekong Delta land use systems in 2030 under different scenarios and evolution of associated land use system services. Analysis for 2010 and 2020 based on Earth observation, while projections for 2030 were simulated using CLUMondo, modelling global environmental change (RCP4.5) under different policy options (specialisation: policy scenario based on Decision No. 124; diversification: policy scenario based on Decision No. 639). The overall impact on employment was estimated based on logistic regressions developed using the 2010 Census on Population and Housing in Vietnam.
| Share of land systems | 2010 (% of total area) | 2020 (% of total area) | 2030 (% of total area) |
|-----------------------|------------------------|------------------------|------------------------|
| Triple rice | 19.0 | 35.0 | 37.0 |
| Double rice | 35.0 | 24.0 | 9.1 |
| Cash crops | 3.6 | 5.0 | 0.0 |
| Fruits | 13.8 | 24.9 | 27.0 |
| Forest-aquaculture | 9.0 | 6.7 | 5.5 |
| Aquaculture | 11.9 | 16.1 | 20.6 |
| Rice-aquaculture | 6.0 | 6.0 | 0.0 |
| Urban | 0.8 | 0.8 | 0.8 |
Evolution of land system services between 2020 and 2030 (in %)

| | Specialisation | Diversification |
|---|---|---|
| Livelihoods | -46.0 | +0.9 |
| Built-up | -2.9 | -8.2 |
| Agricultural products | -1.0 | -6.8 |
| Aquaculture products | -13.6 | -6.5 |
compared to 25% under the diversification scenario. Symmetrically, triple rice systems covered 37% (specialisation scenario) and 20% (diversification scenario) of the total area (Table 5). More rice-aquaculture systems were preserved in the diversification scenario: 3% more than in the specialisation scenario. In both scenarios, the area of intensive aquaculture increased around the South-West (Cà Mau peninsula), ranging between 18% (diversification scenario) and 27% (specialisation scenario) of the total area. Cash crops were the land use system that decreased the most in both scenarios, disappearing completely in the specialisation scenario, while only a very small area remained under the diversification scenario, located in the South-East (coastal flat) near Trà Vinh. The reduction in cash crop land use systems in the freshwater alluvial area near Cần Thơ was mostly due to conversion to orchards and rice land use systems. There was also a substantial increase in urban systems (+14%) in the Cà Mau peninsula under the specialisation scenario at the expense of rice-aquaculture and fruits (Fig. 6).
The overall change in employment opportunities between 2020 and 2030 (Table 5) is far more favourable under the diversification scenario (+0.9%) than under the specialisation scenario (−46.0%). However, the increase in employment opportunities under the diversification scenario comes at the expense of agricultural gross product (diversification −6.8%; specialisation −1.0%). Interestingly, despite an increase in intensive aquaculture under the specialisation scenario, the relative change in aquaculture gross product is lowest in that scenario (specialisation −13.6%; diversification −6.5%). Finally, as urban sprawl was constrained in the diversification scenario, the relative change in built-up area is much lower in this scenario (−8.2%), while built-up area grew under the specialisation scenario (+2.9%).
### 5. Discussion
### 5.1. Land use systems and employment opportunities
Understanding the links between land use systems and the services they provide under the increasing pressure exerted by global environmental change is a central part of spatial planning and the sustainable management of resources, as it underpins a balanced set of land use options (Kremen and Merenlender, 2018). However, current approaches to spatial planning often focus on maximising economic value via extractive models of land use, thus exacerbating environmental degradation. While the latest evidence emphasises the importance of explicitly including livelihood provision in agri-food system planning (Davis et al., 2022), no earlier studies have quantitatively explored the associations between land use systems and employment opportunities.
Our findings show that the mode of production has a significant effect on employment provision. Intensive farming land use systems (rice monoculture, aquaculture) support less employment than more diversified systems, corroborating earlier findings that transitions from shifting cultivation to intensive cropping have a negative impact on employment opportunities (Dressler et al., 2017; Tran, 2019).
The results of our study demonstrate that diversified farming systems (forest-aquaculture, orchards, rice-aquaculture), as promoted under Decision 639, support more employment, yet they do not require a large agricultural labour force in comparison with Decision 124. As agricultural land use systems become more diversified (e.g. smaller fields), they also provide alternative employment opportunities for landless households, such as work in processing companies and small businesses (packaging, upcycling of agri-processing waste) (Brunerová et al., 2020). Finally, we showed that access to urban areas decreases the likelihood of unemployment, thanks to the provision of off-farm employment (industry, construction, transport) (de Bruin et al., 2021).
### 5.2. Sustainability of the Mekong Delta under global environmental change and current policies
Despite the commitment to sustainable agriculture in post-2013 policies, Decisions 124 and 939 (2012) aim at intensifying production by converting single-rice cropping in water-logged areas into intensive aquaculture and by increasing rice production through triple rice cropping. This strategic planning is expected to deliver production growth of 4.2%, with the following farm-related GDP shares: agriculture (55%), aquaculture (43.5%) and forestry (1.5%). The results from our model show that achieving this target under RCP4.5 will exacerbate currently observed trends, with an increase in intensive aquaculture and triple rice cropping systems at the expense of rice-aquaculture, forest-aquaculture and double rice cropping systems. This land planning strategy is vulnerable to global environmental change, as the three main land use systems (triple rice cropping, intensive aquaculture, orchards) will face (i) hot temperatures during the rainy season, leading to heat stress (rice and shrimp farming); (ii) prolonged rain during the wet season, leading to floods (rice and orchards); and (iii) salinity intrusion (orchards). Moreover, such systems lead to increased natural resource management conflicts over water allocation, as well as cross-field contamination and pollution (Tran et al., 2021). Finally, while the policy narrative emphasises the need to prevent out-migration, our findings show that pushing for intensive farming decreases employment opportunities by 46%, especially for low-skilled labourers.
Decisions 639 and 816 (Socialist Republic of Vietnam, 2014, 2018)
have objectives that align better with Resolution 120 and its Action Programme, but also with national-level climate change policies such as the 2021 National Strategy on Green Growth and the 2017 Sustainable Development Goals National Action Plan. They take climate change into account (transition of rice land to aquaculture due to sea-level-rise) and aim at decreasing the area of ineffective and freshwater land use systems (especially rice area), while increasing more diversified systems (rice-aquaculture, forest-aquaculture and orchards). In parallel, they aim to develop programs that create opportunities for agricultural labourers while accompanying livelihood transition. The findings from our model suggest that these two strategic decisions lead to a more balanced approach in land services in 2030 compared to the
implementation of Decisions 124 and 939. Although favouring rice-aquaculture and double rice cropping systems over triple rice cropping systems negatively affects agricultural production, the overall balance of land system services is greatly improved compared to the other scenario, while livelihood opportunities remain stable. The greater diversity of land use systems also reduces vulnerability to global environmental change, for instance by preventing the yield losses linked to the sediment starvation associated with triple rice cropping (Chapman and Darby, 2016).
In practice, however, considerable obstacles remain as tensions continue to exist in the policy landscape between intensification and sustainability, which threatens to derail the sustainability goals in favour of a GDP-centred growth model (Hutton et al., 2021). Although provincial governments were successfully brought on board during the formulation of the 2013 Mekong Delta Plan (Seijger et al., 2017; Vo et al., 2019), a lack of political will and financial resources are proving to be obstacles for strategic spatial planning and the realisation of sustainability goals (Malekpour et al., 2017; Demazière, 2018; Gustafsson et al., 2019) that would put the Mekong delta onto a path towards a climate resilient future.
### 5.3. Implications for sustainable and inclusive strategic spatial planning
In this study, we translated land use planning policies into a land use change modelling framework. In analysing the effects of global change on local-scale land management, we went beyond applying only global demand projections. We developed two scenarios representing the main Vietnamese planning policies under climate change and salinity intrusion (RCP4.5). One caveat of this study is that policy scenarios were modelled as mutually exclusive (specialisation vs. diversification). In reality, both scenarios might coexist spatially and across scales, depending on implementing agencies and place-based strategic decisions by local governments. Moreover, our predictions of employment, based solely on land use systems, do not capture the complexity of the livelihood strategies that rural households put in place. Nonetheless, the patterns we found allow us to advocate for the inclusion of employment as a service of land use systems in planning policies. Putting employment at the core of land use planning enables a more balanced future, situated between the two current strategies in terms of agriculture and aquaculture gross product, while ensuring an increase in livelihood opportunities.
A further caveat of our modelling approach is the assumption that the association between land use systems and employment remains constant over time. Furthermore, to develop a full picture of rural employment, additional studies are needed that include gender-sensitive modelling approaches (Markussen et al., 2018). Nevertheless, the above findings suggest several courses of action for public policies and schemes to sustain rural livelihoods and reduce rural out-migration, thereby reducing urban and rural poverty. We show the necessity of moving from a profit-driven model of land use systems, as seen in current agricultural policies around the world (Garrone et al., 2019), towards a more holistic approach that incorporates livelihood and biodiversity goals in rural policy design. Characterising the extent to which livelihoods can be supported by land use systems contributes to the wider ecosystem services framework and helps design more socially and environmentally viable strategies for the future.
### 6. Conclusion
This study determined associations between land use systems and employment opportunities and modelled future land use systems in the Vietnamese Mekong Delta under global environmental change and current policies. Our findings bring a new perspective to land use system science and livelihood studies by demonstrating that, at a territorial level, more intensive farming systems provide fewer employment opportunities than more diversified systems. We showed that current development policies have potentially conflicting aims and that current policy goals under RCP4.5 may lead to a drastic increase in intensive aquaculture and triple rice cropping systems at the expense of more diversified systems. We argue that including employment opportunities as a policy target leads to more diversified systems that provide more employment without compromising overall agriculture and aquaculture gross product.
This paper provides an approach for researchers and policy-makers to consider employment in land use planning policy design, and thus to target specific land use systems to maximise their environmental and social services rather than focusing solely on their economic value per unit area. Future interventions should address rural development holistically rather than focusing only on agricultural development, or at least ensure that current planning policies do not have a detrimental effect on rural employment. As such, they should support transitions to off-farm livelihoods in well-connected communities, while steering towards more diversified land use systems that could offer on-farm livelihoods to landless households.
Declaration of Competing Interest
All authors declare that they have no conflicts of interest.
Data Availability
Data will be made available on request.
Acknowledgment
This research was co-funded by the Biotechnology and Biological Sciences Research Council under a Global Challenges Research Foundation Award for Global Agriculture and Food Systems Research (BB/P022693/1), and by the UK National Environmental Research Council (NERC) and the Viet Nam National Foundation for Science and Technology Development (NAFOSTED) under the project ‘The resilience and sustainability of the Mekong delta to changes in water and sediment fluxes (RAMESSES)’ (grant agreement NE/P014704/1). Prior to commencing the study, ethical clearance was obtained from the University of Southampton [ERGO number 27665]. Data used in this research come from the Population and Housing Census of Vietnam provided by the General Statistics Office of Vietnam. The authors wish to thank all participants for providing their time and knowledge. Additional gratitude goes to Nguyễn Ngọc Diệp who helped in the organisation, planning and interpretation of the Rapid Rural Appraisal.
Appendix A. Supporting information
Supplementary data associated with this article can be found in the online version at doi:10.1016/j.landusepol.2023.106752.
References
Berchoux, T., Watmough, G.R., Hutton, C.W., et al., 2019. Agricultural shocks and drivers of livelihood precariousness across Indian rural communities. Landsc. Urban Plan. 189, 307–319. https://doi.org/10.1016/j.landurbplan.2019.04.014.
Berchoux, T., Watmough, G.R., Amoako Johnson, F., et al., 2020. Collective influence of household and community capitals on agricultural employment as a measure of rural poverty in the Mahanadi Delta, India. Ambio 49, 281–298. https://doi.org/10.1007/s13280-019-01150-4.
Brunerová, A., Roulih, H., Brozek, M., et al., 2020. Briquetting of sugarcane bagasse as a proper waste management technology in Vietnam. Waste Manag. Res. 38 (11) https://doi.org/10.1177/0734242X20938438.
Bussi, G., Darby, S.E., Whitehead, P.G., et al., 2021. Impact of dams and climate change on suspended sediment flux to the Mekong delta. Sci. Total Environ. 755, 142468 https://doi.org/10.1016/j.scitotenv.2020.142468.
Chapman, A., Darby, S.E., 2016. Evaluating sustainable adaptation strategies for vulnerable mega-deltas using system dynamic modelling: rice agriculture in the Mekong Delta’s An Giang Province, Vietnam. Sci. Total Environ. 559, 326–338. https://doi.org/10.1016/j.scitotenv.2016.01.162.
Davis, B., Lipper, L., Winters, P., 2022. Do not transform food systems on the backs of the rural poor. Food Secur. https://doi.org/10.1007/s12571-021-01214-3.
Deichmann, J.L., Canty, S.W.J., Akre, T.S.B., et al., 2019. Broadly defining “working lands”. Science 363 (6431), 1046–1048. https://doi.org/10.1126/science.aaw3007.
Demazière, C., 2018. Strategic spatial planning in a situation of fragmented local government: the case of France. DisP - Plan. Rev. 54 (2), 58–76. https://doi.org/10.1080/02513625.2018.1467045.
de Bruin, S., Dengel, A., van Vliet, J., 2021. Urbanisation as driver of food system transformation and opportunities for rural livelihoods. Food Secur. 13, 781–798. https://doi.org/10.1007/s12571-021-01182-8.
Diep, N.T.H., Nguyen, C.T., Diem, P.K., et al., 2022. Assessment on controlling factors of urbanization possibility in a newly developing city of the Vietnamese Mekong delta using logistic regression analysis. Phys. Chem. Earth 126 (10365). https://doi.org/10.1016/j.pce.2021.103605.
Dressler, W.H., Wilson, D., Clendenning, J., et al., 2017. The impact of swidden decline on livelihoods and ecosystem services in Southeast Asia: a review of the evidence from 1990 to 2015. Ambio 46, 291–310. https://doi.org/10.1007/s13280-016-0836-z.
Eslami, S., Hoekstra, P., Minderhoud, P.S.J., et al., 2021a. Projections of salt intrusion in a mega-delta under climatic and anthropogenic stressors. Nat. Commun. Earth Environ. 2, 142. https://doi.org/10.1038/s43247-021-00208-5.
Eslami, S., Hoekstra, P., Kernkamp, H.W.J., 2021b. Dynamics of salt intrusion in the Mekong Delta: results of field observations and integrated coastal–inland modelling. Earth Surf. Dyn. 9, 953–976. https://doi.org/10.5194/esurf-9-953-2021.
Fick, S.E., Hijmans, R.J., 2017. Worldclim 2: new 1-km spatial resolution climate surfaces for global land areas. Int. J. Climatol. 37 (2), 4302–4315. https://doi.org/10.1002/joc.5086.
Garrett, T., Appleby, M.C., Balmford, A., et al., 2013. Sustainable Intensification in agriculture: premises and policies. Science Vol 341 (6141), 33–34. https://doi.org/10.1126/science.1234485.
Garrone, M., Emmers, D., Olper, A., et al., 2019. Jobs and agricultural policy: Impact of the common agricultural policy on EU agricultural employment. Food Policy 87, 101744. https://doi.org/10.1016/j.foodpol.2019.101744.
Giller, K.E., Delaune, T., Silva, J.V., et al., 2021. The future of farming: who will produce our food. Food Secur. 13, 1073–1099. https://doi.org/10.1007/s12571-021-01184-6.
Gustafsson, S., Hermelin, B., Smas, L., 2019. Integrating environmental sustainability into strategic spatial planning: the importance of management. J. Environ. Plan. Manage. 62 (8), 1321–1338. https://doi.org/10.1080/09566166.2018.1405620.
Hutton, C.W., Heusenberger, O., Berchoux, T., et al., 2021. Stakeholder expectations of future policy implementation compared to formal policy trajectories: scenarios for agricultural food systems in the Mekong Delta. Sustainability 13 (10), 5534. https://doi.org/10.3390/su13105534.
Kremen, C., Merenlender, A.M., 2018. Landscapes that work for biodiversity and people. Science 362 (6412). https://doi.org/10.1126/science.aao6020.
Malek, Z., Verburg, P.H., Geijzendorffer, I.R., et al., 2018. Global change effects on land management in the Mediterranean region. Glob. Environ. Change Vol 50, 238–254. https://doi.org/10.1016/j.gloenvcha.2018.04.007.
Malekpour, S., Brueckner, K.R., de Haan, F.J.S., et al., 2017. Preparing for disruptions: a diagnostic strategy planning instrument for sustainable development. Cities Volume 63 (58–69). https://doi.org/10.1016/j.cities.2016.12.016.
Markussen, T., Fibæk, M., Tarp, F., et al., 2018. The happy farmer: self-employment and subjective well-being in Rural Vietnam. J. Happiness Stud. 19, 1613–1636. https://doi.org/10.1007/s10902-017-9858-x.
Mukherjee, N., 2005. Participatory rural appraisal: methodology and applications.
Nguyen, D.B., Clausss, K., Cao, S., et al., 2015. Mapping rice seasonality in the Mekong Delta with Multi-Year Envisat ASAR WSM data. Remote Sens. 7 (12), 15868–15893. https://doi.org/10.3390/rs71215808.
Rahaman, Z.A., Al Kafi, A., Al-Faisal, A., et al., 2022. Predicting Microscale Land Use/Land Cover Changes Using Cellular Automata Algorithm on the Northwest Coast of Peninsular Malaysia. Earth Syst. Environ. 6, 817–835. https://doi.org/10.1007/s41748-022-00318-4.
Seijger, C., Douven, W., van Halsema, G., et al., 2017. An analytical framework for strategic delta planning: negotiating consent for long-term sustainable delta development. J. Environ. Plan. Manag. 60 (8), 1485–1509. https://doi.org/10.1080/09640568.2016.1231667.
Shrestha, S., Bach, T.V., Pandey, V.P., 2016. Climate change impacts on groundwater resources in Mekong Delta under representative concentration pathways (RCPs) scenarios. Environ. Sci. Policy Vol 61, 1–13. https://doi.org/10.1016/j.envsci.2016.03.010.
Smajgl, A., Toan, T., Nhan, D., et al., 2015. Responding to rising sea levels in the Mekong Delta. Nat. Clim. Change 5, 167–174. https://doi.org/10.1038/nclimate2469.
Socialist Republic of Vietnam, 2012b. Decision No. 939/QĐ-TTg dated July 19, 2012 of the Prime Minister approving the overall plan on socio-economic development of the Mekong river delta till 2020. (https://thuvienphapluat.vn/van-ban/Xay-dung-do-thi/Decision-No-939-QD-TTg-approving-the-overall-plan-on-socio-economic-development-of-the-mekong-river-delta-till-2020).
Socialist Republic of Vietnam, 2012a. Decision No. 124/QD-TTg of February 2, 2012, approving the master plan for agricultural production development through 2020, with a vision to 2030. (https://thuvienphapluat.vn/van-ban/Linh-vuc-khac/Quyet-dinh-124-QD-TTg-phe-duyet-Quy-hoach-tong-the-phat-trien-san-xuat-134358.aspx).
Socialist Republic of Vietnam, 2014. Decision No. 639/QĐ-BNN-KH, dated April 2, 2014, approving agricultural and rural planning in the Mekong Delta to 2020 under conditions of climate change, with a vision to 2030. (https://thuvienphapluat.vn/van-ban/Tang-nghiep-Moi-truong/Quyet-dinh-639-QD-BNN-KH-2014-Quy-hoach-nong-sanh-trong-Dong-bang-Mekong-2014-2030).
Socialist Republic of Vietnam, 2017. Resolution 120/NQ-CP on sustainable and climate-resilient development in the Mekong Delta. (https://www.mekongdelaplan.com/resolution-coordination/government-resolution-120/).
Socialist Republic of Vietnam, 2018. Decision No. 816/QĐ-BNN-KH, dated March 7, 2018, promulgating the government’s action plan for the implementation of resolution No. 120/NQ-CP of November 17, 2017. (https://thuvienphapluat.vn/van-ban/Tai-nguyen-Moi-truong/Quyet-dinh-816-QD-BNN-KH-2018-thuc-hien-120-NQ-CP-phat-trien-dong-bang-Cung-Long-378109.aspx).
Socialist Republic of Vietnam, 2019. Decision 417/QD-TTg: Action program on sustainable development of the Mekong Delta in regard to climate change. (http://thuvienphapluat.vn/van-ban/Tai-nguyen-Moi-truong/Decision-417-QD-TTg-2019-program-sustainable-and-climate-resilient-development-of-the-Mekong-Delta/437759/tien-anh.aspx).
Szabo, S., Bronzidou, E., Renaud, F.G., et al., 2016. Population dynamics, delta vulnerability and environmental change: comparison of the Mekong, Ganges–Brahmaputra and Amazon delta regions. Sustain. Sci. 11, 539–554. https://doi.org/10.1007/s11625-016-0372-6.
Tilman, D., Balzer, C., Hill, J., et al., 2011. Global food demand and the sustainable intensification of agriculture. Proc. Natl. Acad. Sci. 108 (50) https://doi.org/10.1073/pnas.1116427108.
Tran, D.D., Huu, L.H., Hoang, L.P., et al., 2021. Sustainability of rice-based livelihoods in the upper floodplains of Vietnamese Mekong Delta: prospects and challenges. Agric. Water Manag. Volume 243 (106495) https://doi.org/10.1016/j.agwat.2020.106495.
Tran, H., Tran, T., Kervyn, M., 2015. Dynamics of Land Cover/Land Use Changes in the Mekong Delta, 1973–2011: A Remote Sensing Analysis of the Tran Van Thoi District, Ca Mau Province, Vietnam. Remote Sens. 7 (3) https://doi.org/10.3390/rs7030289.
Tran, T.A., 2019. Land use change driven out-migration: evidence from three flood-prone communities in the Vietnamese Mekong Delta. Land Use Policy Volume 88 (104157). https://doi.org/10.1016/j.landusepol.2019.104157.
Truong, Q.C., Nguyen, T.H., Tatsumi, K., et al., 2022. A land-use change model to support land-use planning in the Mekong Delta (MEKOLUC. Land 11 (2), 297. https://doi.org/10.3390/land11020297.
Valin, H., Sands, R.D., van der Mensbrugge, D., et al., 2014. The future of food demand: understanding differences in global economic models. Agric. Econ. Vol 45 (1), 51–67. https://doi.org/10.1111/agec.12089.
Van Asselen, S., Verburg, P.H., 2012. A Land System representation for global assessments and land use modeling. Glob. Change Biol. Vol 18 (10), 3125–3148. https://doi.org/10.1111/j.1365-2486.2012.02759.x.
Vasquez-Gomez, O., Quazi, Q.I., Parsons, P.R., et al., 2021. Establishing sustainable sediment budgets is critical for climate-resilient mega-deltas. Environ. Res. Lett. Vol 16 (064089) https://doi.org/10.1088/1748-9326/ac0f6c.
Vo, H.T.M., van Halsema, G., Seijger, C., et al., 2019. Political agenda-setting for strategic delta planning in the Mekong Delta: converging or diverging agendas of policy actors and the Mekong Delta Plan. J. Environ. Plan. Manag. https://doi.org/10.1080/09640568.2019.1571328.
Vu, H.T.D., Tran, D.D., Schenk, A., et al., 2022. Land use change in the Vietnamese Mekong Delta: new evidence from remote sensing. Sci. Total Environ. Vol.813 (151918) https://doi.org/10.1016/j.scitotenv.2021.151918. |
How to build the thermofield double state
William Cottrell\textsuperscript{a,b,c}, Ben Freivogel\textsuperscript{b,c}, Diego M. Hofman\textsuperscript{b} and Sagar F. Lokhande\textsuperscript{b}
\textsuperscript{a}Physics Department, Stanford University, Stanford, CA 94305, U.S.A.
\textsuperscript{b}Institute for Theoretical Physics, University of Amsterdam, 1090 GL Amsterdam, The Netherlands
\textsuperscript{c}GRAPPA, University of Amsterdam, 1090 GL Amsterdam, The Netherlands
ABSTRACT: Given two copies of any quantum mechanical system, one may want to prepare them in the thermofield double state for the purpose of studying thermal physics or black holes. However, the thermofield double is a unique entangled pure state and may be difficult to prepare. We propose a local interacting Hamiltonian for the combined system whose ground state is approximately the thermofield double. The energy gap for this Hamiltonian is of order the temperature. Our construction works for any quantum system satisfying the Eigenstate Thermalization Hypothesis.
KEYWORDS: AdS-CFT Correspondence, Black Holes, Effective Field Theories, Conformal Field Theory
ARXIV ePRINT: 1811.11528
1 Introduction
Given two copies of any quantum mechanical system, the thermofield double state $|\text{TFD}\rangle$ is the unique pure state
$$|\text{TFD}\rangle \equiv \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2} |n\rangle_L \otimes |n\rangle_R,$$
where $|n\rangle_{L,R}$ are the energy eigenstates of the individual systems. This is an entangled pure state of the full system with the property that each of the two copies is in the thermal density matrix with temperature $\beta^{-1}$.
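This defining property is easy to check numerically. The sketch below is ours, not part of the original discussion; the system size and temperature are arbitrary choices. Working in the energy eigenbasis, tracing $|\text{TFD}\rangle$ over one copy reproduces the thermal density matrix $e^{-\beta H}/Z$.

```python
# Numerical sketch: each copy of the TFD state is in the thermal density matrix.
import numpy as np

rng = np.random.default_rng(0)
N, beta = 6, 1.3

# Random energies standing in for the spectrum of one copy
E = np.sort(rng.normal(size=N))

# Coefficient matrix c_{nm} of |TFD> = Z^{-1/2} sum_n e^{-beta E_n/2} |n>_L |n>_R,
# written in the energy eigenbasis
amp = np.exp(-beta * E / 2)
Z = np.sum(amp**2)
c = np.diag(amp) / np.sqrt(Z)

# Reduced density matrix of the left copy: rho_L = c c^T
rho_L = c @ c.T

# Thermal density matrix e^{-beta H}/Z in the same basis
rho_thermal = np.diag(np.exp(-beta * E)) / Z

err = np.max(np.abs(rho_L - rho_thermal))
```

In the energy eigenbasis the check is almost immediate, but the same template extends to any basis and to the interacting constructions discussed in this paper.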
In a quantum theory with a gravity dual, this state is dual to an eternal black hole. Black holes remain poorly understood; it is a matter of debate whether an observer falling into a large black hole falls freely through the horizon, as predicted by the equivalence principle, or encounters a ‘firewall’ at the horizon.
Despite many papers on this topic, a consensus has not yet been reached. The primary difficulty is that the notion of a firewall depends on experiences of observers localized near an event horizon. However, local observables are not believed to exist in quantum gravity. In principle, all we should discuss is the S-matrix, but, from this data alone it is essentially impossible to decipher the experiences of the brave soul who sailed into the black hole and was long ago scrambled into Hawking radiation.
A key step forward was taken in [1, 2] where it was realized that by applying a simple perturbation coupling the two sides of an eternal AdS black hole one may make the wormhole traversable. This, in principle, allows us to probe behind the horizon without dealing with issues of bulk locality: all we need to do is send an observer from one side to the other and ask them how they felt. Susskind has predicted that we will be able to perform experiments of this type ‘within the next decade or two’ [3]. The eternal AdS black hole is dual to the thermofield double state of the two boundary CFTs [4, 5]. The thermofield double state is also of interest beyond the context of black holes, in the study of thermal field theories.
The first step in performing such experiments is to prepare two copies of a quantum system in the thermofield double (TFD) state. The goal of this article is to propose a simple way to do so. Our approach will be to look for an interacting Hamiltonian for the combined system whose ground state is the TFD state. If this ‘TFD Hamiltonian’ can be experimentally realized, and the system has a way to dissipate energy, then the system will eventually approach the TFD state.
One might worry that it is difficult to construct the TFD state: it is one state in a very large Hilbert space, and it is not defined in terms of a minimization principle. One simple definition is that the TFD state is generated by evolution in Euclidean time, but we have not been able to see how to use this definition in the laboratory. A particular worry is that there are many states that look roughly like the TFD state but differ by relative phases,
\[
| \text{TFD}_\phi \rangle = \frac{1}{\sqrt{Z}} \sum_n e^{i \phi n} e^{-\beta E_n/2} |n\rangle_L \otimes |n\rangle_R .
\]
These states have the same thermal density matrix for each of the two subsystems as the bona fide TFD, but they do not correspond to a bulk dual with a ‘short’ AdS wormhole. Trying to send the teleportee through these bulk geometries will result in disaster. Furthermore, there are of order $\exp(S)$ of these states, while the ‘real’ thermofield double is unique.
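This degeneracy can be made concrete with a small numerical sketch (ours, with arbitrary parameters): random phases $\phi_n$ leave both reduced density matrices untouched, so no single-sided measurement distinguishes $|\text{TFD}_\phi\rangle$ from $|\text{TFD}\rangle$, even though the global states differ.

```python
# Numerical sketch: dephased TFD states are single-sided indistinguishable.
import numpy as np

rng = np.random.default_rng(1)
N, beta = 6, 1.3
E = np.sort(rng.normal(size=N))

amp = np.exp(-beta * E / 2)
amp /= np.linalg.norm(amp)

# Coefficient matrices of |TFD> and a randomly dephased |TFD_phi>
c_tfd = np.diag(amp).astype(complex)
c_phi = np.diag(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, size=N)))

# Reduced density matrices of the left (c c^dag) and right (c^T c^*) copies
rho_L = lambda c: c @ c.conj().T
rho_R = lambda c: c.T @ c.conj()

err_L = np.max(np.abs(rho_L(c_tfd) - rho_L(c_phi)))
err_R = np.max(np.abs(rho_R(c_tfd) - rho_R(c_phi)))

# ...yet the global states are genuinely different:
overlap = abs(np.vdot(c_tfd.ravel(), c_phi.ravel()))
```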
Our central claim is that in quantum systems satisfying the Eigenstate Thermalization Hypothesis (ETH), the thermofield double state is in fact the ground state of a relatively simple Hamiltonian. Schematically, our claim is that a simple Hamiltonian of the form
\[
H_S \sim H^0_L + H^0_R + \sum_k c_k \left( \mathcal{O}^k_L - \mathcal{O}^k_R \right)^2
\]
has a ground state that is approximately the thermofield double. Here $H^0_{L,R}$ are the original Hamiltonians of the left and right systems, and the $\mathcal{O}^k_{L(R)}$ are any operators in the left (right) system. In the Quantum Field Theory context this Hamiltonian is local (in the sense of Effective Field Theory) if the system in question is in the thermodynamic limit. We summarize our results more precisely at the end of this introduction.
Of particular importance in this program is the gap, $\Delta E$, in the Hamiltonian we will be constructing. This is a measure of how quickly the desired state can be reached and how carefully the experiment must be controlled. It is also indirectly a measure of the complexity of the TFD state since the complexity scales like the time required to reach the state, which scales like $\Delta E^{-2}$ [6, 7]. We present evidence that the gap does not become exponentially small in the black hole entropy; in fact, the gap is of order the temperature as long as the number of different operators $\mathcal{O}^k$ is larger than a few.
1.1 Summary of results
To be more precise, we define $|\text{TFD}\rangle$ by
$$|\text{TFD}\rangle \equiv \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2} |n\rangle_L \otimes |n^\ast\rangle_R.$$
(1.4)
Here
$$|n^\ast\rangle \equiv \Theta |n\rangle$$
(1.5)
where $\Theta$ is an anti-unitary operator, such as CPT, that commutes with the original Hamiltonian. This definition is motivated by the path integral construction of the $|\text{TFD}\rangle$, which entangles electrons in the right theory with positrons in the left theory, etc. Our results can be summarized as follows:
- $|\text{TFD}\rangle$ is the ground state of the Hamiltonian
$$H_{\text{TFD}} = \sum_k c_k d_k^\dagger d_k,$$
$$d_k \equiv e^{-\beta(H^0_L + H^0_R)/4} \left(\mathcal{O}^k_L - \Theta \mathcal{O}^{k\dagger}_R \Theta^{-1}\right) e^{\beta(H^0_L + H^0_R)/4}.$$
(1.6)
where $H^0_L$ is the original Hamiltonian of the left theory, $\mathcal{O}^k_L$ is any operator in the left system and $\mathcal{O}^k_R$ is the same operator in the right system. This Hamiltonian has the exact TFD as its ground state, but it may be quite complicated in the case of interest where the original Hamiltonian $H^0$ is strongly coupled.
- $|\text{TFD}\rangle$ is the approximate ground state of the simple Hamiltonian
$$H_S = \sum_k c_k d_k^\dagger d_k + H^0_L + H^0_R, \quad d_k = \mathcal{O}^k_L - \Theta \mathcal{O}^{k\dagger}_R \Theta^{-1},$$
(1.7)
in systems satisfying the Eigenstate Thermalization Hypothesis (ETH), where the $c_k$ are appropriately chosen positive numbers.
- The energy gap between the ground state and the first excited state is of order the temperature scale in systems satisfying ETH, as long as the number of different operators \( \mathcal{O}^k \) included in the Hamiltonian is at least a few. Therefore, introducing couplings between a handful of simple operators in the two theories is sufficient to pick out \( |TFD\rangle \) uniquely.
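These claims can be tested directly in a small random system. The sketch below is our own illustration, not the paper's: we take $\Theta$ to be complex conjugation in the energy eigenbasis (so $|n^\ast\rangle = |n\rangle$), build $d_k$ for two random operators, and confirm that $H_{\text{TFD}}$ annihilates $|\text{TFD}\rangle$ and has it as its unique zero-energy ground state with a finite gap.

```python
# Numerical sketch of the exact TFD Hamiltonian in a random 5-level system.
import numpy as np

rng = np.random.default_rng(2)
N, beta = 5, 0.7
E = np.sort(rng.normal(size=N))    # spectrum of H^0 in its eigenbasis
I = np.eye(N)
Etot = np.add.outer(E, E).ravel()  # eigenvalues of H_L^0 + H_R^0 on |n>_L |m>_R

def d_op(O):
    """d = e^{-beta(H_L+H_R)/4} (O_L - Theta O_R^dag Theta^{-1}) e^{+beta(H_L+H_R)/4}.
    With Theta = complex conjugation in the energy basis, Theta O^dag Theta^{-1} = O^T."""
    X = np.kron(O, I) - np.kron(I, O.T)
    return np.exp(-beta * Etot / 4)[:, None] * X * np.exp(beta * Etot / 4)[None, :]

# |TFD> as a vector in the doubled energy basis (index n*N + m for |n>_L |m>_R)
amp = np.exp(-beta * E / 2)
tfd = np.diag(amp / np.linalg.norm(amp)).ravel().astype(complex)

O1 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O2 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
d1, d2 = d_op(O1), d_op(O2)

ann = max(np.linalg.norm(d1 @ tfd), np.linalg.norm(d2 @ tfd))  # both vanish

H_tfd = d1.conj().T @ d1 + d2.conj().T @ d2  # taking c_1 = c_2 = 1
w, v = np.linalg.eigh(H_tfd)
ground_energy, gap = w[0], w[1] - w[0]
overlap = abs(np.vdot(v[:, 0], tfd))
```

With a single operator the kernel of $d^\dagger d$ is larger (anything β-commuting with one operator survives); two generic operators suffice to make the ground state unique, illustrating the "handful of simple operators" statement above.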
We begin in the next section by giving our general prescription for the TFD Hamiltonian, applying it to a number of examples in section 3. In section 4 we analyze our Hamiltonian using effective field theory and show that the UV cutoff for the EFT is of order the temperature of the TFD state. In sections 5 and 6 we analyze in detail the gap for the exact and approximate Hamiltonians. In section 7.1 we discuss some sources of error. In section 7.2 we point out a connection between our setup and certain NP-complete problems, as well as offering a speculative interpretation of our construction as a model for a quantum learning algorithm. More precisely, we are trying to ‘learn’ a state given a small number of operator relations on this state, and successful learning may be interpreted as the absence of a firewall. We close with a number of directions for future research in section 7.3.
**Previous work.** During the lengthy interval it took us to complete this work, the interesting paper [8] by Maldacena and Qi appeared, where the problem of constructing the TFD state was considered and a similar expression for the TFD Hamiltonian was proposed. In [8], the TFD Hamiltonian is similar to equation (1.7), with the interaction term being just \( \mathcal{O}_L \cdot \mathcal{O}_R \). However, they only study the \( q \)-body SYK model at large \( N \). Further, the coupling constant is taken to be \( \mathcal{O}(1) \), unlike our situation. They show that the ground state of this Hamiltonian is approximately the TFD state, albeit with a small overlap with the real TFD state. This is different from our case, where the overlap is significant.
A few years ago, McGreevy and Swingle [9, 10] introduced a general formalism to build mixed states in many-body systems using quantum circuits. They called this formalism \( s \)-sourcery. Using this formalism, they constructed TFD Hamiltonians for free theories. These Hamiltonians are similar to what we construct here in the free case, and we compare our results where appropriate. Our main goal, however, is to offer a simple proposal for the strongly coupled theories that are of interest for holography.
2 General construction of the TFD state
Our goal is now to provide a prescription for preparing the thermofield double state. We start with two identical quantum systems and then attempt to construct some interaction such that the ground state of the combined system is precisely \( |TFD\rangle \). As explained in the introduction, we define \( |TFD\rangle \) by
\[
|TFD\rangle \equiv \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2} |n\rangle_L \otimes |n^\ast\rangle_R ,
\]
(2.1)
where the \( |n\rangle \) are energy eigenstates and \( |n^\ast\rangle \equiv \Theta |n\rangle \), with \( \Theta \) being an anti-unitary operator such as CPT.
To find this Hamiltonian, let us start with any operator in the left theory \( \mathcal{O}_L \). Let \( \mathcal{O}_R \) be the corresponding operator in the right theory. Then an operator \( d \) of the form,
\[
d \equiv e^{-\beta (H^0_L + H^0_R)/4} \left( \mathcal{O}_L - \Theta \mathcal{O}_R^\dagger \Theta^{-1} \right) e^{\beta (H^0_L + H^0_R)/4}
\]
(2.2)
will annihilate the TFD state,
\[
d \ket{\text{TFD}} = 0.
\]
(2.3)
This is because in the energy eigenbasis, the matrix elements of the two terms in \( d \) are equal in magnitude. After some algebra,
\[
d \ket{\text{TFD}} = \frac{1}{\sqrt{Z}} \sum_{ij} e^{-\beta (E_i + E_j)/4} \left( (\mathcal{O})_{ij} \ket{i} \ket{j^*} - (\mathcal{O}^\dagger)^*_{ji} \ket{i} \ket{j^*} \right) = 0.
\]
(2.4)
with
\[
(\mathcal{O})_{ij} \equiv \langle i | \mathcal{O} | j \rangle
\]
(2.5)
The two terms in the parentheses in (2.4) come from the action of \( \Theta \) and are equal. Then the TFD Hamiltonian in general will be a sum over such operators,
\[
H_{\text{TFD}} = \sum_i c_i d_i^\dagger d_i,
\]
(2.6)
where \( c_i \) is a set of positive numbers. A useful simplification is that we do not need to include such terms for every operator separately. A state that is annihilated by the \( d \) operator built from \( \mathcal{O}_1 \) and the \( d \) operator built from \( \mathcal{O}_2 \) is automatically annihilated by the \( d \) operator built from their commutator. This is straightforward to show but important for us, so we formalize this as the
**Commutator property.** Given \( \mathcal{O}_1 \) and \( \mathcal{O}_2 \) such that
\[
d_1 \ket{\text{TFD}} = d_2 \ket{\text{TFD}} = 0.
\]
(2.7)
Then \( d_3 \ket{\text{TFD}} = 0 \), where
\[
d_3 \equiv e^{-\beta (H^0_L + H^0_R)/4} \left( \mathcal{O}_{3,L} - \Theta \mathcal{O}_{3,R}^\dagger \Theta^{-1} \right) e^{\beta (H^0_L + H^0_R)/4}
\]
(2.8)
and \( \mathcal{O}_3 \equiv [\mathcal{O}_1, \mathcal{O}_2] \). This property can be shown by considering \([d_1, d_2] \ket{\text{TFD}} = 0\) and simplifying the expression while using \([\mathcal{O}_L, \mathcal{O}_R] = 0\) and properties of Hermitian conjugation.
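The property is easy to confirm numerically. The sketch below is ours, again taking $\Theta$ to be complex conjugation in the energy eigenbasis: the $d$ operator built from $[\mathcal{O}_1, \mathcal{O}_2]$ annihilates $|\text{TFD}\rangle$ just as those built from $\mathcal{O}_1$ and $\mathcal{O}_2$ do.

```python
# Numerical sketch of the commutator property for d operators, eq. (2.2).
import numpy as np

rng = np.random.default_rng(3)
N, beta = 5, 0.9
E = np.sort(rng.normal(size=N))
I = np.eye(N)
Etot = np.add.outer(E, E).ravel()

def d_op(O):
    # d from eq. (2.2), with Theta O^dag Theta^{-1} = O^T for Theta = conjugation
    X = np.kron(O, I) - np.kron(I, O.T)
    return np.exp(-beta * Etot / 4)[:, None] * X * np.exp(beta * Etot / 4)[None, :]

amp = np.exp(-beta * E / 2)
tfd = np.diag(amp / np.linalg.norm(amp)).ravel().astype(complex)

O1 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O2 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O3 = O1 @ O2 - O2 @ O1               # the commutator [O1, O2]

# d_1 and d_2 annihilate |TFD>; so does d_3 built from the commutator
residuals = [np.linalg.norm(d_op(O) @ tfd) for O in (O1, O2, O3)]
worst = max(residuals)
```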
Thus if one has a set \( \mathcal{A} \) such that the elements of \( \mathcal{A} \) generate all the operators in the QFT by commutation algebra, then the TFD Hamiltonian need only be defined as,
\[
H_{\text{TFD}} = \sum_{i \in \mathcal{A}} c_i d_i^\dagger d_i.
\]
(2.9)
\( H_{\text{TFD}} \) is manifestly positive-definite, being a sum of positive-definite terms.
In principle, this Hamiltonian could have more than one ground state. We return to this question later when we calculate the gap for Quantum Field Theories. For finite-dimensional Hilbert spaces, uniqueness is easy to prove.
**Uniqueness of the ground state.** One can easily prove that the ground state of this Hamiltonian is unique. The trick consists in linearly mapping the Hilbert space of the double theory to the space of operators of the single sided left theory. Provided a choice of an anti-unitary operator $\Theta$ we can define a linear map $\mathcal{M}$ as
$$\mathcal{M} : |n\rangle_L \otimes |m^*\rangle_R \rightarrow |n\rangle \langle m|.$$
(2.10)
The problem of finding the original ground state has now been mapped to that of finding the set of operators that lie in the kernel of the super-operators related through the linear map $\mathcal{M}$ to (2.2)
$$D_i = [\mathcal{O}_i, \cdot]_\beta$$
(2.11)
for all $i \in A$. The $\beta$-commutator above is defined as:
$$[\mathcal{O}, \mathcal{Q}]_\beta = e^{-\beta H^0/4}\mathcal{O}e^{+\beta H^0/4}\mathcal{Q} - \mathcal{Q}e^{+\beta H^0/4}\mathcal{O}e^{-\beta H^0/4}$$
(2.12)
Now, the only operator that $\beta$-commutes with all operators in $A$ is the $\beta$-identity $\mathcal{I}_\beta = e^{-\beta H^0/2}$. Under the inverse map $\mathcal{M}^{-1}$, this operator corresponds to the thermofield double state. Therefore, $\mathcal{I}_\beta$ is the unique ground state of the super-Hamiltonian
$$\mathcal{H}_{TFD} = \sum_{i \in A} c_i [\mathcal{O}_i^\dagger, [\mathcal{O}_i, \cdot]_\beta]_{-\beta}$$
(2.13)
As before, $c_i$ is a set of positive numbers. It is easy to check that this super-Hamiltonian has a positive semi-definite spectrum as the expectation value in any state $\mathcal{Q}$ is given by:
$$\langle \mathcal{H}_{TFD} \rangle_\mathcal{Q} = \sum_{i \in A} c_i \text{Tr} \{ \mathcal{Q}^\dagger [\mathcal{O}_i^\dagger, [\mathcal{O}_i, \mathcal{Q}]_\beta]_{-\beta} \} = \sum_{i \in A} c_i \text{Tr} \{ [\mathcal{O}_i, \mathcal{Q}]_\beta^\dagger [\mathcal{O}_i, \mathcal{Q}]_\beta \} \geq 0$$
(2.14)
and has a unique ground state with $\mathcal{H}_{TFD} = 0$ for the state $\mathcal{I}_\beta$.
This proof is valid in QFT provided we can regularize the sum over all $i$'s in $A$ in the case of an infinite dimensional Hilbert space.
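In a small random system this uniqueness argument can be tested directly. The sketch below is ours: we build the superoperators $D_i = [\mathcal{O}_i, \cdot]_\beta$ of equation (2.12) as matrices on operator space and check that their common kernel is one-dimensional and spanned by $\mathcal{I}_\beta = e^{-\beta H^0/2}$.

```python
# Numerical sketch: the common kernel of the beta-commutator superoperators
# is spanned by the beta-identity e^{-beta H^0 / 2}.
import numpy as np

rng = np.random.default_rng(4)
N, beta = 5, 0.8
E = np.sort(rng.normal(size=N))   # H^0 = diag(E) in its eigenbasis
I = np.eye(N)

def beta_commutator_matrix(O):
    """Matrix of Q -> [O, Q]_beta acting on row-major vec(Q), eq. (2.12)."""
    A = np.exp(-beta * E / 4)[:, None] * O * np.exp(beta * E / 4)[None, :]
    B = np.exp(beta * E / 4)[:, None] * O * np.exp(-beta * E / 4)[None, :]
    return np.kron(A, I) - np.kron(I, B.T)   # vec(A Q) - vec(Q B)

O1 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O2 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
D = np.vstack([beta_commutator_matrix(O1), beta_commutator_matrix(O2)])

# Singular values of the stacked superoperators: exactly one (near-)zero value
s = np.linalg.svd(D, compute_uv=False)
null_dim = int(np.sum(s < 1e-10 * s[0]))

# The null vector should be proportional to vec(I_beta), I_beta = e^{-beta H^0/2}
_, _, Vh = np.linalg.svd(D)
null_vec = Vh[-1].conj()
i_beta = np.diag(np.exp(-beta * E / 2)).ravel().astype(complex)
alignment = abs(np.vdot(null_vec, i_beta)) / (np.linalg.norm(null_vec) * np.linalg.norm(i_beta))
```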
**Conformal theories.** The special case where the quantum systems have conformal invariance is particularly interesting for holography. For now we do not specify whether the theory is conformal quantum mechanics or conformal field theory. Consider the following form of the TFD Hamiltonian
$$H_{TFD} = \sum_\alpha \lambda_\alpha d_\alpha^\dagger d_\alpha + \sum_i c_i d_i^\dagger d_i,$$
$$d_\alpha \equiv e^{-\beta(H^0_L + H^0_R)/4} \left(J_\alpha^L - \Theta J_\alpha^{R,\dagger} \Theta^{-1}\right) e^{\beta(H^0_L + H^0_R)/4}$$
(2.15)
where $J^{L,R}_\alpha$ are the generators of the conformal algebra in the left and right theories, and $d_i$ is as defined in (2.2) for a *primary* operator $\mathcal{O}_i$. We can show that there is no need to separately include $d$ operators constructed from the descendants in the TFD Hamiltonian:
$$d_{\mathcal{O}_i} |\text{TFD}\rangle = 0 \implies d_{J(\mathcal{O}_i)} |\text{TFD}\rangle = 0$$
(2.16)
where $d_{J(\mathcal{O}_i)}$ denotes the $d$ operator constructed from a descendant of $\mathcal{O}_i$. This property follows immediately upon using the commutator property.
Notice that in this case the terms proportional to $\lambda_\alpha$ make the Hamiltonian non-local at all scales.
**Ambiguity in the TFD Hamiltonian.** We also note that the TFD Hamiltonian is ambiguous. In fact, all the operators of the following form also annihilate the TFD state and hence in principle can compose the TFD Hamiltonian,
- \( d^{(1)} \equiv e^{-\frac{\beta}{4}(H_L + H_R)} \left( O_L - \Theta O_R^\dagger \Theta^{-1} \right) e^{\frac{\beta}{4}(H_L + H_R)} \)
- \( d^{(2)} \equiv O_L - e^{-\beta H_R/2} \Theta O_R^\dagger \Theta^{-1} e^{\beta H_R/2} \)
- \( d^{(3)} \equiv e^{-\beta H_L/2} \Theta O_L \Theta^{-1} e^{\beta H_L/2} - O_R^\dagger \)
- \( d^{(4)} \equiv e^{-\beta H_L/4} O_L e^{\beta H_L/2} - \Theta O_R^\dagger \Theta^{-1} e^{\beta H_R/4} \)
We will primarily use the first and the second of these.
**Nonlocality in the TFD Hamiltonian.** In the context of quantum field theory, it is natural to ask how local the interactions are. In other words, do they only couple operators at the same spacetime point in the two copies, or is the coupling non-local? We will address this more fully in the examples, but we can give a quick answer now.
Looking at the operator \( d^{(2)} \), we can take \( O_L \) to be a local operator. We see that it is coupled to an operator in the right theory that is evolved by \( \beta/2 \) in Euclidean time. Therefore, we expect the right operator to have non-locality roughly on the temperature scale \( \beta \). This statement is not precise in general, because evolution in Euclidean time is not contained in any lightcone, so we will calculate the scale of non-locality explicitly in examples.
Another argument for nonlocality on scale \( \beta \) is that creating the TFD state from two identical QFTs requires entangling them. In many cases, the entanglement between the two systems extends a distance \( \beta \) in space.
Given two quantum systems in the lab, we can connect wires coupling nearby points in the two theories. The speed of light in the lab may be much faster than the speed of light in the QFTs, so there is no obstacle to introducing interactions at spacelike separation.
However, note that the operators in the simple Hamiltonian (1.7) are local on the scale \( \beta \). They are smeared over some short distance \( \frac{1}{\sigma_E} \) to regulate their UV behavior, so this yields an Effective Field Theory whenever \( \beta \sigma_E \gg 1 \). This suggests that the ground state of an approximately local Hamiltonian is close to the TFD state. We will discuss in detail the overlap between the two in section 6.
3 Examples
In this section we illustrate the general construction above with some concrete, albeit simple examples.
3.1 Simple harmonic oscillator
We begin with the harmonic oscillator. We will take the anti-unitary operator in this case to be time reversal. One could also choose \( PT \); this would yield a different TFD state that is related to the one we construct here by flipping the axis of one system.
**Exact TFD Hamiltonian.** We first construct our exact TFD Hamiltonian. A convenient choice of annihilation operators is
\[
d_1 = a_L - e^{-\beta w/2}a_R^\dagger, \quad d_2 = a_R - e^{-\beta w/2}a_L^\dagger.
\]
(3.1)
The Hamiltonian becomes
\[
H_{\text{TFD}} = E_0 \left( d_1^\dagger d_1 + d_2^\dagger d_2 \right),
\]
(3.2)
where \(E_0\) is an arbitrary constant. We have chosen the relative coefficient between the first and the second term above to be one. This is the unique quadratic Hamiltonian that respects the symmetry under the exchange of left and right oscillators. Collecting terms and dropping a constant shift gives
\[
H_{\text{TFD}} = E_0 (1 + e^{-\beta w}) \left( a_L^\dagger a_L + a_R^\dagger a_R \right) - 2E_0 e^{-\beta w/2} \left( a_L^\dagger a_R^\dagger + a_L a_R \right).
\]
(3.3)
This can be diagonalized by defining
\[
a_L \equiv (a + b)/\sqrt{2}, \quad a_R \equiv (a - b)/\sqrt{2},
\]
(3.4)
so that \(a, a^\dagger\) and \(b, b^\dagger\) have the canonical commutators and commute with each other. Then the Hamiltonian becomes
\[
H_{\text{TFD}} = E_0 (1 + e^{-\beta w})(a^\dagger a + b^\dagger b) - E_0 e^{-\beta w/2}(a^\dagger a^\dagger + aa - b^\dagger b^\dagger - bb).
\]
(3.5)
We now have two decoupled systems, each with Hamiltonian of the form
\[
H = Ba^\dagger a + D[a^2 + (a^\dagger)^2].
\]
(3.6)
The spectrum can be calculated by doing a Bogoliubov transformation. The spectrum is that of a harmonic oscillator with a frequency given by
\[
w' = \sqrt{B^2 - 4D^2} = E_0 (1 - e^{-\beta w}).
\]
(3.7)
Looking back at our full TFD Hamiltonian, we see the spectrum is that of two decoupled harmonic oscillators with the same energy spacing. The gap is
\[
\text{Gap} = E_0 (1 - e^{-\beta w}).
\]
(3.8)
The high temperature limit of the term in parentheses is \(\beta w\), so we need to have our overall constant \(E_0\) scale at least like
\[
E_0 \sim T
\]
(3.9)
at high temperatures to maintain a finite gap. This is a reasonable requirement. Note that by making different choices for how the overall scale in the Hamiltonian scales with temperature, we can make the gap scale in any way we like. We will find that our simple Hamiltonian has less freedom.
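The spectrum above can be reproduced numerically by truncating each oscillator's Fock space. The sketch below is ours; $w$, $\beta$, $E_0$, and the cutoff are arbitrary choices. It builds $H_{\text{TFD}}$ from $d_1$, $d_2$ and confirms the zero-energy TFD ground state and the gap $E_0(1 - e^{-\beta w})$ derived above.

```python
# Numerical sketch: exact TFD Hamiltonian for two truncated harmonic oscillators.
import numpy as np

n_max, w, beta, E0 = 40, 1.0, 1.0, 1.0
x = np.exp(-beta * w / 2)

# Truncated annihilation operator and the two-oscillator lifts
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
I = np.eye(n_max)
aL, aR = np.kron(a, I), np.kron(I, a)

d1 = aL - x * aR.T          # a_R^dagger = aR.T (real matrices)
d2 = aR - x * aL.T
H = E0 * (d1.T @ d1 + d2.T @ d2)

vals, vecs = np.linalg.eigh(H)
ground_energy, gap = vals[0], vals[1] - vals[0]

# Truncated TFD state: sum_n e^{-beta w n/2} |n>_L |n>_R, normalized
amp = x ** np.arange(n_max)
tfd = np.diag(amp / np.linalg.norm(amp)).ravel()
overlap = abs(vecs[:, 0] @ tfd)
```

The truncation error is of order $e^{-\beta w \, n_{\max}}$, so with these parameters the gap matches $E_0(1-e^{-\beta w})$ to high accuracy; the first excited level is doubly degenerate, one excitation for each of the decoupled $a$ and $b$ modes.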
**Simple Hamiltonian.** We can also try out our simple Hamiltonian (1.7) for the harmonic oscillator. There is no guarantee that this will give even approximately the correct ground state, since we only claim it works in systems satisfying ETH, but we will try anyway. We take
\[ H_S = H_L^0 + H_R^0 + c_1 w^2 (x_L - x_R)^2 + c_2 (p_L + p_R)^2 . \]
(3.10)
Note that the relative sign is different in the momentum coupling due to conjugation by the time reversal operator. We will tune the constants \( c_1 \) and \( c_2 \) to try to get the TFD state as the ground state. We have defined them so that the \( c_i \) are dimensionless. If we choose
\[ c_1 = c_2 = C/2 , \]
(3.11)
the interaction term becomes (up to a constant shift)
\[ Cw \left[ a_L^\dagger a_L + a_R^\dagger a_R - a_L a_R - a_L^\dagger a_R^\dagger \right] , \]
(3.12)
so that the full Hamiltonian is
\[ H_S = w (1 + C)(a_L^\dagger a_L + a_R^\dagger a_R) - wC(a_L a_R + a_L^\dagger a_R^\dagger) . \]
(3.13)
This is precisely of the same form as the exact TFD Hamiltonian from equation (3.3)! We were lucky in this case because everything is quadratic.
Matching parameters, we can relate the interaction coefficient \( C \) to the temperature
\[ C = \frac{1}{2 \sinh^2(\beta w/4)} , \]
(3.14)
indicating that the simple Hamiltonian gives a TFD state with temperature that ranges from \( T = 0 \) when \( C = 0 \) up to \( T = \infty \) at \( C = \infty \).
The gap can be found by relating \( C \) to \( E_0 \) and is given by
\[ \text{Gap} = w \coth \left( \frac{\beta w}{4} \right) . \]
(3.15)
Note that in this simple Hamiltonian we do not have the freedom to choose the gap. The temperature dependence of the gap is nice: the gap is given by the frequency of the oscillator at low temperature, and by the temperature at high temperature.
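Both statements can be confirmed with a truncated-Fock-space check (ours; parameters arbitrary): with $C$ chosen as in (3.14), the ground state of $H_S$ matches the TFD state at temperature $1/\beta$, and the gap matches $w \coth(\beta w/4)$ from (3.15).

```python
# Numerical sketch: simple Hamiltonian (3.13) for two truncated oscillators.
import numpy as np

n_max, w, beta = 40, 1.0, 1.0
C = 1.0 / (2.0 * np.sinh(beta * w / 4) ** 2)     # eq. (3.14)

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
I = np.eye(n_max)
aL, aR = np.kron(a, I), np.kron(I, a)
nL, nR = aL.T @ aL, aR.T @ aR

# Simple Hamiltonian (3.13), dropping the constant shift
H_S = w * (1 + C) * (nL + nR) - w * C * (aL @ aR + aL.T @ aR.T)

vals, vecs = np.linalg.eigh(H_S)
gap = vals[1] - vals[0]

# Target TFD state at temperature 1/beta
amp = np.exp(-beta * w * np.arange(n_max) / 2)
tfd = np.diag(amp / np.linalg.norm(amp)).ravel()
overlap = abs(vecs[:, 0] @ tfd)

expected_gap = w / np.tanh(beta * w / 4)          # eq. (3.15)
```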
3.2 Free fermion
We may repeat the steps above for fermions, though we must be careful about orderings. For fermions, our conventions are
\[ \{ a_{L,R}, a_{L,R}^\dagger \} = 1 , \quad \{ a_{L,R}, a_{R,L}^\dagger \} = \{ a_{L,R}, a_{R,L} \} = \{ a_{L,R}^\dagger, a_{R,L}^\dagger \} = 0 . \]
(3.16)
Note that the Hilbert space of each fermionic oscillator is finite dimensional, in fact, spanned by two independent states. When we write the vacuum of the doubled theory, we specifically have the following ordering in mind \( |0,0\rangle = |0\rangle_L \otimes |0\rangle_R \). The excited state is then
\[ a_R^\dagger a_L^\dagger |0,0\rangle \equiv |1,1\rangle . \]
(3.17)
The anti-unitary operator $\Theta$ from equation (2.2) acts as follows
$$\Theta a_{L,R}^\dagger \Theta^{-1} = -a_{L,R}^\dagger.$$ \hspace{1cm} (3.18)
Then, using this and keeping track of orderings, the thermofield double state becomes (up to normalization)
$$|TFD\rangle = \exp(e^{-\beta w/2} a_R^\dagger a_L^\dagger) |0,0\rangle = |00\rangle + e^{-\beta w/2} |11\rangle.$$ \hspace{1cm} (3.19)
It is annihilated by
$$d_L = a_L + e^{-\beta w/2} a_R^\dagger, \quad d_R = a_R - e^{-\beta w/2} a_L^\dagger.$$ \hspace{1cm} (3.20)
Then the exact TFD Hamiltonian with the TFD state as the ground state can be shown to be,
$$H_{TFD} = E_0 \left(1 - e^{-\beta w}\right)(a_L^\dagger a_L + a_R^\dagger a_R) + 2E_0 e^{-\beta w/2}(a_L^\dagger a_R^\dagger - a_L a_R).$$ \hspace{1cm} (3.21)
We can now ask for the gap of this exact Hamiltonian. Since the form of the Hamiltonian is similar to equation (3.5), the gap can be calculated in a similar way. It becomes
$$\text{Gap} = E_0(1 + e^{-\beta w}).$$ \hspace{1cm} (3.22)
Comparing this gap to the one in equation (3.8), we see that bosons and fermions behave very differently at low energies.
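Since each fermionic oscillator has a two-dimensional Hilbert space, the doubled system is only four-dimensional and everything can be checked explicitly. The sketch below is ours ($w$, $\beta$, $E_0$ arbitrary); it uses a Jordan-Wigner representation of the two modes and confirms the ground state $|00\rangle + e^{-\beta w/2}|11\rangle$ and the gap $E_0(1 + e^{-\beta w})$ of equation (3.22).

```python
# Numerical sketch: exact TFD Hamiltonian for a pair of fermionic oscillators.
import numpy as np

w, beta, E0 = 1.0, 0.8, 1.0
x = np.exp(-beta * w / 2)

# Jordan-Wigner representation of two fermionic modes (L, R)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowers |1> -> |0>
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
aL = np.kron(sm, I2)
aR = np.kron(sz, sm)                      # string operator enforces anticommutation

d_L = aL + x * aR.T                       # eq. (3.20)
d_R = aR - x * aL.T
H = E0 * (d_L.T @ d_L + d_R.T @ d_R)

vals, vecs = np.linalg.eigh(H)
ground_energy, gap = vals[0], vals[1] - vals[0]

# |TFD> = |00> + x |11>, with |11> = a_R^dag a_L^dag |00>  (eqs. (3.17), (3.19))
vac = np.zeros(4); vac[0] = 1.0
tfd = vac + x * (aR.T @ aL.T @ vac)
tfd /= np.linalg.norm(tfd)
overlap = abs(vecs[:, 0] @ tfd)
```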
3.3 Free quantum field theory
We would also like to analyze a simple quantum field theory example in order to diagnose locality. We will analyze the free massless scalar in $3+1$ dimensions for simplicity. This is of course just a bunch of harmonic oscillators. (The $1+1$ case has IR divergences that are special to that case, so we work in higher dimensions.)
**Exact TFD Hamiltonian**
By using the same approach as the harmonic oscillator example for each momentum mode, the exact TFD Hamiltonian becomes,
$$H_{TFD} = \int d^3k \, E(k) \left(1 + e^{-\beta \omega_k}\right) \left((a_k^L)^\dagger a_k^L + (a_k^R)^\dagger a_k^R\right)$$
$$- 2E(k) \, e^{-\beta \omega_k/2} \left((a_k^L)^\dagger (a_{-k}^R)^\dagger + a_k^L a_{-k}^R\right).$$ \hspace{1cm} (3.23)
We are free to choose $E(k)$ to be any positive function we like. We would like to go to position space to diagnose locality. For this we use,
$$a_k = \int d^3x \, e^{-ikx} \left[\sqrt{\frac{\omega_k}{2}} \phi(x) + \frac{i}{\sqrt{2 \omega_k}} \pi(x)\right].$$ \hspace{1cm} (3.24)
Then, up to additive constant factors which will not be important for further discussion, the Hamiltonian becomes
\[
\begin{aligned}
H = {} & \frac{1}{2} \int d^3 x \, d^3 y \, \Big[ f(x - y) \left( \pi_L(x) \pi_L(y) + \pi_R(x) \pi_R(y) \right) \\
& \qquad + g(x - y) \left( \phi_L(x) \phi_L(y) + \phi_R(x) \phi_R(y) \right) \Big] \\
& + \frac{1}{2} \int d^3 x \, d^3 y \, \Big[ h(x - y) \pi_L(x) \pi_R(y) + k(x - y) \phi_L(x) \phi_R(y) \Big] .
\end{aligned}
\]
(3.25)
Here the first line contains terms that do not couple the two theories, while the second line contains coupling terms. This Hamiltonian is bi-local, with the scale of nonlocality set by the four functions \(f, g, h, k\). These functions are all determined by our choice of \(E(k)\) via the definitions
\[
\begin{align*}
f(x) &= \int d^3 k \, e^{i k x} \frac{E(k)}{\omega_k} (1 + e^{-\beta \omega_k}), \\
g(x) &= \int d^3 k \, e^{i k x} E(k) \omega_k (1 + e^{-\beta \omega_k}), \\
h(x) &= 2 \int d^3 k \, e^{i k x} \frac{E(k) e^{-\beta \omega_k/2}}{\omega_k}, \\
k(x) &= -2 \int d^3 k \, e^{i k x} E(k) \omega_k e^{-\beta \omega_k/2}.
\end{align*}
\]
(3.26)
It is tempting to choose \(E(k)\) so that the non-interacting terms take their canonical local form. This choice corresponds to
\[
E(k)(1 + e^{-\beta \omega_k}) = \omega_k.
\]
(3.27)
However, this choice comes at a cost: the gap for each mode is given by our harmonic oscillator formula (3.8),
\[
\text{Gap}(k) = E(k)(1 - e^{-\beta \omega_k}).
\]
(3.28)
If we choose \(E(k)\) according to (3.27), we would have
\[
\text{Gap}(k) = \omega_k \tanh(\beta \omega_k/2).
\]
(3.29)
Note that the appearance of \(\tanh\) here is not inconsistent with the appearance of \(\coth\) in equation (3.15). These are gaps of two different Hamiltonians: equation (3.29) is that of the exact TFD Hamiltonian while equation (3.15) is that of a Simple Hamiltonian.
Further, at small \(\omega_k\) the gap in equation (3.29) becomes \(\text{Gap} \sim \beta \omega_k^2\), so the gap becomes very small at low frequencies if we insist on the canonical choice for the non-interacting terms. In fact, it is not possible in this case to confine the nonlocality to the thermal scale while also avoiding a small gap. The gap equation at small \(k\) becomes
\[
\text{Gap}(k) \approx E(k) \beta \omega_k.
\]
(3.30)
For a massless field $\omega_k = |k|$. Thus if we want the gap to remain finite as $k \to 0$, we need $E(k)$ to diverge at least as $E(k) \sim 1/k$. However, this behavior leads to non-locality at large scales. Roughly, this is because the low $k$ behavior corresponds to long distances. More precisely, if we look for example at the function $f(x)$ defined above in equation (3.26), we see that it is the Fourier transform of a function that diverges at least as $1/k^2$ at small $k$, since we want $E(k) \sim 1/k$. In general, the Fourier transform of a function that is non-analytic at $k = 0$ cannot fall off exponentially at large $x$; in this particular case $f(x) \sim 1/|x|$ at large $|x|$.
Therefore, in this example we have to choose between a small gap and an approximately local Hamiltonian. We will describe the case of an approximately local Hamiltonian. That is, we take the function $E(k)$ defined by equation (3.27). Then the non-interacting terms become completely local. This becomes manifest when we calculate the functions appearing in the Hamiltonian, obtaining (up to constants)
\begin{align}
f(x) &= \delta^3(x), \\
g(x) &= -\nabla^2 \delta^3(x), \\
h(x) &= \frac{1}{8\beta^2|x|} \frac{\sinh \left( \frac{\pi|x|}{2\beta} \right)}{\cosh^2 \left( \frac{\pi|x|}{2\beta} \right)}, \\
k(x) &= \nabla^2 h(x).
\end{align}
Note the important property that $h$ and $f$ agree at long wavelengths. Collecting everything, the TFD Hamiltonian becomes (up to constant additive factors)
\begin{equation}
H_{\text{TFD}} = H_L^0 + H_R^0 + \frac{1}{2} \int d^3x \, d^3y \, h(x-y) \left[ \pi_L(x) \pi_R(y) - \nabla \phi_L(x) \cdot \nabla \phi_R(y) \right].
\end{equation}
The scale of nonlocality is set by the function $h$ in (3.31), so it is nonlocal on the thermal scale. The gap can be calculated from equations (3.28) and (3.27), giving at small $k$
\begin{equation}
\text{Gap} \sim \beta \omega_k^2.
\end{equation}
This is very small at high temperature. We believe that this small gap is an artifact of working in the free theory. We will argue later that interacting theories have a gap of order the temperature. This is reminiscent of the appearance of thermal masses in finite temperature QFT at non-zero coupling.
### 3.4 Free fermion field theory
We now make a brief comment about free fermion field theory. We start by thinking of the field theory as a collection of decoupled fermion oscillators. The case of a free fermionic oscillator was worked out in detail in subsection 3.2. Using the results there we can immediately write down the gap of the exact TFD Hamiltonian as
\begin{equation}
\text{Gap}(k) = E(k)(1 + e^{-\beta \omega_k}).
\end{equation}
If we choose to make the non-interacting terms canonical, corresponding to the choice
\[ E(k) = \frac{\omega_k}{1 - e^{-\beta \omega_k}} , \]
the gap becomes
\[ \text{Gap}(k) = \omega_k \coth(\beta \omega_k / 2) . \]
This is quite different from the free boson field theory gap in equation (3.29), and disagrees with the claim in [10] that the TFD Hamiltonian for free fermions and free bosons behave similarly.
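The small-frequency limits make the contrast explicit (a short check, using $\tanh x \approx x$ and $\coth x \approx 1/x$ at small $x$):
\[
\omega_k \tanh\!\left(\frac{\beta \omega_k}{2}\right) \xrightarrow{\;\omega_k \to 0\;} \frac{\beta \omega_k^2}{2}, \qquad
\omega_k \coth\!\left(\frac{\beta \omega_k}{2}\right) \xrightarrow{\;\omega_k \to 0\;} \frac{2}{\beta} = 2T .
\]
The bosonic gap closes with the mode frequency, while the fermionic gap saturates at a value of order the temperature.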
Therefore, for free fermionic fields, a finite range interaction can give the TFD as a ground state while maintaining a gap of order the temperature, unlike the free bosonic case. This is analogous to the finite temperature behavior of free fermions: due to the anti-periodic boundary conditions fermions have no zero mode on the thermal circle, so their finite temperature correlation function is exponentially suppressed with length scale set by the temperature, corresponding to all light modes acquiring a mass of order the temperature.
On the other hand, free bosonic theories do have a zero mode on the thermal circle, leading to power law correlation functions, which implies that some modes remain much lighter than the thermal scale. Therefore, the different gaps we find for free fermions and free bosons may seem surprising, but they mirror known finite temperature physics. We expect interactions to modify the unusual finite temperature behavior of free bosonic fields. We return to this in the following section.
### 3.5 Ising conformal field theory
It is illustrative to consider the critical (\( \beta J = 1/4 \)) Ising model in the language of conformal field theory. The central charge of this theory is \( c = \frac{1}{2} \) and there are three conformal primaries:
| Operator | Symbol | Conformal dimension \( h \) |
|----------|--------|-----------------------------|
| Identity | \( I \) | \( h = 0 \) |
| Spin | \( \sigma(z, \bar{z}) \) | \( h = \frac{1}{16} \) |
| Energy | \( \epsilon(z, \bar{z}) \) | \( h = \frac{1}{2} \) |
The operator product expansions are:
\[
\begin{align*}
\epsilon(z, \bar{z}) \epsilon(w, \bar{w}) &= \frac{1}{|z - w|^2} , \\
\sigma(z, \bar{z}) \sigma(w, \bar{w}) &= \frac{1}{|z - w|^{1/4}} + \frac{1}{2}|z - w|^{3/4}\epsilon(w) , \\
\epsilon(z, \bar{z}) \sigma(w, \bar{w}) &= \frac{1}{2|z - w|}\sigma(w) .
\end{align*}
\]
For each operator, we define states and their conjugates via
\[
|\phi_{\text{in}}\rangle \equiv \lim_{z,\bar{z} \to 0} \phi(z, \bar{z}) |0\rangle, \quad \langle \phi_{\text{out}}| = |\phi_{\text{in}}\rangle^\dagger,
\]
\[
\phi(z, \bar{z})^\dagger = \bar{z}^{-2h} z^{-2\bar{h}} \phi(1/\bar{z}, 1/z).
\]
Using these equations we can convert the operators into $3 \times 3$ matrices acting on the basis of states formed from the primaries. Ordering the basis vectors as $|1\rangle = |\mathbb{I}\rangle$, $|2\rangle = |\sigma\rangle$ and $|3\rangle = |\epsilon\rangle$, we have
\[
\langle i| \epsilon(z, \bar{z}) |j\rangle = \begin{pmatrix}
0 & 0 & \frac{1}{|z|^2} \\
0 & \frac{1}{2|z|} & 0 \\
1 & 0 & 0
\end{pmatrix},
\quad
\langle i| \sigma(z, \bar{z}) |j\rangle = \begin{pmatrix}
0 & \frac{1}{|z|^{1/4}} & 0 \\
1 & 0 & \frac{1}{2|z|} \\
0 & \frac{|z|^{3/4}}{2} & 0
\end{pmatrix}.
\]
Using the general construction of section 2 we can form operators which annihilate the thermofield double
\[
d_\epsilon = \epsilon_L(z) - e^{-\beta H/2} \epsilon_R^\dagger(z) e^{\beta H/2},
\]
\[
d_\sigma = \sigma_L(z) - e^{-\beta H/2} \sigma_R^\dagger(z) e^{\beta H/2}.
\]
Now it is a simple matter to find the matrices representing $d_\epsilon$ and $d_\sigma$. Recall that these act on a 9-dimensional Hilbert space (the tensor product of the two sides). We should thus consider the $9 \times 9$ matrix:
\[
(d_\sigma(z))_{ii',jj'} = \sigma_{ii'}(z) \otimes \mathbb{I}_{jj'} - e^{\beta/16} \mathbb{I}_{ii'} \otimes \sigma^*(e^{\beta/2} z)_{j'j},
\]
and likewise for $d_\epsilon(z)$. It is now straightforward to check that $d_\sigma$ and $d_\epsilon$ annihilate the thermofield double for any value of $z$. Moreover, in the nine-dimensional tensor product space $\mathcal{H}_L \otimes \mathcal{H}_R$ (truncated to primaries) one can easily check that the thermofield double is the unique state annihilated by both $d_\epsilon(z)$ and $d_\sigma(z)$ for all $z$. Therefore, it is natural to propose the Hamiltonian
\[
H_{\text{TFD}} = \int d\theta \left( c_1 d_\epsilon^\dagger d_\epsilon + c_2 d_\sigma^\dagger d_\sigma \right).
\]
Note that this can be put into the form (2.15) if we identify the sum over $i$ with the integral over $\theta$.
This Hamiltonian may have the TFD as the unique ground state, but we have only established this in the space of primaries. It may be necessary to add terms quadratic in the conformal generators, as in (3.42). As mentioned near that equation, these terms may introduce UV issues. It would be interesting to understand the construction further in tractable examples such as the Ising model, but we leave this for future work.
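The annihilation property claimed above is easy to verify numerically. The sketch below (an illustrative check, not from the original text) builds the $3 \times 3$ matrix of $\sigma(z)$ given earlier and confirms that $d_\sigma(z)$ annihilates the thermofield double $\sum_a e^{-\beta \Delta_a/2}\,|aa\rangle$ with $\Delta = (0, 1/8, 1)$; the values of $\beta$ and $z$ are arbitrary sample choices:

```python
import numpy as np

beta, z = 0.7, 1.3          # arbitrary sample values; the check works for any z > 0

def sigma(az):
    """Matrix of sigma(z) on the basis (|I>, |sigma>, |eps>); az = |z|."""
    return np.array([[0.0, az ** (-0.25), 0.0],
                     [1.0, 0.0, 1.0 / (2.0 * az)],
                     [0.0, az ** 0.75 / 2.0, 0.0]])

# TFD weights e^{-beta * Delta_a / 2} with Delta = h + hbar = (0, 1/8, 1)
delta = np.array([0.0, 1.0 / 8.0, 1.0])
tfd = np.diag(np.exp(-beta * delta / 2.0)).flatten()   # sum_a w_a |a a>

I3 = np.eye(3)
# d_sigma(z) = sigma(z) x 1 - e^{beta/16} 1 x sigma(e^{beta/2} z)^T
d_sigma = (np.kron(sigma(z), I3)
           - np.exp(beta / 16.0) * np.kron(I3, sigma(np.exp(beta / 2.0) * z).T))

print(np.abs(d_sigma @ tfd).max())   # numerically zero
```

The cancellation works component by component: the Boltzmann weights exactly compensate the rescaling $z \to e^{\beta/2} z$ together with the prefactor $e^{\beta \Delta_\sigma / 2} = e^{\beta/16}$.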
## 4 Effective field theory
The exact TFD Hamiltonian has two annoying features in general: it is nonlocal, and it is complicated. This motivates us to seek a simpler approximate form for the Hamiltonian. This will turn out to be possible, but at the cost of working in an effective field theory whose UV cutoff is the temperature scale.
We start with the symmetric form of our annihilation operator
\[ d_O \equiv e^{-\frac{\beta}{4}(H_L + H_R)} \left( O_L - \Theta O_R^\dagger \Theta^{-1} \right) e^{\frac{\beta}{4}(H_L + H_R)}. \]
(4.1)
For convenience, let us define the following shorthands,
\[ O \equiv O_L - \Theta O_R^\dagger \Theta^{-1}, \quad H \equiv H_L + H_R. \]
(4.2)
Then we can make the appearance of higher-dimension operators manifest in \( d_O \) by using the BCH formula to expand it in powers of \( \beta \). We obtain,
\[ d_O = O - \frac{\beta}{4}[H, O] + \frac{\beta^2}{32}[H, [H, O]] + \cdots. \]
(4.3)
The contribution from \( d_O \) to the TFD Hamiltonian is
\[ H_O = O^\dagger O + \frac{\beta}{4} \left( [H, O^\dagger]O - O^\dagger[H, O] \right) \]
\[ + \frac{\beta^2}{32} \left( [H, [H, O^\dagger]]O + O^\dagger[H, [H, O]] - [H, O^\dagger][H, O] \right). \]
(4.4)
If we now take \( O_L \) to be a local operator, this is an expansion in local operators. Due to additional commutators with the Hamiltonian, the higher powers of \( \beta \) multiply higher dimension operators. Therefore, this Hamiltonian must be interpreted in a theory with a UV cutoff below the temperature scale. We will illustrate the use of this formula using an example.
**Free field theory example.** It is straightforward to apply the previous results to free quantum field theories. Let us consider a massive bosonic theory, although a similar procedure will hold for a fermionic theory. The original Hamiltonian in position space may be written as,
\[ H = \frac{1}{2} \int d^3 x \left( \pi(x)^2 + \hat{\omega}^2 \phi(x)^2 \right), \]
(4.5)
where \( \hat{\omega}^2 \) is shorthand for the operator
\[ \hat{\omega}^2 \equiv -\nabla^2 + m^2. \]
(4.6)
In this case if we take \( O_{L,R} = \phi_{L,R}(x) \), we can calculate the full operator appearing in the TFD Hamiltonian,
\[ e^{-\beta H/4} O e^{\beta H/4}, \]
(4.7)
where \( H = H_L + H_R \). To avoid factors of 4 we define
\[ \gamma \equiv \beta/4. \]
(4.8)
Again using the BCH formula and the canonical commutator \( [\phi(x), \pi(y)] = i\delta(x-y) \), we can show that, in each copy of the field theory,
\[ e^{-\gamma H} \phi(x) e^{\gamma H} = \cosh(\gamma \hat{\omega}) \phi(x) + \frac{i}{\hat{\omega}} \sinh(\gamma \hat{\omega}) \pi(x). \]
(4.9)
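Equation (4.9) can be checked mode by mode. The sketch below (an illustrative check, with arbitrary parameter values) verifies it for a single oscillator of frequency $\omega$ using truncated ladder operators; the conjugation by $e^{\pm\gamma H}$ is exact even after truncation because $H = \omega a^\dagger a$ is diagonal there:

```python
import numpy as np

n, omega, gamma = 8, 1.3, 0.25                 # truncation size and sample parameters
a = np.diag(np.sqrt(np.arange(1.0, n)), 1)     # truncated annihilation operator
ad = a.T.conj()

phi = (a + ad) / np.sqrt(2.0 * omega)          # phi = (a + a†)/sqrt(2w)
pi = 1j * np.sqrt(omega / 2.0) * (ad - a)      # pi  = i sqrt(w/2)(a† - a)

# e^{-gamma H} phi e^{gamma H} with H = omega a†a (the zero-point energy cancels)
expm = np.diag(np.exp(-gamma * omega * np.arange(n)))
expp = np.diag(np.exp(gamma * omega * np.arange(n)))
lhs = expm @ phi @ expp

# right-hand side of eq. (4.9), with the operator omega-hat reduced to the number omega
rhs = np.cosh(gamma * omega) * phi + (1j / omega) * np.sinh(gamma * omega) * pi
print(np.abs(lhs - rhs).max())                 # numerically zero
```

The conjugation simply multiplies $a$ by $e^{\gamma\omega}$ and $a^\dagger$ by $e^{-\gamma\omega}$, which reassembles into the $\cosh$ and $\sinh$ combination above.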
Plugging this expression into the formula for $d_{\phi(x)}$ we get
$$d_{\phi(x)} = E_0 \left[ \cosh(\gamma \hat{\omega}) \phi_L(x) + i \frac{\sinh(\gamma \hat{\omega})}{\hat{\omega}} \pi_L(x) - \cosh(\gamma \hat{\omega}) \phi_R(x) - i \frac{\sinh(\gamma \hat{\omega})}{\hat{\omega}} \pi_R(x) \right],$$
(4.10)
where the overall constant $E_0$ has units of energy.
Now we construct the TFD Hamiltonian. Since we are working with local operators $\phi(x)$, the natural TFD Hamiltonian is an integral
$$H_{\text{TFD}} = \int d^3 x \, d^\dagger_{\phi(x)} d_{\phi(x)}. \quad (4.11)$$
Explicitly, the full Hamiltonian is a bit of a mess,
$$H_{\text{TFD}} = E_0^2 \int d^3 x \left[ \cosh(\gamma \hat{\omega})(\phi_L - \phi_R) - i \frac{\sinh(\gamma \hat{\omega})}{\hat{\omega}} (\pi_L - \pi_R) \right] \times \left[ \cosh(\gamma \hat{\omega})(\phi_L - \phi_R) + i \frac{\sinh(\gamma \hat{\omega})}{\hat{\omega}} (\pi_L - \pi_R) \right]. \quad (4.12)$$
Note that terms like $\cosh(\gamma \hat{\omega})$ tell us that this Hamiltonian is only local on the thermal scale, because expanding out the $\cosh$ gives higher powers of the momentum $\beta^2 \nabla^2$. This might lead one to suspect that the $T \to \infty$ limit is completely local. Indeed, in this limit, if we expand for small $\beta$ and keep operators up to dimension 2 the TFD Hamiltonian just approaches
$$H'_{\text{TFD}} = E_0^2 \int d^3 x \left[ (\phi_L - \phi_R)^2 + \gamma^2 (\phi_L - \phi_R) \hat{\omega}^2 (\phi_L - \phi_R) + \gamma^2 (\pi_L - \pi_R)^2 \right]. \quad (4.13)$$
It is natural to choose the overall dimensionful constant $E_0$ to be set by the temperature scale, $E_0 = \frac{\gamma^{-1}}{\sqrt{2}}$, giving
$$H_{\text{TFD}} \approx \frac{1}{2} \int d^3 x \left[ (\phi_L - \phi_R) \hat{\omega}^2 (\phi_L - \phi_R) + (\pi_L - \pi_R)^2 + 8T^2 (\phi_L - \phi_R)^2 \right]. \quad (4.14)$$
This Hamiltonian is weird because it only has a kinetic term for one linear combination of the fields. It is natural to add a second term to the Hamiltonian where we start from the operator $\pi(x)$ instead of $\phi(x)$. This operator is higher dimension so we only need to expand to leading order in $\beta$. Due to the conjugation by the anti-unitary operator, this gives a term
$$d^\dagger_\pi d_\pi = (\pi_L + \pi_R)^2 + \ldots. \quad (4.15)$$
We can also add a term from $\mathcal{O} = \partial \phi$ which contributes
$$d^\dagger_{\partial_i \phi} d_{\partial_i \phi} = (\partial_i \phi_L - \partial_i \phi_R)^2 + \ldots. \quad (4.16)$$
Combining all three of these terms with arbitrary positive coefficients $c_i$ gives the full Hamiltonian, at quadratic order in the fields and up to the operator dimension at which we are working,
$$H_{\text{TFD}} \approx \frac{1}{2} \int d^3 x \left[ c_1 (\pi_L - \pi_R)^2 + c_2 (\pi_L + \pi_R)^2 \right.$$ $$+ (\phi_L - \phi_R)(c_1 \hat{\omega}^2 - c_3 \nabla^2 + 8c_1 T^2)(\phi_L - \phi_R) \left. \right], \quad (4.17)$$
where the dimensionless constants $c_i$ can be freely chosen. The most notable aspect of this Hamiltonian is that while the combination $\phi_L - \phi_R$ gets a mass as well as a gradient term, the combination $\phi_L + \phi_R$ has no potential or gradient term to this order. This term will appear at higher order in the term that begins with $\pi$,
$$d_\pi^\dagger d_\pi = (\pi_+)^2 + \gamma^2 \pi_+ \hat{\omega}^2 \pi_+ + \gamma^2 (\hat{\omega}^2 \phi_+)^2 + \ldots,$$
(4.18)
where we have defined $\pi_+ \equiv \pi_R + \pi_L$. The Hamiltonian factorizes into a $\phi_+$ and $\phi_-$ piece,
$$H_{\text{TFD}} \approx \frac{1}{2} \int d^3 x \left[ c_1 \pi_-^2 + \phi_- (c_1 \hat{\omega}^2 - c_3 \nabla^2 + 8c_1 T^2) \phi_- \right.$$
$$\left. + c_2 \pi_+^2 + \frac{8c_2}{T^2} \pi_+ \hat{\omega}^2 \pi_+ + \frac{8c_2}{T^2} (\hat{\omega}^2 \phi_+)^2 \right].$$
(4.19)
This confirms what we saw previously in a simpler way: we have one mode with a gap set by the temperature, and a light mode with
$$\text{Gap} \sim c_2 \frac{k^2 + m^2}{T}.$$
(4.20)
The constant $c_2$ can be freely chosen, but in a massless theory the gap is set by the lowest allowed value of $k$; in other words, the gap is set by the IR cutoff of the theory.
It would be interesting to know if there are more general situations where a light mode appears, or if this is simply an artifact of free field theory. Our prejudice is the latter. This is an important question because using our Hamiltonian to cool to the TFD state becomes difficult whenever the gap is small.
As motivation that the small gap is an artifact, consider a $\lambda \phi^4$ interaction in four dimensions. We have
$$d_\pi = \pi_+ - \gamma [\mathcal{H}, \pi_+] + \ldots,$$
(4.21)
where as above $\mathcal{H} \equiv H_L + H_R$. One of the terms in the commutator gives
$$d_\pi = \pi_+ + \# i \gamma \lambda \phi_+^3 + \ldots,$$
(4.22)
which contributes to the Hamiltonian as
$$H_{\text{TFD}} = \int d^3 x \left[ \pi_+^2 + \# i \gamma \lambda [\phi^3(x), \pi(x)] + \ldots \right].$$
(4.23)
The commutator gives
$$[\phi(x)^3, \pi(x)] = 3i \, \phi(x)^2 \, \delta^3(0).$$
(4.24)
Since we are working in effective field theory with cutoff of order temperature, $\delta^3(0)$ should be replaced by $T^3$, so that the TFD Hamiltonian includes the terms
$$\int d^3 x \left[ \pi_+^2 + \# T^2 \lambda \phi_+^2 \right] \subset H_{\text{TFD}}.$$
(4.25)
This is a mass term for the $\phi_+$ mode with
$$m_+^2 \sim \lambda T^2.$$
(4.26)
This shows that instead of a gap that depends on the IR cutoff, as in the free theory, the $\lambda \phi^4$ theory has a gap set by the temperature scale. As discussed earlier, this phenomenon is not special to our TFD analysis; the same thing happens in analyzing field theory at finite temperature. Free bosonic theories at finite temperature have a small gap that depends on the IR cutoff, while interacting theories (as well as free fermionic theories) have a gap proportional to the temperature.
## 5 TFD in systems satisfying ETH
In order to construct the TFD in systems that are dual to classical wormholes, we need to move beyond these simple examples. It turns out that we can find a relatively simple Hamiltonian whose ground state is the thermofield double in any quantum field theory satisfying the eigenstate thermalization hypothesis (ETH). Conformal symmetry is not needed. We will start by discussing the spectrum of the TFD Hamiltonian in an energy window in section 5.1 and show that the ground state is the (infinite temperature) TFD state. In section 5.2 we will extend our analysis to the full Hilbert space and study the finite temperature spectrum, showing that the gap is order one, which indicates that the TFD state can be reached in reasonable time.
### 5.1 Analysis in an energy window
To get a feel for what happens for an ensemble of operators obeying ETH, consider our TFD hamiltonian in the infinite temperature limit. We include $K$ operators in our Hamiltonian, which is
$$H_{\text{TFD}} = \sum_{k=1}^{K} c_k \left( O_L^{\dagger k} - O_R^{k*} \right) \left( O_L^k - O_R^{kT} \right). \quad (5.1)$$
To avoid clutter, we have defined
$$O^T \equiv \Theta O^\dagger \Theta^{-1}, \quad O^* \equiv (O^T)^\dagger. \quad (5.2)$$
The notation is natural because in the case that the eigenstates are invariant under $\Theta$, $O^T$ is simply the transpose in the energy basis. We further assume that the eigenstate thermalization hypothesis (ETH) applies to each copy of the original theory. ETH says that the matrix elements of the operator $O$ obey
$$\langle i | O | j \rangle = \langle O \rangle_T \delta_{ij} + \frac{1}{\sqrt{\rho(E)}} \xi_O(\bar{E}, \omega) R_{ij}, \quad (5.3)$$
where $\bar{E}$ is the average energy of the two states, $\omega$ is the energy difference, and $\rho(E) \approx \exp(S)$ is the density of states. $R_{ij}$ is a random matrix whose elements have mean zero and unit variance, while $\xi_O$ is a smooth function of the energy difference $\omega$ and the average energy $\bar{E}$.
The Hamiltonian is an operator in the doubled Hilbert space. A general matrix element of the Hamiltonian in the basis of energy eigenstates is
\[
\langle a_L i_R | H_{\text{TFD}} | b_L j_R \rangle = \sum_{k=1}^{K} c_k \left( \langle a | O_k^\dagger O_k | b \rangle \delta_{ij} + \delta_{ab} \langle j | O_k^\dagger O_k | i \rangle \right) \\
- \sum_{k=1}^{K} c_k \left( \langle a | O_k^\dagger | b \rangle \langle j | O_k | i \rangle + \langle a | O_k | b \rangle^* \langle i | O_k | j \rangle \right).
\] (5.4)
Note that all matrix elements appearing here are quantities in a single copy of the theory. After inserting a complete basis of energy eigenstates in the formula (5.4), it becomes
\[
\langle ai | H_{\text{TFD}} | bj \rangle = \delta_{ij} \sum_{k=1}^{K} c_k \sum_d \langle a | O_k^\dagger | d \rangle \langle d | O_k | b \rangle + \delta_{ab} \sum_{k=1}^{K} c_k \sum_\ell \langle j | O_k^\dagger | \ell \rangle \langle \ell | O_k | i \rangle \\
- \sum_{k=1}^{K} c_k \left( \langle a | O_k^\dagger | b \rangle \langle j | O_k | i \rangle + \langle a | O_k | b \rangle^* \langle i | O_k | j \rangle \right).
\] (5.5)
We now specialize our discussion to an energy window with $N$ states, where $N = \exp(S)$. In the window, the energy-dependence of the operators $O_k$ can be taken to be constant. The ETH then simplifies to
\[
\langle a | O_k | b \rangle = \frac{1}{\sqrt{N}} R_{ab}^k,
\] (5.6)
where $a, b = 1, 2, \cdots, N$, $k = 1, \cdots, K$, and the matrix elements $R^k_{ab}$ are real random variables with mean $\tilde{\mu} = 0$ and standard deviation $\nu = 1$, often taken to be Gaussian. If we use equation (5.6) to build the Hamiltonian (5.4) on a computer, we can calculate its spectrum numerically. For $N = 50$, we plot it in figure 1. Although it is not clear from the figure, there is an eigenvalue at zero, since we know that $H_{\text{TFD}}$ annihilates the TFD state.
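This numerical experiment can be reproduced in a few lines. The sketch below is an illustrative implementation (not the authors' code), taking all $c_k = 1$ and real symmetric $O_k$ so that $\Theta$ acts trivially and $O^T = O$; it builds (5.1) from random matrices obeying (5.6) and confirms the zero eigenvalue, whose eigenvector is the maximally entangled state:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 20                                  # window size and number of operators

I = np.eye(N)
H = np.zeros((N * N, N * N))
for _ in range(K):
    R = rng.normal(size=(N, N))
    O = (R + R.T) / np.sqrt(2.0 * N)           # ETH in the window, eq. (5.6), symmetric
    d = np.kron(O, I) - np.kron(I, O.T)        # d_k = O_L - O_R^T  (c_k = 1)
    H += d.T @ d

evals, evecs = np.linalg.eigh(H)
tfd = np.eye(N).flatten() / np.sqrt(N)         # infinite-temperature TFD: sum_a |aa>/sqrt(N)

print(evals[0])                                # numerically zero
print(evals[1])                                # finite gap above it
print(abs(evecs[:, 0] @ tfd))                  # overlap with the TFD, close to 1
```

In this representation $d_k$ maps the flattened matrix $X$ to $O_k X - X O_k$, so the kernel consists of matrices commuting with all the random $O_k$, i.e. multiples of the identity: exactly the maximally entangled state.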
We further assume that the operators $O_k$ are low-energy and have soft UV behavior: they have an effective radius in energy space and do not connect energy eigenstates separated by more than the width of the window. The complete basis inserted in equation (5.5) then simplifies, since only eigenstates $|d\rangle, |\ell\rangle$ that lie in the energy window give a non-zero contribution. There are $N$ such states. The simplified ETH in equation (5.6) then implies that the first two terms in equation (5.5) have an explicit $N$ in front of them.
#### 5.1.1 Large K
Unlike the terms on the first line, those on the second line in equation (5.5) are in general not sign definite. However, the off-diagonal terms in the second line are random variables with mean zero and variance one, as per ETH. If $K$ is large, we can use the central limit theorem to conclude that the sum over $K$ of these will give us an approximately Gaussian random variable with mean zero and standard deviation $\sqrt{K}$. The terms in the second line will therefore be of order $\mathcal{O}(\sqrt{K})$ in general.
In this subsection we therefore assume $K \gg 1$, $K \approx N$, and study the spectrum of the TFD Hamiltonian (5.5). Since the terms on the second line of equation (5.5) are of order $\mathcal{O}(\sqrt{N})$, they can be ignored relative to those on the first line. However, some terms on the second line have a definite sign, so the central limit theorem argument above does not apply to them. These are the terms with $a = i$ and $b = j$, for which the last two terms in the Hamiltonian are sums of absolute squares; their contributions add coherently and are of order $\mathcal{O}(N)$ rather than $\mathcal{O}(\sqrt{N})$. Keeping these terms, the Hamiltonian becomes
\begin{align}
\langle ai | H_{\text{TFD}} | bj \rangle &= \delta_{ab} \delta_{ij} \sum_k c_k \left( \langle a | O_k^\dagger O_k | a \rangle + \langle i | O_k^\dagger O_k | i \rangle - 2 \langle a | O_k^\dagger | a \rangle \langle i | O_k | i \rangle \right) \\
&- \delta_{ai} \delta_{bj} \sum_k c_k \left( |\langle i | O_k | j \rangle|^2 + |\langle j | O_k | i \rangle|^2 \right).
\end{align}
Writing the approximate sizes of the matrix elements explicitly, this becomes
\begin{equation}
\langle ai | H_{\text{TFD}} | bj \rangle = d_1 \delta_{ab} \delta_{ij} - d_2 \delta_{ai} \delta_{bj},
\end{equation}
where $d_1$ and $d_2$ are approximately equal and of order $\mathcal{O}(1)$. This implies that there is an eigenvalue at zero, together with a band of eigenvalues coming from the second term, separated from zero by roughly $d_2$. The extra terms in the full Hamiltonian (5.5) that we discarded are of order $\mathcal{O}(\sqrt{N})$, so they do not spread the eigenvalue spectrum by too much at large $K$, leaving a finite gap.
We can verify this estimate by studying the gap of the full TFD Hamiltonian (5.4) numerically. In figure 2, we plot this gap as a function of $N$, with the dimensions of the Hamiltonian being $N^2 \times N^2$.
It is important to verify that the ground state of the TFD Hamiltonian in equation (5.4) is indeed the TFD state. Since we are constructing an infinite temperature TFD state, the TFD state is simply the maximally entangled state in the energy window (with no phases), and the structure of the Hamiltonian in (5.1) implies that this is precisely the ground state. We can also verify this numerically. In figure 3 we plot the ground state and the first excited state of the full TFD Hamiltonian.
The ground state is plotted in red, the first excited state in blue. Part (a) shows the projection of these states in the symmetric subspace, and part (b) their projection in its complement. As we expect, the ground state (red) lies fully in the symmetric subspace, with a norm of 1.0000. The fact that it is constant in the symmetric subspace means it has an overlap of 1.0000 with the (infinite temperature) TFD state. The first excited state (blue) has little support in the symmetric subspace, with a norm of 0.1514. This supports the existence of a finite gap, as shown in figure 2.
#### 5.1.2 Small K
We studied the eigenvalue spectrum of the TFD Hamiltonian in the case when $K$ is large (of the order of $N$). However, we would like to improve the situation by considering $K$ of order $\mathcal{O}(N^0)$. There are two main motivations. First, the sparseness of the low-energy spectrum of a holographic field theory means that few light operators are expected in holography, so a TFD Hamiltonian built from contributions of a few operators is the more interesting case. Second, coupling only a few operators is more feasible in the lab, so this scenario will be easier to realize in a future experiment.
Now we recall the general matrix elements of the TFD Hamiltonian, equation (5.4). We start by doing some numerical experiments to get some intuition for their behavior. In our toy model, we now choose $K = 2$ and $N = 100$. The energy eigenvalues of the original theories are again given by equation (5.12) and the operators by equation (5.6). With these choices, a typical eigenvalue distribution of the TFD Hamiltonian looks like figure 4.
This is not a semi-circular distribution of eigenvalues, but the numerics do suggest the existence of a finite gap. The distribution is distorted from that in figure 1 because the off-diagonal matrix elements of the TFD Hamiltonian in equation (5.4) do not cancel each other. However, the ground state of the Hamiltonian is still the infinite temperature TFD state. We can test this in our toy model. In part (a) of figure 5, we show the ground state (in red) and the first excited state (in blue) of the TFD Hamiltonian projected in the symmetric subspace. The norm of the projected ground state is 1.0000, with no relative phases. We also plot the first excited state, whose norm in the symmetric subspace is only 0.0997.
This further supports the finite gap seen in the histogram of figure 4. In fact, one can numerically study the behavior of the gap as a function of $N$ and confirm that there is a finite gap of order $\mathcal{O}(N^0)$. We show this in figure 6.
It is tantalizing to imagine that there could be an analytic formula for the eigenvalue distribution shown in figure 4. The distribution looks like some deformation of the Marchenko-Pastur distribution, but unfortunately we were not able to complete the analytic study of the eigenvalue spectrum of the TFD Hamiltonian with contributions from few operators. We leave this to future work.
Figure 5. Components of the ground state (in red) and the first excited state (in blue) of the full Hamiltonian in (a) the symmetric subspace and (b) the complement of the symmetric subspace. We have set $N=100$, $K=2$.
Figure 6. Gap as a function of $N$ for $H_{\text{TFD}}$ of dimension $N^2 \times N^2$ with $K = 2$.
### 5.2 Gap equation in full system
We now want to extend the reasoning above to the full spectrum of the theory. In particular, we want to take into account that the operators being used in the construction only connect states whose energy difference is not too big. We saw above that the TFD is the unique ground state within an energy window, and that the gap is not small. However, the TFD state has support over a wide range of energies, so there may be other states that look like the TFD within a small energy window but are orthogonal on the whole Hilbert space. Since the operators of our TFD Hamiltonian are somewhat local in energy space, these new states could be light. New light states would be a major obstacle to using our TFD Hamiltonian to cool to the ground state.
To analyze this, in this section we calculate the gap for these states that look like the TFD in a narrow window of energies but have arbitrary dependence on the energy on longer scales. Making use of the above results, we focus only within the diagonal subspace and use the standard ETH ansatz (5.3).
We begin with the full finite-temperature Hamiltonian, in the form
\[ H = \sum_k c_k d^\dagger_k d_k, \quad d_k = e^{-\gamma(H^0_L + H^0_R)} (O_L - O_R) e^{\gamma(H^0_L + H^0_R)}, \]
where \( \gamma = \beta/4 \). Focusing on the symmetric subspace, the eigenvalue equation becomes
\[ H_{aa} \psi_a + \sum_b H_{ab} \psi_b = \lambda \psi_a. \]
Here we have expanded the state in the energy basis, and
\[
H_{aa} = \sum_k c_k \sum_b e^{\alpha(E_a - E_b)} \left( |O^k_{ab}|^2 + |O^k_{ba}|^2 \right),
\]
\[
H_{ab} = -\sum_k c_k \left( |O^k_{ab}|^2 + |O^k_{ba}|^2 \right).
\]
Here \( \alpha \equiv \beta/2 = 2\gamma \), which arises from the conjugation by \( e^{\pm\gamma(H^0_L + H^0_R)} \). Note that this \( H_{ab} \) term is nonzero for \( a = b \), so it must be combined with \( H_{aa} \) to obtain the full diagonal matrix element.
One can verify that the thermofield double state is an eigenstate of this Hamiltonian with zero energy. Due to the form of the Hamiltonian as a sum of terms \( d^\dagger d \) with positive coefficients, there cannot be negative eigenvalues, so the TFD is the ground state.
This can also be verified by a numerical calculation of the eigenvalues and eigenvectors of the Hamiltonian. For example, suppose the energy eigenvalues of the original Hamiltonians are given by
\[ E(i) = \delta \left[ 1 + \left( W(\eta i) \right)^{1/p} \right], \]
where \( W(\cdot) \) satisfies \( x = W(xe^x) \). The density of states is defined as
\[ \rho(E) \equiv \frac{d i(E)}{dE}, \]
which turns out to be
\[ \rho(E) = \frac{p}{\eta \, \delta} \left( \frac{E}{\delta} - 1 \right)^{p-1} \left[ 1 + \left( \frac{E}{\delta} - 1 \right)^{p} \right] e^{\left( \frac{E}{\delta} - 1 \right)^{p}}. \]
This density function mimics the density of states of a QFT, which motivates choosing the energies to have the functional form in (5.12).
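The model spectrum (5.12) and its density of states are easy to probe numerically. The sketch below is an illustrative check (not from the original text): it inverts (5.12) with a simple Newton solver for the Lambert $W$ function and compares a finite-difference estimate of $\rho = di/dE$ against the closed form obtained by differentiating the inverse of (5.12); the probe index $i_0$ and step size are arbitrary:

```python
import numpy as np

delta, eta, p = 1.0, 1.0 / 3.0, 0.6            # parameter choices used in the text

def lambert_w(y):
    """Solve w e^w = y by Newton's method (y > 0)."""
    w = np.log(1.0 + y)                        # decent starting guess
    for _ in range(60):
        w -= (w * np.exp(w) - y) / (np.exp(w) * (1.0 + w))
    return w

def energy(i):                                  # eq. (5.12)
    return delta * (1.0 + lambert_w(eta * i) ** (1.0 / p))

i0, h = 40.0, 1e-4
rho_num = 2.0 * h / (energy(i0 + h) - energy(i0 - h))   # rho = di/dE numerically
x = energy(i0) / delta - 1.0
rho_closed = (p / (eta * delta)) * x ** (p - 1.0) * (1.0 + x ** p) * np.exp(x ** p)
print(rho_num, rho_closed)                      # the two agree
```

At large energies the closed form grows like $e^{(E/\delta - 1)^p}$, the Hagedorn-like growth this model is designed to mimic.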
Using this model with the choices \( \delta = 1, \eta = 1/3 \) and \( p = 0.6 \) we show in figure 7 the numerical ground state and the first excited state in the symmetric subspace and in its complement in the full Hilbert space. The numerics support our expectation that the ground state is in the symmetric subspace, with its norm in the subspace being 1. Furthermore, even the first excited state is in the symmetric subspace, with a norm of 0.999657. We would now like to estimate the gap between the ground state and the first excited state.
It is difficult to find the eigenvectors in general. However, it is natural to go to the continuum limit since the density of states is large in our ETH regime.
Define the state as
$$|\chi\rangle = \sum_a \frac{\psi(E_a)}{\sqrt{\rho(E_a)}} |aa\rangle .$$ \hspace{1cm} (5.15)
The eigenvalue equation becomes
$$H_{aa} \frac{\psi(E_a)}{\sqrt{\rho(E_a)}} + \sum_b H_{ab} \frac{\psi(E_b)}{\sqrt{\rho(E_b)}} = \lambda \frac{\psi(E_a)}{\sqrt{\rho(E_a)}} .$$ \hspace{1cm} (5.16)
Using the ETH ansatz, along with the ensemble average,
$$H_{ab} = -\frac{1}{\rho(\bar{E})} f(\bar{E}, \omega) ,$$ \hspace{1cm} (5.17)
where $f$ is a positive function that depends strongly on the energy difference $\omega$ and weakly on the average energy $\bar{E}$. Converting from sums to integrals, the eigenvalue equation becomes
$$\int dE' \rho(E') \left[ \frac{1}{\rho(\bar{E})} e^{\alpha(E-E')} f(\bar{E}, \omega) \frac{\psi(E)}{\sqrt{\rho(E)}} - \frac{1}{\rho(\bar{E})} f(\bar{E}, \omega) \frac{\psi(E')}{\sqrt{\rho(E')}} \right] = \lambda \frac{\psi(E)}{\sqrt{\rho(E)}} .$$ \hspace{1cm} (5.18)
Recall that $\bar{E} = \frac{1}{2}(E + E')$ and $\omega = E' - E$. Define the density of states by
$$\rho(E) = \Lambda e^{S(E)} ,$$ \hspace{1cm} (5.19)
where $\Lambda$ is an arbitrary constant with dimensions of inverse energy. Then the eigenvalue equation becomes
$$\int dE' f(\bar{E}, \omega) \left[ e^{S(E')-S(\bar{E})-\alpha\omega} \psi(E) - e^{S(E')/2+S(E)/2-S(\bar{E})} \psi(E') \right] = \lambda \psi(E) .$$ \hspace{1cm} (5.20)
To obtain the above equation, we have used the ETH ansatz, the statistics of our operators, and the continuum limit.
Now to make further progress, assume that the characteristic energy scale of our operators, $\sigma_E$, is small compared to the total energy of the system, so the function $f(\bar{E}, \omega)$ is small unless the energy difference $\omega$ is small. Therefore, we can Taylor expand the entropy as a function of energy. In fact, we will see that the characteristic size of the energy difference $\omega$, which is set by the energy scale $\sigma_E$ of our operators, is small compared to many other scales in the problem. Therefore we expand everything using $\omega$ as a small parameter (we will check later that this approximation is self-consistent) to obtain
\begin{equation}
\psi(E) \int d\omega f(\bar{E}, \omega) \left[ \omega(S'(E)/2 - \alpha) + \frac{1}{2} \omega^2 (S'(E)/2 - \alpha)^2 + S''(E)\omega^2/4 \right] \\
- \int d\omega f(\bar{E}, \omega) \left( \omega \psi'(E) + \frac{1}{2} \omega^2 \psi''(E) \right) = \lambda \psi(E).
\end{equation}
Define two functions encoding the first two moments of our operators,
\begin{align}
g(E) &\equiv \int d\omega f(E + \omega/2, \omega) \omega, \\
h(E)^2 &\equiv \int d\omega f(E + \omega/2, \omega) \omega^2/2.
\end{align}
With these definitions, the eigenvalue equation is simply
\begin{equation}
-h^2 \partial_E^2 \psi - g \partial_E \psi + \left[ \frac{1}{2}(S'(E) - \beta)g + \frac{1}{4}(S'(E) - \beta)^2 h^2 + \frac{1}{2} S''(E) h^2 \right] \psi = \lambda \psi.
\end{equation}
An important consistency check is that this operator should be Hermitian. Since we absorbed appropriate factors of the density of states into the definition of the wavefunction, the inner product is simply $\int dE \psi_1^* \psi_2$. Hermiticity then requires that our two functions obey the consistency condition
\begin{equation}
g = \partial_E(h^2).
\end{equation}
This is not obvious from the definitions, but note that $f(E, \omega)$ is an even function of $\omega$, and by assumption has a mild dependence on its first argument. Therefore we can expand
\begin{align}
g(E) &\approx \int d\omega \left[ f(E, \omega) \omega + \partial_E f(E, \omega) \omega^2/2 \right] = \int d\omega \partial_E f(E, \omega) \omega^2/2, \\
\partial_E(h^2) &\approx \int d\omega \partial_E f(E, \omega) \omega^2/2.
\end{align}
Therefore everything is consistent as long as the operators we are using have a mild dependence on the average energy, which is expected. In the following we will freely substitute $g = \partial_E(h^2)$.
In order to find the eigenvalues, it is convenient to change variables so that the equation takes the form of a standard Schrödinger equation. Define a new independent variable $y$ and rescale the wavefunction as follows:
\begin{equation}
dy = \frac{dE}{h(E)}, \quad \psi = \frac{\bar{\psi}}{\sqrt{h}}. \tag{5.27}
\end{equation}
This leads to an equation for the rescaled wavefunction
\[- \frac{\partial^2}{\partial y^2} \bar{\psi} + V \bar{\psi} = \lambda \bar{\psi}. \tag{5.28}\]
The potential \(V\) takes the supersymmetric form
\[V = W^2 + \partial_y W. \tag{5.29}\]
The superpotential is
\[W = \frac{1}{2} (\partial_E h + h S'(E) - \beta h). \tag{5.30}\]
It is also useful to note the potential in terms of the more intuitive variable \(E\),
\[V = \frac{1}{4} (\partial_E h)^2 + \frac{1}{2} h \partial_E^2 h + (S'(E) - \beta) h \partial_E h + \frac{1}{4} (S'(E) - \beta)^2 h^2 + \frac{1}{2} S''(E) h^2. \tag{5.31}\]
In analyzing these equations, keep in mind that the independent variable \(y\) of the Schrödinger equation is different from \(E\).
Now we can calculate the spectrum. The ground state wavefunction has zero energy and is given by \(\bar{\psi} = \exp(\int W dy)\). The integral can be done explicitly, giving
\[\int W dy = \int W dE/h = \frac{1}{2} (\log h + S - \beta E). \tag{5.32}\]
Therefore, explicitly
\[\bar{\psi} = \sqrt{h} \exp \left[ \frac{1}{2} (S - \beta E) \right], \tag{5.33}\]
which is the TFD state disguised by our redefinitions.
It is crucial that this ground state is normalizable from the perspective of this Schrödinger equation. This is the case as long as we choose our operators such that at large energy, the function \(h(E)\) grows slower than exponentially in the energy, a mild requirement.
We will see that it is a good approximation to expand near \(E_\beta\) to obtain
\[\psi_{\text{TFD}} \approx \# \exp \left[ \frac{1}{4} S''_\beta (E - E_\beta)^2 \right]. \tag{5.34}\]
Since \(S''(E)\) is negative, the Gaussian has the correct sign. Higher order terms in the exponential that we have neglected are given by \(\partial_E^n S (\Delta E)^n\) with \(n > 2\).
As long as the entropy as a function of energy has a simple form, such as a power law, we have
\[S'' \sim \frac{S}{E^2}, \tag{5.35}\]
so that the characteristic spread of the wavefunction is given by
\[\frac{\Delta E}{E} \sim \frac{1}{\sqrt{-S''} E_\beta} \sim \frac{1}{\sqrt{S}}. \tag{5.36}\]
The fluctuations around the mean energy are suppressed as $1/\sqrt{S}$ at large entropy. This behavior is familiar from statistical mechanics, and had to happen because the probability distribution for the energies in the TFD state is given by the standard canonical ensemble.
We can now check that the Gaussian approximation is good, continuing to assume that the entropy as a function of energy takes a simple power law form. The terms we have neglected are
$$\partial_E^n S (\Delta E)^n \sim S \left( \frac{\Delta E}{E} \right)^n \sim S^{1-n/2},$$
where in the last equation we have used the Gaussian formula for $\Delta E$. Since we are working at large entropy $S$, we are justified in dropping terms with $n > 2$.
In addition, we can now check whether our approximation of small energy difference is self-consistent. We require
$$\sigma_E \ll \left|\partial_E^2 S\right|^{-1/2} = \beta^{-1} \sqrt{C_V}, \tag{5.38}$$
where $C_V$ is the heat capacity at the temperature $\beta^{-1}$. This is easily satisfied for large systems because the heat capacity is extensive in the size of the system.
5.3 Solving for the gap
We can take advantage of the SUSY quantum mechanics structure in order to estimate the gap. Note that under our assumptions, the ground state of the original potential, $\bar{\psi} = \exp(\int W dy)$, is normalizable. The ‘partner potential’
$$\tilde{V} = W^2 - \partial_y W,$$
shares all of the energy eigenvalues with the original potential, except for the ground state. The wavefunction $\psi = \exp(-\int W dy)$ is formally an eigenstate with eigenvalue 0, but it is non-normalizable under our assumptions.
Therefore, the ground state energy of the partner potential $\tilde{V}$ is the same as the first excited state of the original potential. Since the ground state of the original potential has eigenvalue 0, the gap is simply given by the ground state energy of the partner potential. Explicitly the partner potential is
$$\tilde{V} = \frac{h^2}{4} (\partial_E S - \beta)^2 - \frac{1}{2} h^2 \partial_E^2 S + \frac{1}{4} (\partial_E h)^2 - \frac{1}{2} h \partial_E^2 h.$$
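The pairing of spectra between the original and partner potentials is easy to see numerically. Below is a minimal finite-difference sketch with the toy superpotential $W(y) = -y$ (our illustrative choice for this check, not the $W$ of (5.30)): the original potential $W^2 + \partial_y W = y^2 - 1$ has eigenvalues $0, 2, 4, \ldots$, while the partner $W^2 - \partial_y W = y^2 + 1$ has eigenvalues $2, 4, 6, \ldots$, so the gap of the original equals the ground energy of the partner.

```python
import numpy as np

# Finite-difference check of the SUSY pairing: V = W^2 + W' and the partner
# Vt = W^2 - W' share all eigenvalues except the zero mode of V.
# Toy superpotential W(y) = -y, so exp(+int W dy) = exp(-y^2/2) is normalizable.
n, L = 2000, 10.0
y = np.linspace(-L, L, n)
dy = y[1] - y[0]

def spectrum(V):
    """Eigenvalues of -d^2/dy^2 + V(y) with hard walls at y = +-L."""
    H = (np.diag(2.0 / dy**2 + V)
         - np.diag(np.ones(n - 1) / dy**2, 1)
         - np.diag(np.ones(n - 1) / dy**2, -1))
    return np.linalg.eigvalsh(H)

W, dW = -y, -np.ones_like(y)
ev  = spectrum(W**2 + dW)   # y^2 - 1: eigenvalues 0, 2, 4, ...
evt = spectrum(W**2 - dW)   # y^2 + 1: eigenvalues 2, 4, 6, ...
print(ev[:3], evt[:2])      # gap of V equals the ground energy of Vt
```

The zero mode of the partner is formally $\exp(+y^2/2)$, which is non-normalizable, so the partner's lowest eigenvalue is strictly positive, exactly as the argument above requires.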
It is worthwhile considering how the different terms in this potential scale with the volume in a large system. Assuming we choose the number of operators to scale with the volume, $h^2 \sim V$. At high temperatures, $E \sim V$ and $S \sim V$ so $\partial_E^2 S \sim V^{-1}$. Therefore, in the partner potential $\tilde{V}$, the first term scales linearly with the volume, while the second term is volume independent and the remaining terms depend inversely on volume. The same analysis holds if instead of large volume we consider large entropy or central charge.
At large volume, it is therefore sensible to ignore the last two terms and treat the second term as a perturbation of the first. Furthermore, we can expand around energy $E_\beta$ defined by
$$S'(E_\beta) = \beta,$$
to obtain
\[
\tilde{V} \approx \frac{h_\beta^2}{4} \left(S''_\beta\right)^2 (E - E_\beta)^2 - \frac{1}{2} h_\beta^2 S''_\beta, \tag{5.42}
\]
where \(S''_\beta\) denotes the second derivative of the entropy with respect to energy, evaluated at the energy \(E_\beta\) corresponding to temperature \(\beta^{-1}\). Note that \(S''_\beta\) is typically negative and is related to the heat capacity.
Our potential therefore becomes approximately quadratic with an overall shift. Calculating the ground state gives simply
\[
\text{Gap} \approx h^2_\beta |S''_\beta|. \tag{5.43}
\]
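For completeness, the harmonic-oscillator computation behind this estimate is short (a check in the conventions above, using $E - E_\beta \approx h_\beta\, y$ near the minimum):
\[
\tilde{V} \approx \frac{\omega^2}{4}\, y^2 + \frac{\omega}{2}, \qquad \omega \equiv h_\beta^2 |S''_\beta|,
\]
and the eigenvalues of $-\partial_y^2 + \frac{\omega^2}{4} y^2$ are $\omega\left(n + \tfrac{1}{2}\right)$. The ground state energy of the partner potential is therefore $\tfrac{\omega}{2} + \tfrac{\omega}{2} = h_\beta^2 |S''_\beta|$, reproducing (5.43).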
In simple situations, the \(S''\) term is typically of order
\[
|S''_\beta| \sim \frac{S}{E^2} \sim \frac{\beta}{E}. \tag{5.44}
\]
To see how the gap behaves we need to know the behavior of \(h(E)\). Using the definition of \(f\), we can write \(h\) in terms of matrix elements of the operators,
\[
h(E)^2 = \sum_{k,b} c_k \frac{1}{2} (E_b - E_a)^2 \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right) \frac{\rho(\bar{E})}{\rho(E_b)}. \tag{5.45}
\]
Evaluating this at \(E_\beta\) and expanding for small energy difference gives
\[
h^2_\beta \approx \sum_{k,b} c_k \frac{1}{2} (E_b - E_\beta)^2 \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right) e^{-\beta (E_b - E_\beta)/2}, \tag{5.46}
\]
where the state \(|a\rangle\) has energy \(E_\beta\). This quantity has a simple description in terms of the Euclidean path integral
\begin{align}
h^2_\beta &\approx \frac{1}{2} \sum_k c_k \langle E_\beta | \left( \dot{O}_k^\dagger(0) \dot{O}_k(\beta/2) + \dot{O}_k^*(0) \dot{O}_k^T(\beta/2) \right) | E_\beta \rangle \nonumber \\
&\approx \frac{1}{2} \sum_k c_k \left\langle \dot{O}_k^\dagger(0) \dot{O}_k(\beta/2) + \dot{O}_k^*(0) \dot{O}_k^T(\beta/2) \right\rangle_\beta, \tag{5.47}
\end{align}
where dot denotes the derivative with respect to Euclidean time, and the operators are separated by \(\beta/2\) in Euclidean time. In other words, this can be thought of as a correlator where the operators are in the two different CFT’s.
To be more explicit, we need to be more specific about the theory. We will now focus on quantum field theories such that the sum over \(k\) represents a sum over different space positions as well as different species of operators \(O_k\). In field theories without a mass gap, or with a gap small compared to the temperature scale, we expect
\[
\left\langle \dot{O}_k^\dagger(0) \dot{O}_k(\beta/2) + \dot{O}_k^*(0) \dot{O}_k^T(\beta/2) \right\rangle_\beta \sim T^{2\Delta+2}, \tag{5.48}
\]
where \(\Delta\) is the scaling dimension of the operator \(O\).
Further, it is natural that the number of operators scales with the volume. In general, we could choose the separation between operators to be any length scale, but to keep things
simple we choose to insert one operator per thermal volume (in other words, our operators are separated by $\beta$). We also use factors of $\beta$ to make the Hamiltonian have the correct dimensions. Putting these assumptions together gives
$$h^2(\beta^{-1}) = b(\text{Vol})T^{d+3}, \quad (5.49)$$
where $b$ is a dimensionless constant that adjusts the overall normalization of the Hamiltonian.
Using in addition that in CFT’s
$$E \sim c(\text{Vol})T^{d+1}, \quad S \sim c(\text{Vol})T^d, \quad S'' \sim \frac{1}{c(\text{Vol})T^{d+2}}, \quad (5.50)$$
where $c$ is the central charge, we get
$$\text{Gap} \sim \frac{b}{c}T. \quad (5.51)$$
The central charge in the denominator is a bit annoying, but it is far better than the naive guess $\exp(-c)$. If we want an order one gap, we should either have $c$ terms in the Hamiltonian, or simply a large coefficient $b$ proportional to $c$.
6 Approximate TFD Hamiltonians
In the above section, we worked with a Hamiltonian whose ground state is exactly the thermofield double state. However, it may be difficult to implement this Hamiltonian because operators of the form $\exp(-\gamma H)\mathcal{O}\exp(\gamma H)$ are generally difficult to compute in a strongly coupled theory. In addition, if we want a simple bulk dual with two asymptotic regions, we would like the Hamiltonian at high energy to be dominated by the original Hamiltonians of the left and right systems. This motivates us to consider a simpler Hamiltonian and study its ground state, as we will do below.
6.1 Simplest Hamiltonian
The simplest Hamiltonian we are aware of that produces something close to the TFD state is
$$H_S = H_L^0 + H_R^0 + \sum_k c_k \left(O_L^k - O_R^{k\,T}\right)^\dagger \left(O_L^k - O_R^{k\,T}\right). \quad (6.1)$$
We want to find the ground state and gap for this Hamiltonian. Using ETH, we can argue that the low-energy eigenstates lie in the symmetric subspace, as in the previous section. Random statistics will then help us calculate the gap. In the symmetric subspace, the eigenvalue equation for the Hamiltonian $H_S$ simplifies to
$$H_{aa}\psi_a + \sum_b H_{ab}\psi_b = \lambda\psi_a, \quad (6.2)$$
with the expressions
\begin{align}
H_{aa} &= 2E_a + \sum_k c_k \sum_b \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right), \nonumber \\
H_{ab} &= - \sum_k c_k \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right). \tag{6.3}
\end{align}
Note that there is a competition here: the interaction terms would be minimized by the infinite-temperature thermofield double state, while the free Hamiltonians are minimized in the vacuum.
Assuming that the interaction terms are strong enough, the ground state will occur in the high energy regime where ETH works. The analysis is similar to what we did in the previous section, but slightly different. We again define the wavefunction by
\[
|\xi\rangle = \sum_a \frac{\psi(E_a)}{\sqrt{\rho(E_a)}} |aa\rangle.
\]
With this ansatz for the eigenstate, the eigenvalue equation becomes
\[
\sum_{k,b} c_k \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right) \left( \psi(E_a) - e^{-S(E_b)/2+S(E_a)/2} \psi(E_b) \right) = (\lambda - 2E_a) \psi(E_a).
\]
Now almost everything can be expanded for small energy difference, except that we do not wish to assume
\[
S(E_a) - S(E_b) \approx \frac{E_a - E_b}{T(E_a)},
\]
is small. However, the combination $S'' \omega^2$ is still small, as in (5.38). Notice that for this inequality to be satisfied simultaneously with the requirement of locality at the temperature scale, we must have:
\[
1 \gg \frac{T^2}{\sigma_E^2} \gg \frac{1}{C_V}.
\]
This means that locality pushes us into the thermodynamic limit. Now expanding the second term gives
\begin{align}
e^{-S(E_b)/2+S(E_a)/2} \psi(E_b) &\approx e^{-S'(E_a)\omega/2} \left( \left[ 1 - \frac{1}{4} S''(E_a) \omega^2 \right] \psi(E_a) \right. \\
&\quad \left. + \omega \psi'(E_a) + \frac{\omega^2}{2} \psi''(E_a) \right),
\end{align}
where as before $\omega \equiv E_b - E_a$. In the continuum limit, one can then write the eigenvalue equation as the differential equation
\[
-h^2(E) \partial_E^2 \psi - g(E) \partial_E \psi + \left( k(E) + 2E + \frac{1}{2} h^2(E) S''(E) \right) \psi = \lambda \psi,
\]
where the functions $h$ and $g$ are as defined in equation (5.22) and
\[
k(E_a) \equiv \sum_{k,b} c_k \left( \left| O^k_{ab} \right|^2 + \left| O^k_{ba} \right|^2 \right) \left( 1 - e^{-S'(E_a)\omega/2} \right).
\]
We now redefine the wavefunction and use the new variable $y$, as in equation (5.27). This simplifies the eigenvalue equation and brings it into the standard Schrödinger form
$$-\frac{\partial^2}{\partial y^2} \bar{\psi} + V \bar{\psi} = \lambda \bar{\psi}, \quad (6.11)$$
with potential given by
$$V = k(E) + 2E + \frac{1}{2} h^2 S'' + \frac{1}{4} h'^2 + \frac{1}{2} hh''. \quad (6.12)$$
Note that the last two terms in the potential are proportional to inverse powers of the volume and can be dropped; the first two terms dominate at large volume.
To determine the gap, the key part is thus the behavior of the function $k(E)$. Using $S'(E) \equiv \beta(E)$, we can write $k(E)$ in terms of finite temperature 2-point functions of the operators as follows
$$k(E) = \sum_k c_k \left( \langle E | O_k^\dagger(0)O_k(0) - O_k^\dagger(-\beta(E)/2)O_k(0) | E \rangle + O \leftrightarrow O^T \right). \quad (6.13)$$
Using ETH, we can show that this is simply the difference between a 2-point correlator where both operators are in the same copy and a correlator where the operators are in different copies of the theory,
$$k(E) = \sum_k c_k \left( \langle O_k^\dagger(0)O_k(0) \rangle_\beta - \langle O_k^\dagger(0)O_k(\beta/2) \rangle_\beta + O \leftrightarrow O^T \right). \quad (6.14)$$
We can estimate these 2-point functions explicitly if we take the case of 2D CFT. Since we have regulated our operators by smearing them in space/time, the first term will be fixed in terms of UV properties of these operators. The potential then becomes
$$k(E) = \sum_k c_k \left( \sigma_E^{2\Delta_k} - T^{2\Delta_k} + \ldots \right). \quad (6.15)$$
Here $\sigma_E$ is set by the UV regulator of our operator and the ellipsis denotes terms that are suppressed by higher powers of $T/\sigma_E$.
For simplicity, we assume that all of our operators have the same dimension $\Delta$; weakening this assumption will lead to minor modifications. We also assume a large $N$ limit for our theories. In this limit, the energy of a CFT is simply
$$E \approx cVT^{d+1}, \quad (6.16)$$
where $d$ denotes the space dimension of the theory and $T = \beta(E)^{-1}$. The leading terms in the potential at large entropy will then be given by
$$V(E) = \text{constant} + 2cVT^{d+1} - \sum_k c_k T^{2\Delta} + \ldots. \quad (6.17)$$
We now need the minimum of the potential $V(E)$, using the above estimate for $k(E)$. It is given by the condition
$$\frac{(d+1)cV}{\Delta} T_*^{d+1-2\Delta} = \sum_k c_k. \quad (6.18)$$
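As a quick numerical sanity check of this condition (with illustrative values of our own choosing: $d = 2$, $\Delta = 1$, $cV = 1$, $\sum_k c_k = K = 6$, so (6.18) predicts $T_* = \Delta K / ((d+1)cV) = 2$), one can minimize the potential of (6.17) on a grid:

```python
import numpy as np

# Illustrative parameters: space dimension d, operator dimension Delta,
# central charge times volume cV, and total interaction strength K = sum_k c_k.
d, Delta, cV, K = 2, 1, 1.0, 6.0

T = np.linspace(0.05, 10.0, 20001)
# Potential (6.17) with the constant dropped: 2 c Vol T^{d+1} - K T^{2 Delta}
V = 2 * cV * T ** (d + 1) - K * T ** (2 * Delta)

T_star = T[np.argmin(V)]                       # numerical minimum
prediction = ((Delta * K) / ((d + 1) * cV)) ** (1.0 / (d + 1 - 2 * Delta))
print(T_star, prediction)                      # both close to K/3 = 2.0
```

For these values the potential is $2T^3 - 6T^2$, whose minimum at $T_* = 2$ agrees with the prediction of (6.18).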
We now approximate the potential to be quadratic in energy near the minimum. The eigenvalue equation (6.11) implies that the gap is
\[
\text{Gap} \approx \sqrt{\partial_y^2 V} \approx \sqrt{h^2 \partial_E^2 V},
\]
where we have evaluated the second derivative at the minimum of the potential and used \( \partial_E V = 0 \). It is easier to take derivatives with respect to \( T \), since the potential in (6.17) is naturally expressed in that variable. Making this change of variables and assuming large entropy, we get
\[
\text{Gap} \sim h T_*^2 |S''| \sqrt{\partial_T^2 V}\,\Big|_{T_*} \sim h T_*^2 |S''| \sqrt{cVT_*^{d-1}},
\]
where we have ignored order 1 constants like \( d, \Delta \). Using now
\[
h^2 = \sum_k c_k T_*^{2\Delta+2},
\]
along with the condition (6.18), yields
\[
\text{Gap} \sim T_*.
\]
The ground state wavefunction is Gaussian. Going back to energy as our variable, we find
\[
\psi(E) \sim \exp \left[ -\# |S''|(E - E_*)^2 \right].
\]
In order to determine the order one number \( \# \), we would need to know the functions \( k(E) \) and \( h(E) \) more precisely. Recall that the true thermofield double is given by the wavefunction
\[
\psi_{\text{TFD}} \approx \# \exp \left[ -\frac{1}{4} |S''_\beta| (E - E_\beta)^2 \right].
\]
As long as our approximations hold, both the true and approximate TFD state live in the symmetric subspace, so they have the same entanglement structure. This is easily verified numerically in the toy model we described near equation (5.12). Figure 8 shows the ground state and the first excited state of the approximate Hamiltonian in the symmetric subspace and in its complement in the full Hilbert space. The ground state is in the symmetric subspace to a very good approximation, as the norm 0.99989 of its projection onto that subspace shows. It has an overlap of 0.95445 with the exact TFD state. Further, even the first excited state is approximately in the symmetric subspace, with a projection of norm 0.99962. However, the spread in energies may differ between the true and approximate TFD states by an order one factor determined by the unknown constant \( \# \). The entanglement entropy differs by an additive order one number, since the number of states differs by a multiplicative order one factor. Therefore, it is natural to conclude that the bulk dual of the approximate TFD state is also the eternal black hole.
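Checks of this kind take only a few lines for a small random-matrix toy (our own minimal sketch, with a single random Hermitian operator and illustrative values $n = 12$, $c = 4$, $\beta = 1$ — not the spin-chain model behind figure 8). Identifying the doubled Hilbert space with $n \times n$ matrices, the sketch verifies the exact identities that the interaction annihilates the infinite-temperature TFD and that the mirror operator annihilates the finite-$\beta$ TFD, then builds the simplest Hamiltonian (6.1) and measures the weight of its ground state in the symmetric subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, beta = 12, 4.0, 1.0

# Small random system: GOE-like Hamiltonian and a random Hermitian operator
H0 = rng.normal(size=(n, n)); H0 = (H0 + H0.T) / np.sqrt(2 * n)
O  = rng.normal(size=(n, n)); O  = (O + O.T) / np.sqrt(2 * n)
I  = np.eye(n)
e0, U = np.linalg.eigh(H0)

def f_of_H(g):
    """Apply a scalar function to H0 via its eigendecomposition."""
    return U @ np.diag(g(e0)) @ U.T

# Identify |i>|j> with matrices M_ij; then TFD_beta ~ vec(exp(-beta*H0/2)),
# and the mirror of O on the right factor is exp(-bH/2) O^T exp(+bH/2)
# (H0 real symmetric, so the antiunitary Theta acts trivially here).
Em = f_of_H(lambda e: np.exp(-beta * e / 2))
Ep = f_of_H(lambda e: np.exp(+beta * e / 2))
tfd = Em.reshape(-1); tfd = tfd / np.linalg.norm(tfd)
d = np.kron(O, I) - np.kron(I, Em @ O.T @ Ep)
print(np.linalg.norm(d @ tfd))      # zero up to roundoff: d annihilates TFD

# Simplest Hamiltonian (6.1) with one operator; D kills vec(I), the beta=0 TFD
D = np.kron(O, I) - np.kron(I, O.T)
HS = np.kron(H0, I) + np.kron(I, H0) + c * D.T @ D
evals, evecs = np.linalg.eigh(HS)
gs = evecs[:, 0]

# Weight of the ground state in the symmetric subspace span{|aa>}
sym = np.stack([np.kron(U[:, a], U[:, a]) for a in range(n)], axis=1)
print(np.linalg.norm(sym.T @ gs))   # typically close to 1 for sizable c
print(evals[1] - evals[0])          # the gap of the approximate Hamiltonian
```

The two annihilation identities hold exactly in any finite dimension; the symmetric-subspace weight and gap depend on the random draw and on $c$, so their precise values (unlike the 0.99989 quoted above for the spin-chain model) are only illustrative.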
Finally, we would like to comment on how strong the ‘interaction term’ in equation (6.3) must be in order for the final temperature to be in the regime where ETH is valid. The interaction terms will be as large as the original Hamiltonian near the target temperature. For a CFT with large central charge \( c \), we want the energy to be order \( c \) to be in the ETH
Figure 8. Components of the ground state (in red) and the first excited state (in blue) of the approximate Hamiltonian in (a) symmetric subspace and (b) the complement of the symmetric subspace in the full Hilbert space.
regime. If we use low dimension primary operators, the interaction term \((O_L - O_R)^2\) is order one. Assuming that the number of different operators used is also order one, we require the coefficient \(c_k\) of the interaction term to be of order the central charge,
\[
c_k \sim c. \tag{6.25}
\]
This would make it difficult to calculate using perturbation theory in the bulk. However, our analysis above has not treated the interaction term perturbatively, so it should be valid in this regime.
7 Discussion
7.1 Experimental firewalls
We have discussed in detail how to construct a TFD state as the ground state of a TFD Hamiltonian with a gap that is not very small. In this section, we will attempt to use a state so constructed to perform some interesting gedanken experiments. In particular, we will discuss what has recently been called a “teleportation” experiment. In the case of holographic field theories, our discussion can be interpreted as probing firewalls in the eternal black hole geometry experimentally.
We now briefly review the “teleportation” experiment as discussed in [1, 2] to set the stage. One starts with two identical (holographic) field theories in the TFD state and actively perturbs the doubled theory with a double-trace perturbation such that
\[
S_{\text{tot}} \rightarrow S_{\text{tot}} + g V, \tag{7.1}
\]
with the constant \(g\) chosen to have a specific sign before the experiment. The perturbation is of the form,
\[
V \equiv \frac{1}{K} \sum_{i=1}^{K} \int d^{d-1} x \; O^i_L(0, \vec{x}) O^i_R(0, \vec{x}). \tag{7.2}
\]
This looks very similar to the interaction terms in the TFD Hamiltonian that we have constructed. If we assume that the cooling procedure results in the TFD state at time \(t = 0\),
such terms are already present in the total action. In the cases where the leading terms in the TFD Hamiltonian are the free Hamiltonians of the individual theories, these interaction terms can be thought of as a perturbation of the doubled theory. If we perform an experiment in which $V$ is turned off a little while after $t = 0$, then physically separating the two theories so that they no longer interact simulates this experiment, since the interaction terms in the TFD Hamiltonian stop acting after the separation. The final result of the perturbation $V$ can then be measured by studying left-right correlators, for example.
[2] studied such correlators using the dual eternal black hole geometry. The double-trace perturbation made up of many light operators then can be thought of as sending shockwaves into the bulk from the two boundaries. A signal from the left then interacts with the shockwave(s) and gives a non-zero commutator with a right operator. This was interpreted as traversing the wormhole.
However, it is interesting to understand the role of $V$ in the experiment, as well as in “defining” the TFD state. The maximally chaotic dynamics in the CFT does not care about the form of $V$, and thus the result of the experiment should be unaffected. But [11] showed that this is only the case when the signal is chosen appropriately. More precisely, if $V$ is replaced by $U_L^\dagger V U_L$, the signal that traverses the wormhole is $U_L^\dagger \phi_L U_L$ and not $\phi_L$. We can think of this as defining a “dictionary” between which left operators interact with which right operators. Before the perturbation, such a dictionary is mere convention, but it becomes physical once the perturbation opens the wormhole. We will see an example of this using the free fermion field theory in appendix A.
7.2 Quantum computation and machine learning
So far we have discussed the difficulty in constructing the TFD state, a specific state with maximal entanglement between two quantum systems, with particular focus on the gap in the TFD Hamiltonian. The problem of finding the minimal energy eigenvalue of the TFD Hamiltonian can be rephrased in the language of quantum computing. We will now discuss this interpretation in brief.
First, let us consider the ‘satisfiability problem’, which is the first problem proven to be NP-complete. For orientation, consider the task of obtaining a specific value of the Boolean variable $S \equiv x_1 \lor x_2$. Here both $x_i$ are themselves Boolean variables taking values $\{0, 1\}$, and say we want $S$ to have the value 1. What combinations of values $(x_1, x_2)$ will “satisfy” this task? The answer is well known: the disjunction (OR) used above implies that $S$ is 1 whenever either or both of $x_1, x_2$ are 1.
Now consider a more complicated problem. Let
$$S_i \equiv x_{\mu_1} \lor x_{\mu_2} \lor \cdots \lor x_{\mu_k} \quad (7.3)$$
be a disjunction of $k$ Boolean variables $x_{\mu_i}$. Define the variable
$$S \equiv S_1 \land S_2 \land \cdots \land S_m. \quad (7.4)$$
The problem of finding one or more configurations of the base Boolean variables $\{x_{\mu_i}\}$ that satisfy $S = 1$ is called a $k$-SAT problem in computer science. The decision problem
of whether $S = 1$ is possible can be shown to be in NP [12]. Moreover, it is known to be NP-complete for all $k \geq 3$. Such problems are critical elements for many other computational tasks.
As described in [7], one may use a quantum computer to solve such problems via quantum annealing. The strategy is to associate to each proposition a positive semi-definite operator $H_i$ such that $H_i |\psi\rangle = 0$ iff the state $|\psi\rangle$ obeys the logical statement defined by $S_i$. The solvability of the SAT problem is then mapped to the question of whether $H = \sum_i H_i$ has a zero energy ground state.
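To make the mapping concrete, here is a minimal classical sketch (the clauses and the diagonal encoding are our own illustrative choices, not the specific construction of [7]): each clause $S_i$ becomes a diagonal projector $H_i$ onto the assignments that violate it, so $H = \sum_i H_i$ counts violated clauses, and $S$ is satisfiable iff the ground state energy is zero.

```python
import itertools
import numpy as np

def clause_projector(n, lits):
    """Diagonal projector onto assignments violating the clause.
    lits: nonzero ints, +k for x_k, -k for NOT x_k (1-indexed)."""
    diag = np.zeros(2 ** n)
    for idx, bits in enumerate(itertools.product([0, 1], repeat=n)):
        satisfied = any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in lits)
        if not satisfied:
            diag[idx] = 1.0          # clause violated -> energy penalty
    return diag

def ground_energy(n, clauses):
    # H = sum of diagonal clause Hamiltonians; its minimum is the least
    # number of violated clauses over all assignments
    H = sum(clause_projector(n, cl) for cl in clauses)
    return H.min()

# (x1 or x2 or not x3) and (not x1 or x3): satisfiable -> zero-energy ground state
print(ground_energy(3, [(1, 2, -3), (-1, 3)]))   # 0.0
# x1 and not x1: unsatisfiable -> strictly positive ground energy
print(ground_energy(1, [(1,), (-1,)]))           # 1.0
```

Since every $H_i$ here is diagonal, the quantum problem degenerates to classical brute force; the quantum annealing strategy of [7] becomes nontrivial once the $H_i$ fail to commute.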
Quantum annealing-based algorithms have been used to implement $k$-SAT problems, in particular the random SAT problem [13, 14]. Here, the goal is to determine the probability that a given statement of a certain form is true. The problem we have considered is related in an obvious way. Rather than looking at classical boolean statements, we are considering the satisfiability of ‘quantum propositions’ such as $d_{O_i} |TFD\rangle = 0$. Our problem becomes the classical random satisfiability problem when we restrict to matrices having integral entries of the appropriate form. It would be interesting to explore further how the quantum version of random satisfiability relates to the classical one.
Another avenue worth exploring is the relationship between our results and machine learning. One could think of the structure of our $d$ operators as encoding a mapping from the left to right system,
$$\mathcal{O} \rightarrow e^{-\beta H/2} \Theta \mathcal{O}^\dagger \Theta^{-1} e^{\beta H/2}. \quad (7.5)$$
For the systems of interest this mapping is very complicated.
We can think of the operators $d_{\mathcal{O}_k}$ as representing training data stated in an operator language. The system must then learn the full mapping. We have shown that a state annihilated by the $d$ operators constructed from a set of operators $\mathcal{O}_k$ will automatically be annihilated by $d$ operators formed from any operator that can be generated from commutators of the $\mathcal{O}_k$. We expect that generically a small number of operators $\mathcal{O}_k$ will generate the entire algebra. Therefore, the system learns the correct mapping from a small amount of ‘training data.’
Moreover, in this formulation of the machine learning problem we see that there is a deep connection between successful learning and the presence of firewalls: successful learning is encoded in a state with smooth horizon annihilated by all $d$ operators, where the mapping between left and right is encoded in the entanglement structure of the state.
In addition, embedding machine learning problems into quantum mechanics may be of theoretical value since it allows one to phrase the problem of machine learning purely in the language of matrices. Moreover, a ‘generic’ learning problem should just reduce to the properties of large random matrices, about which much is known.
7.3 Future directions
We have provided reasonably strong evidence that, given two copies of any quantum mechanical system obeying the Eigenstate Thermalization Hypothesis, a simple Hamiltonian exists whose ground state is the thermofield double state. This Hamiltonian generically has an energy gap of order the temperature. A number of open questions and puzzles remain.
**Bulk Dual of TFD Hamiltonian.** We primarily thought of our TFD Hamiltonian as a means to prepare a particular state. However, suppose that we prepare the system in the ground state of the TFD Hamiltonian and then continue to evolve *with the TFD Hamiltonian*. In holographic examples, one would like to know the bulk dual. The system has time translation invariance. It has a coupling between the left and right CFT’s, but in the case of interest the coupling is relevant, so in the UV the Hamiltonian factorizes.
The obvious guess is that the bulk dual is a static traversable wormhole with two AdS asymptotic regions. Indeed, this is what happens in the closely related construction of Maldacena and Qi [8] within the SYK model. However, there is a puzzle. Maldacena and Qi coupled a large number of fields between the two CFT’s, with a coupling of order one. We have considered coupling a small number of fields, but with a coupling of order the central charge.
This is not a problem for our CFT analysis, since it is not perturbative in the coupling. But it does make the bulk dual mysterious.\footnote{We thank Juan Maldacena for discussions on this point.} For one thing, the strong coupling takes us out of the supergravity regime (see also [15] where a similar issue arose). In addition, existing constructions of traversable wormholes rely on Casimir energy to violate the Null Energy Condition. We expect that increasing the coupling between the two boundaries will only enhance the Casimir-type energy until the coupling reaches order one; stronger coupling would not be expected to allow for more negative Casimir energy with a small number of fields.
For these reasons, we cannot, at this point, provide a gravitational description of our Hamiltonian.
**Explicit analysis in CFT.** We have shown explicitly in section 3.5 how our construction works within the space of primaries of the Ising model. However, it would be very interesting to experiment with our simple Hamiltonian, and to do the full analysis including descendants. In some cases, the left-right interactions we add can be understood as irrelevant. This raises the interesting question: is there a simple UV complete theory that has the Ising model TFD as its ground state? A combination of the analysis presented in section 3.5 and guesswork suggests that a plausible candidate for this is the theory
\begin{equation}
H_{\text{TFD}} = H_L^0 + H_R^0 - a \int dx\, \sigma_L(x) \sigma_R(x) - b \int dx\, \epsilon_L(x) \epsilon_R(x), \tag{7.6}
\end{equation}
where $\sigma_{L,R}$ and $\epsilon_{L,R}$ are primary operators in the left and right CFTs, and $H^0$ is the Ising Hamiltonian on each side.
More generally, it is interesting to analyze our TFD Hamiltonian in strongly-coupled CFTs. In these theories, instead of using our ETH type arguments, one could use exact and statistical results for the OPE coefficients to diagnose whether the TFD is the ground state of a simple Hamiltonian. Our analysis in this paper suggests that it is, but this could be more rigorously shown or disproven using CFT results.
In CFTs with a large number of primaries, a specific task is to check our claim that a small number of operators is sufficient to pick out the TFD state. We discussed the Commutator Property in section 2. Based on the intuition gained from this property and some preliminary analysis, we expect a statement of the following kind to hold in general CFTs.
Given two primary operators in a CFT on the Riemann sphere, $O_1(z, \bar{z})$ and $O_2(z, \bar{z})$, with conformal dimensions $\Delta_i$ and spins $s_i$ respectively, if
$$d_1(z, \bar{z}) \mid \text{TFD} \rangle = d_2(z, \bar{z}) \mid \text{TFD} \rangle = 0, \quad (7.7)$$
where $d_i(z, \bar{z}) \equiv O^{L}_i(z, \bar{z}) - e^{-\beta H/2} \Theta O^{R\,\dagger}_{i}(z, \bar{z}) \Theta^{-1} e^{\beta H/2}$; then
$$d_k(z, \bar{z}) \mid \text{TFD} \rangle = 0, \quad (7.8)$$
where $d_k(z, \bar{z}) \equiv O^{L}_k(z, \bar{z}) - e^{-\beta H/2} \Theta O^{R\,\dagger}_{k}(z, \bar{z}) \Theta^{-1} e^{\beta H/2}$ and $O_k(z, \bar{z})$ is any operator that appears in the OPE of $O_1$ and $O_2$,
$$O_1(z, \bar{z})O_2(w, \bar{w}) \sim \sum_k \frac{c_{12k}\, O_k(w, \bar{w})}{(z-w)^{h_1+h_2-h_k} (\bar{z}-\bar{w})^{\bar{h}_1+\bar{h}_2-\bar{h}_k}}, \quad (7.9)$$
with the condition that $\Delta_k < \Delta_1 + \Delta_2$ and $s_k > s_1 + s_2 - 1$. We have not proved this statement yet, but we intend to return to it in future work.
**Errors.** In order to move towards an experimental realization of our procedure, it is important to understand how robust our construction is against errors of various kinds. There are several possible sources. In the general definition of the TFD Hamiltonian (2.6), if we fail to include a sufficient number of operators $d_i$, the ground state may be highly degenerate, and the actual state we prepare by cooling the system might be far from the TFD state. Another source of error is the ambiguity in the form of the exact TFD Hamiltonian, discussed in section 2. Different forms of the TFD Hamiltonian have different gaps; a problem could arise if the low energy spectrum is similar to that of a glass. Finally, there will be errors in the ground state due to practical constraints, coming either from imperfections in the cooling technique used in experiments or from not letting the system cool for a long enough time. One may hope to find a theoretical model that incorporates at least some of these errors and makes quantitative statements, but we leave this for future work.
**Acknowledgments**
We have benefited from many helpful discussions with colleagues over the course of this work, including J. de Boer, M. Cheng, J. Maldacena, X.-L. Qi, S. Shenker, J. Stout, L. Susskind, B. Swingle, and E. Verlinde. DMH is supported in part by the ERC Starting Grant GENGEOHOL. BF is supported in part by the ERC Consolidator Grant QUANTIVIOL. SFL would like to acknowledge financial support from the Netherlands Organization for Scientific Research (NWO).
**A Teleportation with fermions**
Here we will consider the tensor product of two free fermion field theories and study the effect of a left-right double-trace-like interaction on the left-right correlator. This is an interesting example because the answer can be calculated analytically to all orders in $g$, the coupling constant of the double-trace term. Moreover, this case is relevant for an experimental implementation of our ideas in the Ising model. The total action of the doubled theory is
\begin{align}
S &= S_L + S_R + S_{\text{int}}, \\
S_{L,R} &= \int d^d x\, \bar{\psi}_{L,R} (i \not{\partial} - m) \psi_{L,R}, \\
S_{\text{int}} &= \int d^d x A^\mu (\bar{\psi}_L \gamma_\mu \psi_R + \bar{\psi}_R \gamma_\mu \psi_L) \equiv gV.
\end{align}
where the $S_{\text{int}}$ term can be thought of as descending from (a Legendre transform of) the TFD Hamiltonian. The photon profile $A^\mu$ is constant in space and plays the role of a quench when we start the teleportation experiment; we take it to have the profile $A^\mu = \delta^{\mu 0} \alpha(t)$. We now want to calculate the Feynman propagator between the left and the right fields. For this, we first obtain the equations of motion from the action (A.1):
\begin{equation}
(i \not{\partial} - m) \psi_{L,R} + \alpha(t) \gamma_0 \psi_{R,L} = 0.
\end{equation}
Now, switch to the basis $\psi_\pm = \psi_L \pm \psi_R$ and define $\psi_\pm = e^{\pm i \int_{-\infty}^{t} \alpha(t') dt'} \tilde{\psi}_\pm$. The new fields $\tilde{\psi}_\pm$ obey the free equation of motion
\begin{equation}
(i \not{\partial} - m) \tilde{\psi}_\pm = 0.
\end{equation}
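For the reader's convenience, here is the intermediate step left implicit above. Adding and subtracting the two equations of motion gives, in the $\pm$ basis,
\[
(i \not{\partial} - m) \psi_\pm \pm \alpha(t) \gamma_0 \psi_\pm = 0,
\]
and acting with $i \gamma^0 \partial_t$ on $\psi_\pm = e^{\pm i A(t)} \tilde{\psi}_\pm$, with $A(t) \equiv \int_{-\infty}^{t} \alpha(t')\, dt'$, produces a term $\mp \alpha(t) \gamma_0 \psi_\pm$ that cancels the quench term exactly.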
Thus, the system is integrable for any profile $\alpha(t)$, and it is not difficult to compute tunneling amplitudes between the two CFTs exactly. To do this, first expand the transformed fields in terms of creation and annihilation operators:
\begin{equation}
\tilde{\psi}_\pm = \int \frac{d^{d-1} k}{(2\pi)^{d-1}} \frac{1}{2w} \sum_s \left( a_\pm^s(k) u^s e^{ikx} + b_\pm^{s\dagger}(k) v^s e^{-ikx} \right),
\end{equation}
where the ladder modes satisfy
\begin{align}
\{ a_\pm^r(p), a_\pm^{s\dagger}(q) \} &= \{ b_\pm^r(p), b_\pm^{s\dagger}(q) \} = 4w(2\pi)^{d-1} \delta^{(d-1)}(p-q) \delta^{rs}, \\
\sum_s u^s \bar{u}^s &= -\not{p} + m, \quad \sum_s v^s \bar{v}^s = -\not{p} - m.
\end{align}
The extra factor of 2 above comes about in going from the $L,R$ basis to the $\pm$ basis. The thermofield double state in the new basis is
\begin{align}
| \text{TFD} \rangle &= \frac{1}{\sqrt{Z}} \exp\left[ \int \frac{d^{d-1} k}{(2\pi)^{d-1} (2w)}\, e^{-\frac{\beta w}{2}} \left( a_L^\dagger a_R^\dagger + b_L^\dagger b_R^\dagger \right) \right] | 0, 0 \rangle, \\
&= \frac{1}{\sqrt{Z}} \exp\left[ -\frac{1}{2} \int \frac{d^{d-1} k}{(2\pi)^{d-1} (2w)}\, e^{-\frac{\beta w}{2}} \left( a_+^\dagger a_-^\dagger + b_+^\dagger b_-^\dagger \right) \right] | 0, 0 \rangle.
\end{align}
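The factor of $-\frac{1}{2}$ in the second line follows directly from the basis change $a_{L,R} = \frac{1}{2}(a_+ \pm a_-)$ (the same change of basis responsible for the extra factor of 2 in the anticommutators): using $(a_+^\dagger)^2 = (a_-^\dagger)^2 = 0$ and anticommutativity,
\[
a_L^\dagger a_R^\dagger = \frac{1}{4} \left( a_+^\dagger + a_-^\dagger \right) \left( a_+^\dagger - a_-^\dagger \right) = \frac{1}{4} \left( - a_+^\dagger a_-^\dagger + a_-^\dagger a_+^\dagger \right) = -\frac{1}{2}\, a_+^\dagger a_-^\dagger,
\]
and similarly for the $b$ modes.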
All the time evolution will be absorbed into the operator insertions, so we can work with the zero-time $| \text{TFD} \rangle$. The following formulas are useful and may be checked simply by expanding the exponent to at most quadratic order:
\[
\langle \text{TFD} | a^\dagger_\pm(k) a^\dagger_\pm(q) | \text{TFD} \rangle = 0,
\]
\[
\langle \text{TFD} | a^\dagger_\pm(k) a_\pm(q) | \text{TFD} \rangle = \rho_f(w) 4w (2\pi)^{d-1} \delta^{d-1}(k-q),
\]
\[
\langle \text{TFD} | a_\pm(k) a^\dagger_\pm(q) | \text{TFD} \rangle = (1 - \rho_f(w)) 4w (2\pi)^{d-1} \delta^{d-1}(k-q),
\]
\[
\langle \text{TFD} | a^\dagger_-(k) a^\dagger_+(q) | \text{TFD} \rangle = e^{\frac{\beta w}{2}} \rho_f(w) 4w (2\pi)^{d-1} \delta^{d-1}(k-q),
\]
\[
\langle \text{TFD} | a_+(k) a_-(q) | \text{TFD} \rangle = e^{\frac{\beta w}{2}} \rho_f(w) 4w (2\pi)^{d-1} \delta^{d-1}(k-q),
\]
where \( \rho_f = (1 + e^{\beta w})^{-1} \) is the usual thermal fermion number density. Now let’s consider the left-right Feynman propagator:
\[
G_{LR}(x', x) \equiv i \langle \text{TFD} | T \psi_L(x') \overline{\psi}_R(x) | \text{TFD} \rangle,
\]
where \( x \equiv (t, \vec{x}) \). The time ordering is just the usual time ordering. Now, switch to the \( \pm \) basis and compute using the formulas above:
\[
G_{LR}(x', x) = \frac{i}{4} \langle T \left( \psi_+(x') \overline{\psi}_+(x) - \psi_+(x') \overline{\psi}_-(x) + \psi_-(x') \overline{\psi}_+(x) - \psi_-(x') \overline{\psi}_-(x) \right) \rangle,
\]
\[
= \frac{i}{4} \left\langle T \left( e^{i(A(t') - A(t))} \tilde{\psi}_+(x') \overline{\tilde{\psi}}_+(x) - e^{i(A(t') + A(t))} \tilde{\psi}_+(x') \overline{\tilde{\psi}}_-(x)
+ e^{-i(A(t') + A(t))} \tilde{\psi}_-(x') \overline{\tilde{\psi}}_+(x) - e^{-i(A(t') - A(t))} \tilde{\psi}_-(x') \overline{\tilde{\psi}}_-(x) \right) \right\rangle.
\]
Using (A.7), we see that the first term is
\[
\langle T \left( \tilde{\psi}_+(x') \overline{\tilde{\psi}}_+(x) \right) \rangle \equiv \theta(t' - t) \langle \tilde{\psi}_+(x') \overline{\tilde{\psi}}_+(x) \rangle - \theta(t - t') \langle \overline{\tilde{\psi}}_+(x) \tilde{\psi}_+(x') \rangle,
\]
\[
\langle \tilde{\psi}_+(x') \overline{\tilde{\psi}}_+(x) \rangle = 2 \int \frac{d^{d-1}k}{(2\pi)^{d-1}(2w)} \left( (-\not{k} + m)(1 - \rho_f) e^{ik(x' - x)} - (\not{k} + m)\rho_f e^{-ik(x' - x)} \right),
\]
\[
\langle \overline{\tilde{\psi}}_+(x) \tilde{\psi}_+(x') \rangle = 2 \int \frac{d^{d-1}k}{(2\pi)^{d-1}(2w)} \left( (-\not{k} + m)\rho_f e^{ik(x' - x)} - (\not{k} + m)(1 - \rho_f)e^{-ik(x' - x)} \right).
\]
Combining these into a single propagator using the \( i\epsilon \) prescription we get
\[
\langle T \left( \tilde{\psi}_+(x') \overline{\tilde{\psi}}_+(x) \right) \rangle \equiv 2G_0(x, x') + 2G_{\text{ent}}(x, x'),
\]
\[
= 2 \int \frac{d^d k}{(2\pi)^d} \left( \frac{(-\not{k} + m)e^{ik(x' - x)}(1 - \rho_f(w))}{k^2 + m^2 - i\epsilon} - \frac{(\not{k} + m)\rho_f(w)e^{-ik(x' - x)}}{k^2 + m^2 - i\epsilon} \right).
\]
Here, \( G_0(x, x') \) denotes the zero temperature Feynman propagator
\[
G_0 \equiv \int \frac{d^d k}{(2\pi)^d} \frac{(-\not{k} + m)e^{ik(x' - x)}}{k^2 + m^2 - i\epsilon},
\]
and $G_{\text{ent}}(x, x')$ denotes a piece induced by entanglement between the left and the right theory at finite temperature
$$G_{\text{ent}} \equiv - \int \frac{d^d k}{(2\pi)^d} \frac{\rho_f(w) \left( e^{i k (x' - x)} (-\not{k} + m) + e^{-i k (x' - x)} (\not{k} + m) \right)}{k^2 + m^2 - i \epsilon}. \tag{A.12}$$
The formula for the $--$ propagator is identical; the only difference is the dressing by $e^{iA}$. Next, it is easy to see that mixed correlators like $\langle T \tilde{\psi}_\pm(x') \overline{\tilde{\psi}}_\mp(x) \rangle$ are zero. Plugging all this back into the expression (A.10), we find
$$G_{LR}(x, x') = - \sin(\Delta A) (G_0 + G_{\text{ent}}), \tag{A.13}$$
where $\Delta A = A(t') - A(t) = \int_t^{t'} \alpha(s) ds$. Again, the first piece is trivial and comes about simply due to the direct interaction. The second is due to the entanglement.
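As a quick numerical sanity check on the algebra leading to (A.13), the following sketch (Python for convenience; `G0`, `Gent`, and `dA` are arbitrary placeholder values, not physical quantities) verifies that the dressed combination of $\pm$ propagators collapses to $-\sin(\Delta A)\,(G_0 + G_{\text{ent}})$:

```python
import cmath
import random

random.seed(0)

# Check of the algebra behind (A.13): with
#   <T psi~_+(x') psibar~_+(x)> = <T psi~_-(x') psibar~_-(x)> = 2*(G0 + Gent)
# and vanishing mixed correlators, the dressed sum reduces to
#   G_LR = (i/4) * (e^{i dA} - e^{-i dA}) * 2*(G0 + Gent)
#        = -sin(dA) * (G0 + Gent).
for _ in range(5):
    G0 = complex(random.random(), random.random())
    Gent = complex(random.random(), random.random())
    dA = random.uniform(-3.0, 3.0)   # dA = A(t') - A(t)
    P = 2 * (G0 + Gent)              # the ++ (and --) propagator
    G_LR = (1j / 4) * (cmath.exp(1j * dA) * P - cmath.exp(-1j * dA) * P)
    assert abs(G_LR - (-cmath.sin(dA) * (G0 + Gent))) < 1e-12
```

The check is exact up to floating-point error, since it only exercises the trigonometric identity $e^{i\Delta A} - e^{-i\Delta A} = 2i\sin(\Delta A)$.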
For entangled free theories, modified left-right correlators are enough to diagnose how the signal propagates from one theory to another after the perturbation. But for chaotic theories, a better diagnostic is the commutator,
$$C = \langle \Omega | e^{-i g V} [\phi_R(t_R), \phi_L(t_L)] e^{i g V} | \Omega \rangle, \tag{A.14}$$
where $\phi_{L,R}$ represent the operators we are trying to send through the wormhole, $| \Omega \rangle$ is the vacuum state that we have created, and $t_R$, $t_L$ are the times when we create/measure the operators. For our toy model, it is straightforward to calculate this using the method outlined above. It would be interesting to analyze what happens when $| \Omega \rangle$ is not exactly the TFD state, but we leave this for the future.
**Open Access.** This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
**References**
[1] P. Gao, D.L. Jafferis and A. Wall, *Traversable Wormholes via a Double Trace Deformation*, *JHEP* **12** (2017) 151 [arXiv:1608.05687] [INSPIRE].
[2] J. Maldacena, D. Stanford and Z. Yang, *Diving into traversable wormholes*, *Fortsch. Phys.* **65** (2017) 1700034 [arXiv:1704.05333] [INSPIRE].
[3] L. Susskind, *Dear Qubitzers, GR=QM*, arXiv:1708.03040 [INSPIRE].
[4] J.M. Maldacena, *Eternal black holes in anti-de Sitter*, *JHEP* **04** (2003) 021 [hep-th/0106112] [INSPIRE].
[5] J. Maldacena and L. Susskind, *Cool horizons for entangled black holes*, *Fortsch. Phys.* **61** (2013) 781 [arXiv:1306.0533] [INSPIRE].
[6] E. Farhi, J. Goldstone and S. Gutmann, *Quantum Adiabatic Evolution Algorithms versus Simulated Annealing*, quant-ph/0201031.
[7] K.L. Pudenz, G.S. Tallant, T.R. Belote and S.H. Adachi, *Quantum Annealing and the Satisfiability Problem*, arXiv:1612.07258.
[8] J. Maldacena and X.-L. Qi, *Eternal traversable wormhole*, arXiv:1804.00491 [INSPIRE].
[9] B. Swingle and J. McGreevy, *Renormalization group constructions of topological quantum liquids and beyond*, *Phys. Rev. B* **93** (2016) 045127 [arXiv:1407.8203] [INSPIRE].
[10] B. Swingle and J. McGreevy, *Mixed s-sourcery: Building many-body states using bubbles of Nothing*, *Phys. Rev. B* **94** (2016) 155125 [arXiv:1607.05753] [INSPIRE].
[11] R. van Breukelen and K. Papadodimas, *Quantum teleportation through time-shifted AdS wormholes*, *JHEP* **08** (2018) 142 [arXiv:1708.09370] [INSPIRE].
[12] S.E. Venegas-Andraca, W. Cruz-Santos, C. McGeoch and M. Lanzagorta, *A cross-disciplinary introduction to quantum annealing-based algorithms*, *Contemp. Phys.* **59** (2018) 174.
[13] C. Barrett, R. Sebastiani, S. Seshia and C. Tinelli, *Satisfiability Modulo Theories*, in *Handbook of Satisfiability*, Frontiers in Artificial Intelligence and Applications **185**, IOS Press (2009) [DOI:10.3233/978-1-58603-929-5-825].
[14] T. Hogg, *Adiabatic quantum computing for random satisfiability problems*, *Phys. Rev. A* **67** (2003) 022314 [quant-ph/0206059].
[15] P. Gao and H. Liu, *Regenesis and quantum traversable wormholes*, arXiv:1810.01444 [INSPIRE].
Abstract: This paper describes an art and Afrofuturism art experience that took place during the summer of 2020. Led by an art museum educator, the virtual experience was held over Zoom with a group of ten white adults. The art experience focused on alternative narratives and introduced participants to Afrofuturism as contemporary artistic practice and pedagogical approach. A critical multiculturalism theoretical framework informed the experience, and participants analyzed Afrofuturist art and representations in mass media to interrogate the ways that whiteness influences conceptions of the future in western culture and their own lives. Participants built on what they learned to create collages where they imagined more equitable futures developed from the Afrofuturist themes discussed in the experience.
Who Belongs in the Future: Afrofuturism, Art Education, and Alternative Narratives
Emily Hogrefe-Ribeiro
University of Georgia
Correspondence regarding this article may be sent to the author:
firstname.lastname@example.org
I loved putting my mind in a place of envisioning a positive future without the limits of practicality or likeliness or the challenges of existing structures. And I loved seeing different examples of how Black artists have visualized this.
— Beatrice Mora, Art and Afrofuturism participant
The future seems unrelentingly visible in contemporary discourse. It snaked through the social and political unrest in the United States during the summer of 2020, as massive numbers of protestors took to the streets demanding a different future and an end to racial violence. It shapes scientific dialogue into warning as humans careen towards a hotter, more unstable future in the face of climate change, and it lingers over our everyday during a pandemic, especially in a country seemingly dead-set on making choices that portend evolving economic and social disasters in a not-so-distant future. The future is made and unmade in the present, and what that future looks like depends on who is telling the story. Artists have always depicted the future with imagination, hope, and maybe even a little trepidation.
Critical multiculturalism asks art educators to reconsider the media, language, and aesthetics that we present to students in a way that critiques and invests in alternative ways of knowing and understanding (Acuff, 2015; Knight, 2006). The theory asks us to confront how our work impacts the future. Afrofuturism is a conceptual and pedagogical approach for applying a critical multicultural theoretical framework. As Acuff (2020) explains, “Afrofuturism is about the utopian formulation of a possible model of something that does not yet exist. Re-envisioning semantics in our future art curriculum is key to transgressing repressive social norms and power systems” (p. 20). Having researched Afrofuturism in discussion with artist Wangechi Mutu’s collages for my art history master’s thesis, I was inspired by Acuff’s adaptation of Afrofuturism as a pedagogy, and I approached this lesson as a way to re-envision my curriculum and teaching practices with Afrofuturism, through the theoretical lens of critical multiculturalism.
When I completed this project, I managed the school and teacher programs at the Georgia Museum of Art, the state museum of Georgia and the University of Georgia campus museum.
I am a white, female museum educator and doctoral student who works primarily with Black and Brown K-12 students. My interest in critical multiculturalism and museum education initially came from my desire to create relevant, critical school programming within the museum. Acuff and Evans (2014) describe critical multiculturalism in art museums as creating “counter-discursive spaces” that destabilize the institutional to break apart ossified and entrenched dominant ideologies and systems of power (p. xxviii). I am always looking for ways to problematize the white, western metanarratives portrayed in art museum galleries, putting critical multiculturalism theory to work in the art museum.
Because this Art and Afrofuturism project was completed during the summer of 2020, I had limited access to participants, and I worked with a group of adult, white, female learners. These participants reflect the identities of art museum repeat-visitors, volunteer docents and most museum educators in the United States, and I wanted to know how critical multiculturalism theory might inform programming for this audience. I wondered if it might be possible to teach about contemporary Afrofuturist art—not just to teach about Afrofuturism—but to use its themes and works of art to challenge whiteness, what Spillane (2015) describes as “white power, knowledge and privilege” (p.57) and prioritize alternative ways of knowing. How could I teach a lesson that used art and visual culture to get white participants to interrogate their own beliefs and develop answers to challenging questions like: How does race impact how we understand our pasts and the future? How do Black contemporary artists use art to address current and historical social inequity
through the lens of Afrofuturism? Why is it important for Black artists to imagine an Afrofuture? How can we use Afrofuturism to analyze current events? What does an equitable speculative future look like for each of us?
In this paper, I discuss the Art and Afrofuturism art experience, or lesson, which asked a group of white participants to grapple with the complicated, exclusionary power systems that scaffold how we see and describe the future through discussion, visual analysis of a contemporary work of art, and a collage artmaking project. The participants were an organized group who were interested in taking part in the experience. Based in Afrofuturism, the art experience discussed the central topics of race, utopia, liberation, and justice with a group of ten white adults. The art experience explored ways in which personal conceptions of the past and future and cultural narratives are coded as white by looking at the way participants had the privilege of framing those ideas without race. Each element of the lesson unpacked and emphasized the need for Black artists to imagine alternative spaces. Building a critical multicultural understanding of these issues, the group examined the ways that Afrofuturist art imagines a different future while drawing attention to the social inequity of the present and the past. As expressed in the opening quote from a participant reflection, the artmaking project made space for learners to use artmaking to articulate their own equitable, utopian futures based in alternative ways of knowing. It also inspired surprising discussions and realizations from all the participants—including me as the facilitator.
Theoretical Framework
Critical multiculturalism grounded all aspects of the art experience. Critical multiculturalism is an educational theory that finds its roots in Critical Race Theory. A critical multiculturalism framework destabilizes systemic inequity and dominant power structures (Acuff, 2013). The need for critical multiculturalism arose from the term “multiculturalism” morphing from a transformative pedagogy to an overused and desaturated buzzword. bell hooks's *Teaching to Transgress* (1994) describes multiculturalism as the global acceptance of decentering the west, which compels educators to focus on the issue of voice: “Who speaks? Who listens? And why?” (p. 40). Multicultural education theory was created to provide all students, regardless of race, gender, or class, an equal opportunity to learn. Over time, “multiculturalism” mutated into a word used for political correctness. The multicultural education framework has been misappropriated, and its powerful ideas diluted into a mainstream framework that doesn’t threaten “the way things are” and that continues a deracialized discourse, perpetuating the inequalities the theory was created to address. In art education, multiculturalism came over time to signal a benevolent inclusivity that does not critique, or even address, power systems but instead perpetuates harmful, negligent narratives through an embrace of neutrality and an emphasis on cultural tolerance, with attendant dangers of reinforcing stereotypes and cultural appropriation.
The alternative framework of critical multiculturalism re-centers the complex work of analyzing oppression, institutionalized power structures and the subjugation of non-dominant cultural knowledge and voices (Acuff, 2013; 2015). The theory specifically identifies race as the locus for these intersecting power dynamics and seeks to pull apart hegemonic narratives and combat subjugation. Critical multiculturalism eschews universalized narratives and embraces personal narrative to position cultural difference within these larger systemic contexts. Its activist origins ask educators to center a wider array of voices and critique the unequal systems that have silenced and erased those perspectives. Critical multiculturalism directs educators to ask different questions including: Is this true? Who says so? Who benefits most when people believe it is true? How are we taught to accept that it is true? What are different ways of looking at the problem? (Acuff, 2018). I situated the Art and
Afrofuturism art experience within these guiding questions.
Through discussion, art analysis, and artmaking, the lesson inhabited a (virtual) critically multicultural space of constructive confrontation and critical interrogation (hooks, 1994). The lesson challenged and subverted the group’s preconceived cultural assumptions about ideas of the past and the future in a way that critiques power (Acuff, 2015). It helped learners identify for themselves the ways that hegemonic and White supremacist knowledge dominates their understandings of the future. Critical multiculturalism further informed the experience in the artmaking project. A collage activity focused on personal narrative and experience, then invited learners to visualize and articulate their own version of a disrupted future that exists outside the dominant power structures.
Afrofuturism, a term created by cultural critic Mark Dery (1994) in “Black to the Future,” provided the central pedagogical tool for the experience. Afrofuturism imagines a future where Black people are transformed from the racial, social, and economic violence of the past and present to live in futures that value Black existence and African diasporic culture (Acuff, 2020). It is a critically multicultural pedagogy that “disrupt(s) universalized knowledge and counter(s) normalized narratives” (Acuff, 2015, p. 33). By reimagining technology, identity, and liberation, Afrofuturism posits a future where “Black identity does not have to be negotiated with awful stereotypes, a dystopian view of the race, an abysmal sense of powerlessness, or a reckoning of hardened realities;” it instead declares that “fatalism is not a synonym for blackness” (Womack, 2013, p. 9). Afrofuturism reframes dominant discussions about the future and contemporary art to encompass a lived experience beyond existing structures. By adopting this lens, the Art and Afrofuturism lesson asked participants to learn and to think about a future outside traditional narratives.
Acuff (2020) explains that “Afrofuturism requires art teachers to rethink the media that they cover in their art curriculum. A future art curriculum cannot be led by Western ideals” (p. 19). This maxim dictated how I chose components for the art experience. Content in each section incorporated and prioritized Black voices. Multimedia clips from the movie “Malcolm X” and an interview with former First Lady Michelle Obama encouraged participants to draw their insights and distinctions directly from lived experiences described by Black people. The work of art we discussed, Ellen Gallagher’s *Abu Simbel* (Figure 1), itself exemplifies a rethinking of Western ideals. Gallagher, a contemporary Black American female artist, completed the work by performing an artistic intervention on a photogravure of Abu Simbel that she found at the Freud Museum in London (Harvard Museums, n.d.). She manipulated a Western representation of an ancient African location, reinterpreting it with racial, historical, and futurist iconography.
Figure 1: Abu Simbel, by Ellen Gallagher. Harvard Art Museums/Fogg Museum, Margaret Fisher Fund.
In addition, Afrofuturist pedagogical elements encouraged students to “develop their futures through art curriculum” (Acuff, 2020, p. 15). The art
---
1 All images of Abu Simbel in this article from Harvard Art Museums/Fogg Museum, Margaret Fisher Fund, which grants permission for scholarly use.
experience encouraged this by creating space for revision and adaptation in participants’ collage making as they continued to engage with Afrofuturist theory and aesthetics. With the art project, any inclination to create work that engaged with stereotypically “African” imagery or motifs was discouraged, and participants were reminded they were not creating Afrofuturist works of art. Instead, participants were invited to make art that adopted the Afrofuturist language of possibility, liberation, and justice to represent a future that rejects cultural subjugation, White supremacy, and heteropatriarchy in our society.
Project Description
The Art and Afrofuturism art experience was developed based on my experience as a museum educator. It emphasized close looking at a single work of art and encouraged personal and collective meaning-making through a dialogic style of learning. The lesson took two and a half hours and engaged a White audience of mostly women in their 20s and 30s. This community of college-educated adult learners benefits from social and cultural privilege. The group had various levels of visual literacy—with some being experienced in discussing art in a group or class and others being unfamiliar with the practice. Despite this, all the participants are regular to semi-regular museum visitors. In relation to the concepts the lesson would introduce, most of the group felt comfortable with social justice terms and ideas. Some participants had heard of Afrofuturism, and a few were completely new to the idea.
The overall goal of the art experience was to develop critical multicultural understanding and promote cross-cultural dialogue and learning. It introduced the learners to Afrofuturist theory, pedagogy, and art. Because of the COVID-19 pandemic, the experience took place on a Zoom call that I led by sharing my computer screen. I adapted a virtual tour format (based on current, evolving best practices) that the Georgia Museum of Art and other museums were using due to social distancing requirements. The virtual tour used a presentation of images of artwork and other media to prompt close looking, discussion, and other engagement with works of art. Participants provided their own materials for collage making, and I created a PowerPoint presentation for our lesson. The participants were engaged learners and active listeners, and the experience helped them contextualize current events and challenge their assumptions about conceptions of the future.
The art experience opened with a writing prompt, followed by a group discussion, in which participants described their pasts and futures in 5–10 words. The writing activity rooted the lesson in personal experience. Some common themes in participants' reflections were sunshine, snacks, loving pasts, teenage angst in the past, and hope or concern for the future. These ideas did not explicitly or implicitly relate to race. The next step of the discussion introduced clips from *Malcolm X* and an interview with Michelle Obama. In each clip, Blackness plays an integral role in each person’s understanding of the past and of how other (white) people dictate or describe their futures for them. Malcolm X reflects on being told that he couldn’t be a lawyer because he’s Black, and Michelle Obama describes a guidance counselor who made assumptions about her race and socioeconomic background and told her she “wasn’t Princeton material” (CBS This Morning, 2018).
After we watched these clips, I asked participants to draw a distinction between our discussion of our pasts and the life experiences described in the video. I worked to get the group to tease out the differences between their white understanding of the future and the explicitly raced descriptions of the future dictated to Black people in the *Malcolm X* and Michelle Obama interview clips. This got the group to consider how “knowledge of the dominant power is normalized, and consequently universalized” (Acuff, 2013, p. 220). This discussion primed the group to begin exploring alternative cultural knowledge in the clip from the film *Black Panther*.
Next, we watched and discussed aesthetic and conceptual choices in a scene from *Black Panther*—
an Afrofuturist film (Ryzik, 2018; Staff, 2018). The film reimagines traditional African architecture and clothing in a way that projects African cultural heritage powerfully into the future. During this discussion, participants compiled a series of observations about how *Black Panther* imagined an alternative present, in a way distinct from the prior videos and their own initial descriptions. Participants noted that the film suggested an independent future of imaginary spaces that weren’t necessarily new but that did challenge established racial, societal, and natural hierarchies: only Black characters were present, characters greeted each other with respect regardless of class, the ruler was female, and despite clear technological advancement in the visualization of Wakanda, it seemed to prioritize and respect the natural world.
This analysis of mass media transitioned into a close-looking discussion of Ellen Gallagher’s *Abu Simbel*. I introduced the work using the inquiry-based teaching method Visual Thinking Strategies (VTS) (Housen, Yenawine, & Brookshire, 2018). This learner-directed teaching strategy invites participants to make observations and connections for themselves instead of adopting the “banking style of education” (Freire, 1970). The participants developed a complex understanding of the work by finding answers to a series of three open-ended, repeating questions.\(^2\) The group considered what they had learned about Afrofuturism as they made observations about the work, and they didn’t ask for context or additional information because they were deeply invested in figuring out what was going on together. One overarching analysis developed by the end of the VTS exercise: participants noted parallels between alien abduction and the slave trade, and they surmised that the work of art was reimagining the existence of Black people in America as a result of slavery.
To introduce more context into the discussion, I centered the conversation with a description from
---
\(^2\) The three VTS curriculum questions are: what’s going on in this picture?; what do you see that makes you say that?; what more can we find?
Gallagher, who explains her work as “a tricked-out, multi-directional flow from Freud to ancient Egypt to Sun Ra to George Clinton” (Harvard Museums, n.d.). At this point I departed from a strict version of VTS and provided background information on visual elements of the work that they had repeatedly wondered about and played a trailer for *Space is the Place*, a blaxploitation film that inspired much of the work. By layering information into our discussion after participants had already analyzed the work themselves, I was able to emphasize an element of Afrofuturism that our discussion had previously overlooked—that the idea builds from visual and conceptual representations of the past. It is not just a reimagining of the past or just a utopian look forward. Almost everyone who participated noted that element as something new they learned about Afrofuturism.
After finishing up our analysis, I asked participants to begin working on a collage that pulled themes from our discussion of Afrofuturism into their works of art. I reiterated that we were not making Afrofuturist artwork. Instead, we were centering alternative narratives and representational strategies as a group of White artmakers. The collage activity encouraged learners to work like artists as they developed their renderings of an equitable speculative future. I then paused our collaging to start a discussion on the recent uprisings and protests including Black Lives Matter and the Defund the Police movements, connecting our exploration of art and artmaking to immediately relevant topics. After a thoughtful, critical discussion, participants went back to artmaking, revising their works of art based on a discussion of current events. After 40 minutes, everyone shared their collages and detailed what elements of Afrofuturism were reflected in their works of art.
**Project Findings**
I always thought of Afrofuturism as simply an imagination of a future without whiteness or the white lens. But I learned that it does draw on the past and focuses on injustice and oppression, which made me realize that Afrofuturism is the antithesis of Black erasure.
— Sam Busa, Art and Afrofuturism participant
The most surprising and satisfying element of this lesson happened when one of the participants challenged an assumption that I made about the Black Lives Matter protests and activist movements. I gave participants about 20 minutes to work on their collages initially, and then I stopped them, showed some images of the current protests, and asked how we might view Black Lives Matter through an Afrofuturist lens. After thinking about it for a few moments, one of the participants remarked that they didn’t think the protests were Afrofuturist at all. They stated that the activism is directed towards white people, with Black people making the very basic request of not being murdered. There didn’t seem to be anything emancipatory or liberating or separately and powerfully Black in asking for the bare minimum consideration as human beings.
Others chimed in that they agreed with the point, and I asked if anyone else had a different perspective. One person felt that the cultural reckoning created by the protests and movements was making space for Black joy and for Black lives not constrained or represented solely by oppression, and that this felt relevant. Someone else mentioned that the greater societal awareness and acceptance of the need for strictly Black spaces aligned with Afrofuturist ideas. Another participant pointed out that the BLM movement was demanding an end to inherited violence and generational trauma, echoing the Afrofuturist theme of referencing and then re-imagining the past for the future.
While I planned on introducing current events to get the participants to reconsider their collages and think more critically about Afrofuturism, the conversation did not go in the direction I originally anticipated. I thought participants would feel compelled to layer elements of current events into their collages. This did not occur, but the final discussion exemplified the Afrofuturist art educational strategy of working through the curriculum, which ultimately impacted the themes of their collages. For example, many focused the artmaking on representations of interiority—joy, space to grow—as a manifestation of the realizations they had during the art experience. The questions participants asked were beyond those that I could have anticipated as I planned the experience—the questions emerged through the lesson and had a profound impact on everyone involved in the art experience. Participants developed new tools to analyze and contextualize current events with the future in mind. In addition, the group did the work of challenging the existing power structures that demanded the need for protests, as well as unpacking the goals and impact of the movements.
Participants were able to articulate and center a Black future, activating critical multicultural theory as they confronted the way their previous ideation of the future circulated around the axis of Whiteness. This transformation was apparent in their collages. One participant went back to their original list of words for the future and built a collage by rethinking each term using their newly developed Afrofuturist lens. Another included a call to action and structural changes in their representations (Figure 2). The collage features elements of text that reference
privilege and wealth—calling into question who inherits these things and who does not. The participant used overlapping images of stars and the sky to indicate a different future filled with possibility, noting that she wanted the top of the collage to juxtapose the busy city scenes of the bottom to show something yet undiscovered.
Figure 2: Final collage from Art and Afrofuturism participant
In this paper, I have discussed an Art and Afrofuturism art experience that explored alternative narratives. The two-and-a-half-hour lesson was informed by critical multiculturalism theory and introduced participants to Afrofuturism through mass media depictions and artistic representations. The critical multiculturalism theoretical framework worked to challenge the expectation that the future is white in Western culture and asked participants to create a collage illustrating a more equitable future developed from the Afrofuturist themes discussed. In a post-lesson evaluation, participants reported finding the experience impactful and eye-opening. It confronted their ways of seeing the world, inspired a critical examination of current events, and offered the group space to think of a future that is something different. In the same way that I was re-envisioning the curriculum, the participants were re-envisioning their futures. The Art and Afrofuturism art experience created a “counter-discursive space” that challenged established systems of understanding race and visual culture. The discussions in which participants challenged their unexamined ideologies are discussions that white educators working with BIPOC students crucially need to have as well. In addition, providing anyone space to consider and create alternative, equitable futures offers a powerful opportunity for tumultuous times.
References
Acuff, J. B. (2013). Discursive underground: Re-transcribing the history of Art Education using critical multicultural education. Visual Inquiry, 2(3), 219–231. https://doi.org/10.1386/vi.2.3.219_1
Acuff, J. B. (2015). Failure to operationalize: Investing in critical multicultural art education. Journal for Social Theory in Art Education, 35, 30-43.
Acuff, J. B. (2016). ‘Being’ a critical multicultural pedagogue in the art education classroom. Critical Studies in Education, 59(1), 35–53. https://doi.org/10.1080/17508487.2016.1176063
Acuff, J. B. (2020). Afrofuturism: Reimagining art curricula for Black existence. *Art Education, 73*(3), 13–21. https://doi.org/10.1080/00043125.2020.1717910
Acuff, J. B., & Evans, L. (2014). *Multiculturalism in art museums today*. Rowman & Littlefield.
CBS This Morning. (2018, November 8). *Michelle Obama talks self-doubt, Princeton, and life after White House* [Video]. YouTube. https://youtu.be/Cn2B2-laxDI
Coogler, R. (Director). (2018). *Black Panther* [Film]. Marvel Studios.
Dery, M. (1994). Black to the future: Interviews with Samuel R. Delany, Greg Tate, and Tricia Rose. In *Flame wars: The discourse of cyberculture* (pp. 179–222). Duke University Press. https://doi.org/10.2307/jj.ctv122om2w.3
Freire, P. (1970). *Pedagogy of the oppressed*. Herder and Herder.
Harvard Art Museums. (n.d.). *From the Harvard Art Museums' collections: Abu Simbel*. Retrieved July 8, 2020, from https://www.harvardartmuseums.org/art/315230
hooks, b. (1994). *Teaching to transgress*. Routledge.
Housen, A., Yenawine, P., & Brookshire, M. (2018). *Understanding the basics*. VTS: Visual Thinking Strategies.
Knight, W. B. (2006). Using contemporary art to challenge cultural values, beliefs, and assumptions. *Art Education, 59*(4), 39–45. https://doi.org/10.1080/00043125.2006.11651602
Ryzik, M. (2018, February 23). The Afrofuturistic designs of 'Black Panther'. *The New York Times*. Retrieved August 7, 2020, from https://www.nytimes.com/2018/02/23/movies/black-panther-afrofuturism-costumes-ruth-carter.html
Spillane, S. (2015). The failure of whiteness in art education: A personal narrative informed by Critical Race Theory. *Journal of Social Theory in Art Education, 35*, 57–68.
Staff. (2018). *Things you didn't know about Black Panther's kingdom of Wakanda*. Architectural Digest. Retrieved July 22, 2020, from https://www.architecturaldigest.com/story/5-things-you-didnt-know-about
Womack, Y. (2013). *Afrofuturism: The world of black sci-fi and fantasy culture*. Lawrence Hill Books.
Davenport, T. (2013). *Sun Ra: Space is the Place (1974) trailer* [Video]. YouTube. https://www.youtube.com/watch?v=4sOls1u8iwg
Thinking melons? Think Terranova
MELON PRODUCT GUIDE 2020
Terranova SEEDS
March 2020
Terranova Seeds is a specialist vegetable seed company with offices and warehouses located in Australia and New Zealand. Our specialised technical staff and world-class seed production facilities allow us to provide the highest quality seed offering to our customers.
We are committed to supplying high quality seeds with high purity and germination rates; all trialled under local conditions. Our commitment to our quality standards ensures that we provide seeds that perform, and full technical support to you, our customers.
Terranova is a fully owned subsidiary of South Pacific Seeds; however, we trade as a completely independent entity. Our range of melons is also available to our customers across the Pacific Islands and Papua New Guinea.
Reliable seeds. Quality seeds.
That’s what you can count on whenever you think Terranova Seeds.
### Maxima
- Round to slightly oval: 9–10kgs
- Very vigorous plant
- Rind colour – broad, dark green stripes
- Very sweet taste with deep red flesh
- Outstanding flavour
- Good shelf life.
### Talca
- Slightly oval: 8–9kgs
- Plant vigour is very strong
- Rind colour – broad and very dark green stripes
- Deep red flesh colour, which is very firm
- Outstanding sweet flavour
- Very good shelf life.
### Eland
- Seedless variety of exceptional quality
- Round fruit averaging 8–10kg
- Dense heavy melon
- Glossy, dark green skin
- Deep red, sweet, crisp flesh
- Heavy setting ability.
### TWT8196 (Lucille)
- Dark stripes
- Fruit shape is oval 7–8kgs
- High Yielding
- Flesh is deep-red, sweet with high lycopene.
### TWT 9047
- Medium sized fruit 6-8kgs
- Very round uniform shape and size
- More pronounced stripe than Talca
- Good set, good vine
- Crisper, darker red internals.
### La Joya
- Very strong plant with healthy foliage and good fruit coverage
- Very early maturing
- Round fruit with broad and very dark stripes
- Midi size: 5-6kg.
### Skyline
- Personal watermelon
- Good plant strength covering the fruit well
- Fruit shape: Round
- Fruit size: 3–3.5kg
- Uniformity: Excellent
- Rind: 9mm thick with light narrow tiger stripe.
### Belinda
- Plant strength: Good
- Cover: Good
- Fruit shape: Round
- Fruit size: 2–3kg
- Uniformity: Good
- Rind: 9mm thick with dark green stripes on medium green background.
## Watermelons
### Ana
- **Plant characteristic:** Vigorous plant
- **Shape:** Round
- **Rind:** Dark striped
- **Texture and taste:** Good flavour
- **Shelf life:** Good
- **Average weight:** 3–3.5 kg.
### Sky Star
- **Plant strength:** Good
- **Cover:** Good
- **Fruit shape:** Round
- **Fruit size:** 2.5–3 kg
- **Uniformity:** Good
- **Rind:** 9mm thick with light narrow tiger stripe.
## Pollinators
### Yarden
- Diploid watermelon
- Mid green with a darker green stripe
- Uniform fruit 8–12kgs
- Deep red firm flesh
- Good yield
- Oval shape
- Excellent pollinator for triploid watermelons.
### Lion Pollinator
- Strong adaptable plant
- Continuous production of male flowers over the fruiting period
- Fruit is small (3–4kgs), grey/green in colour and not edible
- IR to Powdery mildew.
### Gouldian
- Good size vine with excellent set
- Fully netted harper type melon
- 2.2–2.4kg mid-winter in Northern Australia
- Fruit has a tight cavity with good flesh colour.
### Morgan
- Harper type melon that has a vigorous plant and good yield potential
- The fruit are uniform, firm and have a tight cavity with good orange internal colour
- Oval to round fruit
- Firm flesh and very good flavour with high Brix
- Very good shelf life.
### Restart
- Harper type melon. Orange flesh
- Very firm flesh and excellent Brix
- Highly vigorous plant
- Fruit size around 1.5kg
- Intermediate resistance to Powdery mildew and to Fom 0, 1.
### Zacapa Gold
- Harper type melon which exhibits a very strong bush that provides excellent cover
- High quality fruit are uniform, firm, have a tight cavity and good orange internal colour
- Fruit is oval to round
- Firm flesh, very good flavour and high Brix
- Very good shelf life
- Resistance: F2
- Target: spring harvest.
### TRM8185 (Cannon)
- Early maturing harper type melon
- Uniform fruit size, shape and maturity
- Small seed cavity
- Good netting, excellent flesh colour & flavour.
### TRM8190
- High quality fruit - round shape with a small cavity
- Average 14 brix
- Bright orange flesh colour
- Due to very good shelf life it is very suitable for export markets.
### TRM9359
- Warm season melon with high brix & great flavour
- Fruit is very uniform size and shape
- High quality, small, tight cavity
- Dark orange flesh colour with super sweet flavour
- Disease resistance: Fom 0, 1, 2.
### TRM9360
- Exceptionally high yielding melon with concentrated harvest
- Fruit size is medium, with a uniform even net
- Fruit is very uniform in size and shape
- Small seed cavity that presents well as a cut melon
- Early sugar content development
- Good flavour and high Brix
- Warm season production.
## Melons
### TRM9384
- Good strong plant, excellent cover
- Very good concentrated crown set
- Uniform even light net
- Fruit is round/oval
- Good fruit quality, tight cavity.
- Very good eating quality due to early sugar development.
---
## Melons Specialty
### TMS9030 (Akiles)
- High quality yield in early crop
- Medium sized fruit, with an open and healthy plant all along the crop cycle
- Very early maturing with deep orange colour and good flesh firmness
- Netting covers all the fruit skin, also ribs. When mature, skin turns to yellow, including the ribs
- Excellent disease package: HR: Fom 0,1,2; IR Px: 1,2 / Gc.
### TMS9326 (Goldex)
- Yellow Canary type melon
- Hybrid yellow skinned melon with greenish white flesh
- Adaptable vigorous vine providing good cover
- Fruit is Round/Oval with a smooth golden yellow rind when mature
- The fruit turns to yellow only when internal brix reaches 10%. At full maturity, the fruit reaches 14-16% brix
- Flesh is firm, juicy and crispy
- Resistance; IR: Px 1, 2.
### THD9282 (Dino Melon)
- White skinned honeydew with green fleck
- Very sweet and juicy
- Firm flesh melon, high brix, and rounded shape
- New Category of honeydew melon showing good growth in world markets.
### TMS8200 (Justin)
- Yellow rind and orange flesh LSL Cantaloupe
- Round fruit averages 1.5kg
- Rind changes to yellow when mature which is a picking indicator
- Fruit has a medium netting
- Resistance: Fom 0, 1, 2; Px IR; MNSV.
### Buena Vista
- Orange-flesh hybrid Honeydew melon with creamy white skin
- Good Brix when allowed to mature
- Tight cavity
- Good field holding
- Intermediate resistance to powdery mildew.
## Honeydew
### River Dew
- Medium to large fruit size suited to cool production
- Round white skinned honey dew
- Average 2.5-3kg
- Light green crisp flesh and strong vine
- Resistant to Fom 0,2, Px IR.
### Fresh Dew
- Warm season honeydew
- Round white skin with light green crisp flesh
- Average 2-3kg.
- Strong vine
- Resistant to Fom 0,2, Px IR.
### Sweet Peridot
- Main season green flesh honeydew melon
- Consistent round shape, smooth milky white skin, with a small seed cavity
- Skin stays smooth, with hardly any sugar cracking even at full maturity
- Resistant to Fusarium 0 and 2, Intermediate resistance to powdery mildew.
### THD9189
- Warm season Honeydew
- Very white smooth clean skinned fruit with firm green flesh
- Shape is round, with a very small cavity
- Excellent flavour and firmness, Brix 14-16%
- Good yield potential with a consistent size and quality
- Resistant to Fusarium 0 and 2, Intermediate resistance to powdery mildew.
### Milky Way
- White skinned honeydew with green flesh
- Round in shape, averaging 16x16cm
- Small cavity
- Good yield potential
- Thick very crisp flesh with ESL & sweet flavour.
### Glossary of Terms
**DISEASE RESISTANCE LEVEL: HR = High Resistance, IR = Intermediate Resistance**
| CODE | PATHOGEN (SCIENTIFIC NAME) | COMMON NAME |
|------|----------------------------|-------------|
| Fom | Fusarium oxysporum f. sp. melonis | Fusarium wilt |
| Px | Podosphaera xanthii | Powdery mildew |
| Gc | Golovinomyces cichoracearum | Powdery mildew |
| MNSV | Melon necrotic spot virus | Melon necrotic spot |
Sales Orders: Phone: (02) 9616 1288 Fax: (02) 9616 1299. For production guides and cultural notes visit www.terranovaseeds.com.au
Nth Queensland/NT
Shaun Todd
0437 890 920
SE Queensland
Michael Sippel
0418 479 062
New South Wales
Charlie Vella
0419 286 370
Coastal SE QLD/Nthn NSW/Wide Bay Burnett Regions
Steven Williams
0407 256 521
South Australia
Greg Bragg
0419 635 548
Tasmania
Andy Doran
0497 999 987
Western Australia
Danie Oosthuizen
0417 930 233
Victoria
Nick Mitchell
0418 532 650
Terranova SEEDS
An Empirical Study of how Socio-Spatial Formations are influenced by Interior Elements and Displays in an Office Context
BOKYUNG LEE, Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea
MICHAEL LEE, Autodesk Research, Canada
PAN ZHANG, Autodesk Research, Canada
ALEXANDER TESSIER, Autodesk Research, Canada
AZAM KHAN, Autodesk Research, Canada
The design of a workplace can have a profound impact on the effectiveness of the workforce utilizing the space. When considering dynamic social activities in the flow of work, the constraints of the static elements of the interior reveal the adaptive behaviour of the occupants in trying to accommodate these constraints while performing their daily tasks. To better understand how workplace design shapes social interactions, we ran an empirical study in an office context over a two-week period. We collected video from 24 cameras in a dozen space configurations, totaling 1,920 hours of recorded activities. We utilized computer vision techniques to produce skeletonized representations of the occupants, which assisted in the annotation and data analysis process. We present our findings of socio-spatial formation patterns and the effects of furniture and interior elements on the observed behaviour of collaborators, for both computer-supported work and unmediated social interaction. Combining the observations with interviews of the occupants’ reflections, we discuss the dynamics of socio-spatial formations and how this knowledge can support social interactions in the domain of space design systems and interactive interiors.
CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing.
Additional Key Words and Phrases: socio-spatial; office space; human-building Interaction; space occupancy
ACM Reference Format:
Bokyung Lee, Michael Lee, Pan Zhang, Alexander Tessier, and Azam Khan. 2019. An Empirical Study of how Socio-Spatial Formations are influenced by Interior Elements and Displays in an Office Context. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 58 (November 2019), 26 pages. https://doi.org/10.1145/3359160
1 INTRODUCTION
The built environment inherently shapes the social interactions we have with each other every day [33, 45, 51]. From schools to hospitals to cafés, certain types of social behaviours are encouraged while others are discouraged. Social norms dictate some of these interactions but others seem to emerge from interior elements in the space (e.g. furniture configurations, wall positions, computing devices) [22, 51]. For example, people gravitate towards seats which reduce the visual exposure of their computer screens [2]. When getting an opinion from a colleague on a document, sitting shoulder-to-shoulder indicates a deeper level of collaboration and a longer time commitment than speaking face-to-face [33].
In office contexts, it is becoming more important to understand the influence of spatial aspects on social interactions and to have the office design process explicitly support the desired interactions. The quality of social interactions has a profound impact on the effectiveness of the workforce [3, 12, 23]. Also, office layouts are becoming less standardized and are being more creatively designed, resulting in greater variation and diversity of designs [16]. An architect or designer may intend to support specific work styles and various collaboration spaces, but it remains difficult to say whether their design intent has the desired outcome.
Several theories rooted in social science describe spatial arrangements of people that can be used to analyze how people occupy space during social interactions. In Proxemics [22], four discrete interpersonal zones were introduced where people are most comfortable during different types of social interactions. The F-formation framework (or facing formation [27]) illustrates diverse formation shapes with social spacing and body orientations. However, these theories focused on standing situations, especially face-to-face interactions, and are challenging to apply to office contexts. In recent offices, the ubiquitous use of mobile computing and wireless communication increases the ecological flexibility [35] in the office and allows social interactions to be more fluid and dynamic.
Therefore, the objective of our study is to understand socio-spatial formations in the office environment in relation to furniture configurations, interior elements, and the type of interactions (i.e. computer-supported and unmediated), as outlined in Figure 1. Socio-spatial formations are defined as the arrangements of people during social interactions including micro positioning of bodies, orientations, distances, and formation shapes (e.g. linear shape, L-shape). Using social trails as a metaphor, in which ad hoc paths form where people repeatedly prefer a route other than the placed sidewalks, we aimed to explore how the findings on socio-spatial formations could inform future office interiors, similar to learning-by-doing practices [32] and evidence-based design approaches [45].
To achieve these goals, we conducted a two-week ethnographic study using 24 cameras (Figure 1). Our camera setup covers a diverse set of space configurations including common and personal spaces, diverse desk sizes, desk arrangements, and interior elements. To preserve the privacy of occupants during the analysis, we converted the video into skeletonized representations using computer vision techniques, and developed a custom software tool (Skeletonographer) to play back and annotate the footage.
We first analyzed the skeleton data using thematic analysis [21] and found all the formation shapes that occurred in the office. Inspired by Lawson [33] and Kendon [27], we then defined a socio-spatial framework that includes ten formation shapes and three desk-relative arrangements.
Fig. 1. Objectives and overview of our study.
Using that as our analytical lens, we re-analyzed the data by annotating with our framework and found socio-spatial patterns in relation to the interior setup and interaction type. Occupants were interviewed and asked to reflect on their behaviours, and we mapped their answers onto the observed formation patterns to understand the space perceptions that cause certain patterns.
Our study provides three contributions to the human-computer interaction (HCI) and computer-supported cooperative work and social computing (CSCW) community: 1) We revealed socio-spatial formation patterns in relation to *interior elements* and *types of social interactions* with corresponding occupants’ space perceptions. In addition to static formation patterns, we also reported patterns in formation transitions. 2) We proposed a dynamic concept of socio-spatial formations and discussed how this knowledge can support social interactions in the domain of space design systems and interactive interiors. 3) We proposed a computer-vision based ethnographic method which is applicable for other field studies using anonymized skeletons.
2 RELATED WORK
Our research bridges existing work on *socio-spatial theories* and *space-relevant human behaviours*. We reviewed the related work in the following four topics: 1) theories behind socio-spatial formations, 2) the influence of physical office spaces on social behaviours, 3) techniques for shaping physical office spaces that facilitate social interaction, and 4) pervasive sensing to understand occupant behaviours in spaces.
### 2.1 Theories Behind Socio-Spatial Formations
There have been several efforts in HCI and CSCW to understand spatial patterns during social interactions in the physical space with the goal of informing interactions for ubiquitous environments. One schematic framework is *Proxemics* which was coined by Edward Hall [22] to illustrate spatial relationships (distances) in everyday social interactions. He proposed four discrete interpersonal distances found in American culture: the *intimate* zone (0–0.45 m), the *personal* zone (0.45–1.22 m), the *social* zone (1.22–3.66 m), and the *public* zone (>3.66 m). This contributed to *proxemic interactions* [9, 19, 61] that use spatial distances as an input; however, the system is somewhat abstract, only considering point-to-point distances and standing interactions. Inspired by Sommer [52] who looked into spatial arrangement as a function of group tasks, social relationships, and individual’s personalities, we build upon proxemics theory by considering the diverse spatial constraints imposed by the physical interiors, sitting and standing states of occupants, and collaborative situations where people do not necessarily face each other.
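Hall's four zones amount to a simple piecewise classification of point-to-point interpersonal distance. The sketch below is purely illustrative (it is not part of this paper's tooling); the boundary values are the ones quoted above, and the function name is our own:

```python
def proxemic_zone(distance_m: float) -> str:
    """Map a point-to-point interpersonal distance (in metres) to one
    of Hall's four proxemic zones, using the boundaries cited in the
    text: intimate (0-0.45), personal (0.45-1.22), social (1.22-3.66),
    public (>3.66)."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m <= 0.45:
        return "intimate"
    if distance_m <= 1.22:
        return "personal"
    if distance_m <= 3.66:
        return "social"
    return "public"

print(proxemic_zone(0.3))  # intimate
print(proxemic_zone(2.0))  # social
```

As the text notes, such a point-to-point measure ignores body orientation and seated postures, which is precisely the limitation the present study works around.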
Similarly, *F-formations* were proposed by Adam Kendon [27] to illustrate spatial formations with social spacing and orientation. The framework decomposes a socio-spatial arrangement into an *o-space* and a *p-space*. *o-space* is the convex, empty space surrounded by the people involved in a social interaction, whereas *p-space* is the space that contains the bodies of the people involved in the interaction. In HCI, F-formations contributed to the domain of human-robot interactions [25, 30], multi-device interactions (e.g. kiosks and tabletop displays) [36, 37, 55, 57], and technologies embedded in physical environments (e.g. in the kitchen, cubicle walls or information centres) [15, 37, 43]. We applied this framework to Human Building Interactions (HBI) and utilized *F-formations* as our analytical lens to evaluate how people occupy spaces in a given office setup during social interactions.
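One common computational reading of Kendon's *o-space*, used in vision work on F-formation detection rather than in this paper, is to have each person "vote" for an o-space centre a fixed stride ahead of their body orientation and to group people whose votes fall close together. The toy sketch below illustrates that idea; the function name, stride, and radius are all hypothetical values of our own choosing:

```python
import math

def ospace_groups(people, stride=0.8, radius=0.6):
    """people: list of (x, y, theta) with theta the facing direction in
    radians. Each person votes for an o-space centre `stride` metres
    ahead of them; people whose votes lie within `radius` of a group's
    mean vote are greedily merged into one F-formation candidate."""
    votes = [(x + stride * math.cos(t), y + stride * math.sin(t))
             for x, y, t in people]
    groups = []  # each group is a list of person indices
    for i, (vx, vy) in enumerate(votes):
        for g in groups:
            gx = sum(votes[j][0] for j in g) / len(g)
            gy = sum(votes[j][1] for j in g) / len(g)
            if math.hypot(vx - gx, vy - gy) <= radius:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two people facing each other ~1.6 m apart share an o-space;
# a third person far away, facing elsewhere, forms no group with them.
people = [(0.0, 0.0, 0.0), (1.6, 0.0, math.pi), (5.0, 5.0, 0.0)]
print(ospace_groups(people))
```

This kind of geometric heuristic only covers standing, face-to-face arrangements, which is exactly why the present study extends F-formation analysis to seated, desk-relative office configurations.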
Beyond the two major theories mentioned, some researchers have tried to analyze people’s micro-behaviours (e.g. formation transitions, body gestures, poses and actions) in the physical environment. Shapiro et al. proposed *interaction geography* [50] to map visitors’ movements and social activities on a museum floor plan to understand socio-technical practices in museums. Krogh et al. [29] introduced three concepts of socio-spatial literacy: *proxemic malleability*, *proxemic threshold*, and *proxemic gravity*, to describe space-dependent behaviours such as “pointing at a display” or “rolling their chairs to re-orient themselves”. Motivated by these works, we explored how interior elements including various sized desks, types of desks, walls, partitions and desk arrangements can influence the occupant’s formations and poses. We denote these interactions as socio-spatial formations.
### 2.2 Understanding the Influence of Physical Office Spaces on Social Interactions
For decades, researchers from multiple disciplines such as architecture, sociology, psychology, and environmental behaviour have attempted to understand how the design, layout and interior of physical spaces affect human behaviour [4, 22, 49, 51]. Sommer [51], the pioneer of social design in the field of environment behavior psychology, argued that minor adjustments in furniture arrangements could induce alternative social behaviours. For example, chairs arranged into small groups around tables could encourage more active interactions. Specifically in office contexts, the mainstream approach in the late nineties was to investigate the influence of physical environments on communications [3, 11, 12] and productivity [23]. For example, levels of communication and creativity increased after occupants moved from an enclosed layout to an open layout [3, 12]. Still, the scientific rigor of these findings has been challenged due to scattered empirical evidence [18, 47].
Recently, studies inspired from the tradition of space syntax [24] have looked deeper into the relationship between office spatial layouts and social interactions [46] in architecture and HCI. Researchers found that social interactions were predominant in printing rooms and kitchens [13, 17] as well as in workstation areas [44, 46]. However, most of them have focused on statistical analysis using only the frequency of interactions as the independent variable.
In this paper, we extend the works above by looking at social interactions from a micro-perspective, looking at how physical interior elements influence socio-spatial formations, arrangements, and poses, instead of measuring the frequency of interactions. We argue that this approach is significant for informing office space design where furniture and architectural layouts act as behavioural constraints. Only a few works in HCI and social behavior study the relationship between social formations and physical environments, including a tourist information centre [37], table shapes [56], display angles in a museum [26], and seated positions near common desks [33, 51]. We build upon these approaches in office environments with more varied devices, interior elements, and different types of desk configurations.
### 2.3 Towards Physical Office Spaces that Facilitate Social Interaction
Several attempts rooted in HCI have been made to plan physical office spaces that enhance work experiences in terms of space analysis [2, 40] and interactive systems [8, 15, 53, 63].
2.3.1 Space Analysis. To analyze the physical characteristics of a space, space syntax [24] has been proposed to quantitatively analyze spatial formations and their impacts on human experiences in architectural studies. A popular example is the visibility graph [58], which visualizes the mutually visible locations in a spatial layout. Extending that concept, Nagy et al. [40] evaluated social congestion using space topology. However, there is room for improvement to simulate more realistic social interactions in a given interior setup.
Therefore, we extend the work of Backhouse et al [7] who used an ethnographic approach to understand natural social interactions in the office and observed social formation patterns in-the-wild. Building upon the approach of data-driven design [45, 48], our encompassing goal is to argue for the inclusion of occupants’ space usage behaviours in space design and analysis.
2.3.2 Interactive Interiors for Social Interaction. Several interactive techniques for robotic furniture have been proposed to enhance work experience by overcoming spatial constraints in HCI. The
majority of them focused on single-person usage scenarios. Bailly et al. [8] introduced an actuated monitor, a mouse and a keyboard that adapt to the user’s behaviour, while Wu et al. [63] proposed the concept of a responsive monitor that helps users maintain ergonomic poses. A similar approach was applied to desks or tables; for example, a desk that shape-shifts to support a user’s work preferences [60], and automatic height-changing desks [34]. Moreover, robotic furniture that moves around the space based on human behaviour has also been demonstrated [42, 54].
Few works convey social interaction scenarios for interactive interior elements. Takeuchi et al. [53] introduced the concept of a weightless wall that blocks sound using headphones and Danninger et al. [15] proposed the cubicle partition that changes its transparency, both works based on the body orientations of the parties involved. Shape-changing desks were recently introduced to manage the notions of interaction proxemics in medical consultations [56] and informal meetings [20]. This work is highly motivational for our research, and we contribute by investigating how socio-spatial behaviours including poses and formations should inform interior elements to enhance collaborative experiences.
### 2.4 Pervasive Sensing to Understand Occupant Behaviours in Spaces
To understand how environmental resources impact people’s experiences, the notion of pervasive sensing has been introduced, a technique that continuously observes individuals and their interactions. Several researchers leverage a combination of pervasive sensing systems to analyze space-use behaviours using ambient and wearable devices such as blob sensors with Bluetooth wristbands [5, 59], RFID badges [13], tracking tags and ZigBee devices [62] or infrared sensors for indoor localization [2]. These techniques are useful for collecting data anonymously; however, they do not convey details of occupant behaviours. Therefore, we extended the traditional ethnographic study with anonymous pervasive sensing by utilizing computer vision techniques to produce skeletonized representations of the occupants.
3 METHOD
To understand how interior elements and desk configurations influence social formations in the office, we conducted observations of occupant poses and spatial arrangements in physical spaces. Our experimental setup consisted of 24 cameras installed at pre-selected locations as seen in Figure 3. We built upon traditional *digital ethnography* [31] by using skeletons of individuals rather than raw video data. The skeletons were generated with computer vision through the OpenPose library [14], representing each person as connected vertices corresponding to features on the human body as seen in Figure 5. We explored this approach in an effort to improve privacy, especially when dealing with the public. To make observations and annotate the skeletons, we implemented a custom video tool (Figure 6). As a result, our study demonstrates that ethnographic studies using skeleton data have potential for future in-situ social studies.
### 3.1 The Office
Our study area (Figure 2) was a section of an office occupied by the research division of a global software company located in North America with a relatively flat working hierarchy. The study area housed 60 employees from multicultural backgrounds and countries including Canada, the US, France, and Korea. The organization was structured into three research groups, and collaborations within groups were usually for work updates, brainstorming, discussing issues, and information sharing. However, informal social interactions also occurred frequently between groups.
The study area was 102 ft x 70 ft and composed of a main floor and mezzanine with various areas, including: private workspaces (cabins), open office areas, meeting rooms, and corridors as shown in Figure 3. All the employees had height-adjustable desks with articulating monitor arms for a partially customizable work environment. The devices people used varied from multiple desktop workstations to a single laptop.
### 3.2 Space Configurations
From our informal observations prior to the official study period, we observed that social interactions in the office often occurred near desks, accompanied by papers, displays, and/or laptops. The *common desk* areas (i.e. meeting rooms) were utilized for long-term collaboration, while *personal desk* areas were more commonly turned into temporary social spaces by visits from co-workers.
Our observations covered both common desk areas and personal desk areas, and we assumed that different space-usage interactions could exist between them. We also covered various *interior elements*, including partitions, walls, configurations, and various desk sizes, as shown in Figure 4. Common desks were mostly located in meeting-room spaces surrounded by four walls, with a fixed large display at one end of the desk. We selected three common desk areas with different desk sizes (Figure 4-top) for our study. All personal desks were the same size; therefore, configuration was a more significant aspect affecting behaviour. We selected six different personal desk configurations in the open office (Figure 4-middle), based on whether there were barriers next to or behind the desks. Some personal desks were surrounded by walls and doors, supporting a more private working space, while others had only a partition (Figure 4-bottom).
Our goal for this study was to obtain a general understanding of how different types of spatial features affect *space-use behaviours* in terms of social interaction. Rather than preparing controlled spatial features for lab studies, we chose *in-the-wild* field studies with various setups.
### 3.3 Data Collection & Technical Setup
#### 3.3.1 Camera Setup
Our setup consisted of 15 Raspberry Pi 3 A+ and 9 Raspberry Pi B+ single-board computers, each equipped with a Raspberry Pi Camera Module (Version 2). The cameras were chosen for their low cost, built-in Wi-Fi, and fully programmable software, which allowed us to implement algorithms such as motion compression. Each unit was powered by a standard USB power supply. Cameras were mounted throughout the office with adjustable mounting hardware.
#### 3.3.2 Camera Location Planning & Installation
While planning the camera positions to cover the 17 desk areas, several issues needed to be considered. First, full-body coverage, an important requirement of our computer vision system, was difficult to obtain because furniture occluded views within the office. To minimize occlusion, we typically employed at least two cameras to capture the same location from different angles. Second, to minimize the amount of video data and reduce subsequent processing and analysis time, we wanted to find camera positions that could cover multiple target areas. Third, installation elements (e.g. mounts and power supplies) had to be taken into account during planning.
To do this, we used Autodesk Revit to simulate the camera coverage within a real-scale 3D architectural model. We prepared custom camera families in Revit that matched the specifications of our Raspberry Pi camera modules (i.e. focal length and field of view) so that we could accurately plan the camera coverage and power supply requirements. In total, 24 cameras were used in this study, with the final camera positions shown as grey circles in Figure 5. We then installed the cameras in the physical space based on our Revit model.
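The per-camera coverage test in such a simulation reduces, in two dimensions, to checking whether a target point falls inside a camera's horizontal field of view. The sketch below is our own simplification for illustration (it ignores occlusion and mounting height); the 62.2° default is the nominal horizontal field of view of the Raspberry Pi Camera Module v2, not a measured value.

```python
import math

def in_fov(cam_pos, cam_yaw, target, fov_deg=62.2, max_range=10.0):
    """Return True if `target` lies within the camera's horizontal FoV.

    cam_pos, target: (x, y) in metres; cam_yaw: viewing direction in
    radians. Occlusion by furniture is deliberately ignored here.
    """
    dx, dy = target[0] - cam_pos[0], target[1] - cam_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # smallest angle between the camera axis and the bearing to the target
    off = (math.atan2(dy, dx) - cam_yaw) % (2 * math.pi)
    off = min(off, 2 * math.pi - off)
    return off <= math.radians(fov_deg) / 2
```

A planner can then score candidate positions by counting how many desk-area sample points each camera covers, which mirrors the goal above of covering multiple target areas with one camera.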
3.3.3 Recording & Motion Compression. We captured video of social behaviours at 17 desk locations for two weeks (10 working days) from 9:00 am to 5:00 pm local time. All video footage was saved in a secured Amazon S3 bucket and used as input to generate anonymized skeletons. To reduce storage and post-processing requirements, we developed a simple form of motion compression that only stores video for motion-based events. This compression runs in-memory on the Raspberry Pi using a circular buffer. Once every second, frames are compared for colour-based changes within the scene. If a significant change is detected, the contents of the circular buffer are written out to a file, and a subsequent video containing the motion footage is saved. This is repeated until no more significant motion is detected.
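The buffering scheme can be sketched as follows, with NumPy arrays standing in for camera frames and an in-memory list standing in for files written to disk. The buffer size and change threshold are illustrative, and for brevity this sketch compares every fed frame rather than sampling once per second:

```python
from collections import deque

import numpy as np

class MotionCompressor:
    """Minimal sketch of in-memory motion compression with a circular buffer."""

    def __init__(self, buffer_size=30, threshold=10.0):
        self.buffer = deque(maxlen=buffer_size)  # pre-motion context frames
        self.threshold = threshold               # mean colour-change trigger
        self.last_sample = None
        self.saved_segments = []                 # stands in for files on disk

    def significant_change(self, frame):
        """Compare the frame to the previous sample by mean colour difference."""
        if self.last_sample is None:
            self.last_sample = frame
            return False
        diff = np.abs(frame.astype(np.int16) - self.last_sample.astype(np.int16)).mean()
        self.last_sample = frame
        return diff > self.threshold

    def feed(self, frame):
        """Buffer the frame; flush the buffer as a segment on significant change."""
        self.buffer.append(frame)
        if self.significant_change(frame):
            self.saved_segments.append(list(self.buffer))
            self.buffer.clear()
```

When a change fires, the flushed segment contains the buffered pre-motion context plus the triggering frame, so the saved clip starts slightly before the motion event.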
3.3.4 Masking & Skeleton Generation. To keep costs fixed and reuse existing hardware, we used three local workstations with NVIDIA Quadro P6000 graphics cards. One of the workstations was also used as a local file server for the raw video and processed outputs. We decomposed the processing of files into tasks and developed a job management script in Python to distribute them as jobs. Synchronization was performed using Amazon’s Simple Queue Service (SQS). During processing, each workstation retrieves a video from the file server, performs masking and OpenPose [14] processing, then stores the results back onto the file server.
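The distribution pattern can be sketched as below, with Python's standard `queue.Queue` standing in for Amazon SQS and a stub in place of the masking and OpenPose steps; the interfaces of the actual script are not documented here, so all names are illustrative.

```python
import queue
import threading

def process_video(path):
    """Stub for the real per-file work: masking, OpenPose processing,
    then storing the keypoint JSON back on the file server."""
    return path + ".json"

def run_jobs(paths, n_workers=3):
    """Distribute video-processing jobs across worker threads."""
    jobs = queue.Queue()
    for p in paths:
        jobs.put(p)
    results = []  # list.append is thread-safe under CPython

    def worker():
        while True:
            try:
                path = jobs.get_nowait()
            except queue.Empty:
                return  # queue drained: worker exits
            results.append(process_video(path))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With SQS instead of a local queue, the same pull-based loop lets any number of workstations drain a shared backlog without central scheduling.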
Masking is performed to remove areas where additional privacy is required: monitors, collateral occupants outside of the area of focus, or people who opted out of the study. Masking is achieved by applying an overlay mask image over the raw video footage using the FFmpeg application, as shown in Figure 5-a. Afterwards, the OpenPose library recognizes the occupants’ embodied poses from the masked videos and estimates skeletons based on 25 key points (Figure 5-b, c). A JSON file corresponding to each frame in the video is generated, containing key points for each of the occupants in the frame.
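Downstream tools can parse each per-frame file into per-person keypoint triples. The sketch below assumes OpenPose's standard BODY_25 JSON layout, in which a `people` array holds entries with a flat `pose_keypoints_2d` list of x, y, confidence values; the sample data here uses fewer than 25 points for brevity.

```python
import json

def load_skeletons(json_text):
    """Parse one OpenPose frame file into per-person lists of
    (x, y, confidence) keypoint triples."""
    frame = json.loads(json_text)
    people = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]  # flat [x0, y0, c0, x1, y1, c1, ...]
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```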
3.4 Skeletonographer: Skeleton-based Digital Ethnography Tool
To analyze the skeleton data, we implemented a custom playback tool, Skeletonographer, for three reasons (Figure 6). First, we needed a tool that could play back the frame-by-frame skeletonized data, providing affordances similar to the standard video playback tools often used in traditional digital ethnographic studies. Second, to understand socio-spatial patterns in different space configurations, we needed a tool to annotate and classify occupants’ skeletons space by space, using custom labels derived from the initial analysis to uncover further patterns. Third, we needed to efficiently manage the large amount of data collected: 1,920 hours from 24 locations. We used a custom Node.js server to serve the web-based playback tool as well as a concatenated version of the selected skeleton data set. The tool was written in JavaScript and HTML, with the skeleton data drawn onto a canvas element on the page.
The Skeletonographer tool is composed of five parts: a) source panel, b) video panel, c) video control panel, d) labeling panel, and e) timeline panel, as shown in Figure 6. The **source panel** enables users to select data by camera number and date through a drop-down menu bar. The **video panel** displays skeletons from the selected source over a still photo, similar to traditional video players, using the absolute timeline. Users can move through time by scrubbing the *time control bar* (Figure 6, D-2) on the **timeline panel**. The absolute time of a specific moment is displayed next to the time control bar.
Additionally, the **labeling panel** was implemented for further analysis. We added labels for activities, the number of people in the interaction, and socio-spatial patterns as icons (Figure 6, B-1). To annotate the skeleton data, users first select the specific area for analysis using the pre-defined buttons (Figure 6, D-1); the video then highlights the selected area in yellow. Users can annotate behaviours by clicking the labels, and can create new labels if needed using a keyboard shortcut. The annotated results are shown on the **timeline panel** to keep track of annotations. The results are synchronized between cameras to support cases where the same area is captured from multiple cameras. Once users have finished annotating a data set, they can export it as a JSON file with time-stamped data for each label. This file can be used for further analysis, processing, and storage.
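An export of this kind can be built by grouping time-stamped annotation intervals per label. The actual Skeletonographer schema is not documented here, so the layout below is an assumption for illustration only:

```python
import json

def export_annotations(annotations):
    """Group (start_s, end_s, label) annotation tuples into a JSON string
    keyed by label. NOTE: hypothetical schema, for illustration only."""
    grouped = {}
    for start, end, label in annotations:
        grouped.setdefault(label, []).append({"start": start, "end": end})
    return json.dumps(grouped, indent=2)
```

Keeping the intervals time-stamped rather than frame-indexed is what makes duration-based frequency calculations straightforward later in the analysis.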
The video properties can be updated from the **video control panel**. We implemented a toggle button that can reveal the original video for cases where skeletons were incorrectly generated for any of the available views. In addition, users can play back the skeletons as in a video player, using buttons or keyboard shortcuts.
3.5 Data Analysis
The collected data was analyzed using thematic analysis [21] to obtain an overview of socio-spatial formations in the office. As a first step, we used Skeletonographer as a video playback tool and watched seven days’ worth of data, similar to traditional digital ethnographic studies. We transcribed the skeletonized videos by sketching occupants’ spatial arrangements using notations similar to those of Krogh et al. [29] and Paay et al. [43]. We sketched the head and body orientation, estimated gaze direction, and poses, and added comments to describe the situation (Figure 7). When there were any changes in formation, they were captured using illustrations of path movements or drawing sequences.
As a second step, we annotated all the collected skeletonized social interaction data using Skeletonographer. The labels were derived from the previous step, including the desk-relevant body arrangements (e.g. “across”, “adjacent”), formation shapes (e.g. “face-to-face”, “T-shape”), the number of occupants, types of social activities (e.g. “show & tell”, “discuss”, “conference call”), and sitting or standing status. Custom labels were created when we discovered new patterns, for example, poses such as “leaning on desk” or “pointing”.
We then revisited all our annotated skeleton data and transcriptions to analyze socio-spatial formation patterns. Annotated skeleton information was useful in filtering and finding recurring patterns of formations and activities (e.g. two-people, unmediated discussions) in different space configurations. Transcribed images were useful when checking formation transitions throughout the interactions.
Finally, we conducted one-on-one interviews with twelve employees to understand the perceptions behind the observed socio-spatial formation patterns. Because the collected data consisted of anonymous skeletons, it was difficult to ask about the formations each participant had exhibited. Instead, we prepared sample sets of skeletonized videos for each pattern. We showed the videos, described the pattern, and asked for their opinions, perceptions, and contextual details for each formation pattern. Some participants recognized themselves in the skeletonized representations, reflected on their experience, and described the situation (e.g. the type of social interaction it was, what encouraged them to exhibit that specific formation, how environmental aspects affected their social formation). When they did not recognize themselves, they tried to recall previous experiences and shared their thoughts.
3.6 Research Ethics for Installing Cameras in Public Space
As a company, we consulted guidelines and rules for overt video surveillance in the private sector as outlined by the Government of Canada [41]. In addition to the guidelines, four independent groups were consulted. First, corporate legal counsel performed an overview of legislation and
best practices, as well as employee rights and safety, including General Data Protection Regulation (GDPR) concerns. In addition, the *internal security team*, which oversees all systems containing private data, performed an investigation on behalf of the *facilities management team*, who also conducted their own review. Finally, the *management of the employees* in the affected areas were consulted and interviewed; their comments and concerns were addressed and accounted for in the final design of the study, including the collection systems, notices, and information presented to employees.
An internal informational *website* was created to provide a basic overview of our project and let occupants access information about the camera setup. The site also contained information on our project background, goals, camera installation plans, data storage methods, and the precautions employed to protect privacy. Access to the website was restricted to users located within the company firewall. Then, *two presentations* were made: one to the managers and team leads of the space, and a subsequent meeting for all the employees in the space. The presentations focused on explaining the intentions, goals, and details of the data collection. We shared which areas would be covered by cameras and highlighted the use of skeletonized human figures unless there was ambiguity. Following the presentations, we held a 40-minute open discussion with occupants to address concerns, discuss possible solutions, and request permission for the study. Then, we created *surveillance notices* and posted them in each of the areas covered by the cameras. The signage provided information about the recording period, the actual camera position, what each camera would see (Figure 5-a), what the researcher would see (Figure 5-d), and a link to the website. This was done not only for the occupants, but also for visitors entering the space.
### 4 SOCIO-SPATIAL FORMATION FRAMEWORK FOR OFFICE SPACES
We build upon Adam Kendon’s *F-formation* framework [27] as our conceptual lens for analyzing how social formations differ across diverse office configurations. Previous work using F-formations [37, 43, 57] is limited in three ways. First, it focused on standing social interactions, whereas office collaboration frequently includes combinations of sitting and standing, which can evoke different aspects of social proximity and formation. Second, social interactions in the office are often a combination of dynamic activities, such as *discussing*, *presenting*, and *co-creating*; during these activities, occupants are not necessarily looking at each other. Third, the majority of social interactions in the office occur in the proximity of spatial elements that influence their social arrangements.
To analyze office social formation patterns, we define two new classifications: *formation shapes* and *desk-relative arrangements*. *Formation shapes* illustrate the geometric shapes of formations. Additionally, inspired by the work of Lawson [33] and Sommer [51], who looked into chair positions occupied near a desk, we considered the *desk-relative arrangement* as an additional layer of the framework.
#### 4.1 Formation Shapes
Our analysis generated 133 video transcriptions for common desks and 282 for personal desks. The analyzed shapes of the social geometry are based on people’s relative positions and body orientations. We observed five of Kendon’s known formations (i.e. *Vis-à-Vis*, *Side-by-side*, *L-shape*, *Semi-circular*, *Circular*) and two of Paay’s additional formations, which were observed in social cooking scenarios (i.e. *V-shape* and *Reversed L-shape*). Additionally, we found 104 video transcriptions (Figure 7) that did not fit these formation shapes, and identified three new formations not previously illustrated in other studies. We named these new formations *Parallel*, *T-shape*, and *Z-shape*.
Although Kendon’s basic formations [27] were derived from standing social interactions, they were also observed in our studies of mixed sitting and standing situations. The *Vis-à-Vis* (*face-to-face*) formation occurs when people are opposite and facing each other, sharing a transactional space between them. This formation was frequently observed when people were sitting across the desk or standing behind the chair clearance space. The *Side-by-side* formation is defined as standing next to each other, abreast. This formation was shown while working on concurrent but independent tasks or when demonstrating work. The *L-shape* formation represents people standing orthogonally to each other and facing a shared transactional space in front of them. This was mostly shown while having a brief chat or performing separate tasks while collaborating. When there were more than two people, *Semi-circular* formations were dominant when engaged with a shared visual focus such as a TV or computer. Similarly, *Circular* formations were dominant for long-term discussions, although the shape of the circle depended on the physical environment.
We also observed two additional formations that were identified in kitchen collaborations [43]: the *V-shape* and the *Reversed L-shape*. The *V-shape* formation has people facing forward, similar to the side-by-side formation, but with their bodies slightly tilted to face each other. In the kitchen, this shape appeared when people were conversing while actively engaged in individual tasks; in the office, however, this formation was more frequent when people were engaged in discussion near the desk they were sitting at. The *Reversed L-shape* formation is denoted when bodies are in an L-shape configuration but facing away from each other. Unlike the Reversed L-shape in the kitchen, which was formed when cooks were working at different cooking benches, in meeting rooms people constructed this shape while giving presentations on the public display.
Beyond these shapes, we found three new spatial arrangements that are applicable to social interactions in the office: the *Z-shape*, the *Parallel* formation, and the *T-shape*. The *Z-shape* is formed by people facing each other but not along the same line. This differs from how Paay [43] defined the *Z-shape*: side by side but facing opposite directions. In the office environment, people often formed a *Z-shape* instead of a face-to-face shape even when there was enough space. A *Parallel* formation is where people are not in the same line but orient their bodies in the same direction. *Parallel* formations seldom occur in ordinary social contexts, but were frequently seen when a single person shared their work on a computer while sitting, with the other person listening while standing. The *T-shape* formation occurs when people’s bodies are in the same line, with only one person’s body oriented towards the other’s. This arrangement is also rare, but was frequently observed during discussions near the desk area.
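To make the geometric definitions above concrete, a toy classifier for a few of these shapes can be written from 2D positions and body-facing angles. The angular thresholds and the rules themselves are our own simplification for illustration, not the coding scheme used in the study:

```python
import math

def ang_diff(a, b):
    """Smallest absolute difference between two angles, in [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def classify_pair(p1, a1, p2, a2, tol=math.pi / 6):
    """Classify a two-person formation from (x, y) positions and
    body-facing angles in radians. Illustrative thresholds only."""
    rel = ang_diff(a1, a2)                              # difference in facing
    bearing = math.atan2(p2[1] - p1[1], p2[0] - p1[0])  # person 1 -> person 2
    if rel > math.pi - tol and ang_diff(a1, bearing) < tol:
        return "face-to-face"   # opposed facings, oriented towards each other
    if rel < tol:
        # same facing: abreast -> side-by-side; offset along it -> parallel
        return "side-by-side" if ang_diff(a1, bearing) > math.pi / 2 - tol else "parallel"
    if abs(rel - math.pi / 2) < tol:
        return "L-shape"        # orthogonal facings
    return "other"
```

A fuller version would also use inter-person distance and a shared transactional segment, which this sketch omits.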
4.2 Desk-Relative Arrangements
In addition to formation shapes, we added another layer to our framework: the *desk-relative arrangement*. We observed that the majority of social interactions occur near the desk, and how people locate themselves around the desk in each configuration leads to different space-usage patterns and social behaviours. Therefore, we defined three arrangement types based on the occupants’ positions, as shown on the left of Figure 8. First is the *Perpendicular* arrangement, where people locate themselves on two orthogonal sides near the corner of the desk. The *Across* arrangement is where people are located across the desk, using the desktop as a transactional space, while the *Adjacent* arrangement is where people are located on the same side of the table. When there are more than two people in a social interaction, the formation can be explained by decomposing it into multiple base arrangements.
The desk-relative arrangements do not imply particular formation shapes. For example, a *perpendicular* arrangement does not connote an *L-shape*; people can form *face-to-face* or *V-shape* formations by moving freely around the space while maintaining perpendicular positions. Therefore, we combine these two classifications to explicitly illustrate the socio-spatial behaviours in office spaces.
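Because the two layers are independent, the desk-relative layer can be computed on its own: it reduces to deciding which side of the desk each person is nearest to. A minimal sketch, assuming an axis-aligned rectangular desk:

```python
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

def nearest_side(p, desk):
    """desk = (xmin, ymin, xmax, ymax); return the desk side that the
    point p = (x, y) is closest to."""
    x, y = p
    xmin, ymin, xmax, ymax = desk
    dists = {"west": abs(x - xmin), "east": abs(x - xmax),
             "south": abs(y - ymin), "north": abs(y - ymax)}
    return min(dists, key=dists.get)

def desk_arrangement(p1, p2, desk):
    """Classify a pair as adjacent / across / perpendicular relative to
    the desk, per the definitions above (illustrative sketch)."""
    s1, s2 = nearest_side(p1, desk), nearest_side(p2, desk)
    if s1 == s2:
        return "adjacent"
    if OPPOSITE[s1] == s2:
        return "across"
    return "perpendicular"
```

Groups larger than two can be handled by classifying each pair and decomposing the result into base arrangements, mirroring the decomposition described above.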
5 FINDINGS 1: STATIC SOCIO-SPATIAL PATTERNS IN THE OFFICE
In the following section, we describe observed static socio-spatial patterns in relation to the *type of social interaction activities* (i.e. unmediated interactions & computer-supported interactions), and *the type of office space* (i.e. common space & personal space) as shown in Table 1. For each finding, we further describe the aspects that influence the observed patterns by aligning them with results from the interview.
| (% of total interaction time) | Common Desk Area | Personal Desk Area |
|-------------------------------|------------------|--------------------|
| Unmediated | 66.23% | 42.97% |
| Computer-supported | 33.77% | 57.03% |
| Total Social Interactions | 100% | 100% |
Table 1. Frequency of unmediated and computer-supported social interactions for each desk type. The calculation is based on the total duration of the instances annotated with each combination.
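A duration-weighted frequency of this kind can be computed directly from the exported time-stamped annotations. The sketch below assumes annotations reduced to (label, duration-in-seconds) pairs:

```python
def share_by_duration(intervals):
    """intervals: list of (label, duration_s) pairs. Return each label's
    share of the total annotated duration as a percentage, rounded to
    two decimals (the same weighting used for Table 1)."""
    total = sum(d for _, d in intervals)
    shares = {}
    for label, d in intervals:
        shares[label] = shares.get(label, 0.0) + d
    return {k: round(100.0 * v / total, 2) for k, v in shares.items()}
```

Weighting by duration rather than by instance count keeps one long meeting from being undercounted relative to many brief exchanges.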
5.1 Unmediated Social Interactions
In this study, we regarded *unmediated* social interaction as interaction without the involvement of computing devices. The results showed that 66.23% of social interactions in common spaces and 42.97% in personal spaces were unmediated (Table 1). Compared to computer-supported interactions, *interior elements* were found to have a greater influence on static socio-spatial formation patterns, as shown in Figure 9.
5.1.1 Common Desk Areas. Unlike when people occupied common desks for individual work [2], our study showed that people occupied the areas closest to the door without concern for the *visual exposure* of their laptop displays. In addition, unlike the study that observed standing occupants in vacant public spaces [27], we found that in sitting situations, socio-spatial formations are influenced by *desk size* and the position of surrounding *walls*.
*Influence of desk size.* *Adjacent* arrangements were dominant at M3 (56%) compared to M1 (21.05%) or M2 (37.5%), as shown in Figure 9. The interviews revealed that the *size* of the desk influenced this formation pattern. Participants highlighted that the *depth* of the desk at M1, 1.3 m, was considered too deep for effective discussion, especially when there were only two people. However, *adjacent* arrangements caused ergonomic difficulties during the discussion. Therefore, people moved backwards and oriented their bodies towards each other to form either *face-to-face* (Figure 10-b,c,f) or *V-shapes* (Figure 10-a). The space farther away from the desk was actively occupied.
Fig. 9. The frequency of desk-relative arrangements for each spatial configuration during unmediated social interactions is illustrated as bars. The frequency of formation shapes for each desk-relative arrangement is illustrated as lines underneath the corresponding bars. The frequency was calculated based on the total duration observed.
**Influence of walls.** People at M1 did not initially arrange themselves into *adjacent* setups, even though the desk was long enough to accommodate two people (Table 1). All the *adjacent* arrangements observed at M1 followed computer-mediated interactions. Interviewees mentioned that the *walls* located less than 1 m from the desk made them feel socially uncomfortable sitting next to each other, and positioning themselves at different sides of the desk was found to be the solution to overcome this (Figure 10-g,h). This led to a high frequency of *perpendicular* or *across* arrangements, as shown in Figure 10-g-l.
**Influence of a table between people.** The *across* arrangement was a common formation for all common desk areas (Table 1-yellow), and having a desk between the occupants produced different social behaviours. In contrast to *perpendicular* and *adjacent* arrangements, where occupants tended to stay close, people arranged *across* from each other often kept a larger distance with less frequent eye contact. For example, they positioned themselves diagonally instead of taking directly opposite positions, and rotated their bodies or heads into a *Z-shape* (Figure 10-k). One instance showed people stretching their legs parallel to the desk, forming *T-shapes* or *L-shapes* (Figure 10-j).
### 5.1.2 Personal Desk Areas
All the occupants in the office have their own personal desks, and they casually visit others’ desks for informal discussions. As soon as the *owner* of a personal desk noticed they had a *visitor*, they turned their head and started a conversation. If the discussion continued for more than about one minute, both *owners* and *visitors* changed the position or rotation of their bodies. The *desk arrangement*, *desk partitions*, and *sit/stand status* were found to influence their socio-spatial formations.
**Influence of desk layout.** For all types of personal desk configurations (Figure 4 O1-O6, E), the *adjacent* arrangement with *L-shape*, *V-shape*, and *face-to-face* formations was frequently seen during unmediated social interactions (Figure 10-u-x). Compared to common desk spaces, people maintained a larger social distance and occupied a relatively larger space. The interviews revealed that people regard personal desk areas as private spaces and tried to find a location that does not invade privacy but is still close enough for a conversation. Their overlapping transactional area was usually the space behind the desk, which sometimes generated an upside-down V-shape (Figure 10-w).
Variations appeared when any edge of the personal desk was open. For desks with open sides, about half of the unmediated interactions occurred in *perpendicular* arrangements (Figure 9 O1/O2, O4/O5). The shape varied among *V-shape*, *L-shape*, and *face-to-face*, depending on the length of the discussion. When the back of the desk was approachable, about 45% of visitors formed *across* arrangements during unmediated interactions (Figure 9-O3) with *Z-shape* (Figure 10-α) or *face-to-face* shapes (Figure 10-z). Interviewees mentioned that they preferred these formations from the perspective of both visitor and desk owner, as the owner’s display is not visually exposed.
Fig. 10. Frequently observed socio-spatial formations during unmediated interactions. Examples are categorized based on spatial influence.
**Influence of partitions.** Partitions (30 cm tall) or architectural features near desks were found to contribute to certain social formations. *Visitors* tended to stay near partitions so they could lean on them (Figure 10 m,o,p). One person even grabbed and leaned on the metal truss near a desk. Interviews revealed that interior elements near personal desks that can be leaned on (e.g. partitions or columns) provided additional comfort when in someone’s personal area, regardless of social distance. Interviewees mentioned that these interior items acted like a *security blanket*, allowing comfortable interactions even when space is limited.
**Influence of doors.** When there were doors or walls near personal desks that physically divided the personal space from corridors (Figure 4 E2), *visitors* frequently stood near the wall or door (Figure 10-q) for short-term discussions. Interviews showed that they preferred to stay near these physical divisions for short discussions (e.g. asking for advice or asking simple questions), as standing on the boundary of personal and common space gives the impression that the conversation will not take long.
**Influence of standing or sitting status.** Figure 10-r and s illustrate two example cases, one with a seated *owner* and the other with a standing *owner*, both influencing the social proxemic distance. The distance between people was smaller when both were standing and maintaining similar eye levels, and the distance increased when one of them sat down. In addition, when the *visitor* crouched next to the seated *owner*, both leaned on the desk, closing the distance between them (Figure 10-t).
### 5.2 Computer-supported Social Interactions
*Computer-supported* social interactions are interactions that involve any sort of computing device (e.g. TV, tablet, laptop). The results showed that 35% of computer-supported social interactions occurred in common spaces and 38% in personal spaces (Table 1). The results also showed that socio-spatial formations are highly influenced by the *type of display*, and that, in general, social proxemic distances were closer compared to unmediated interactions (Figure 12).

5.2.1 Common Desk Areas. At common desks, people usually brought their mobile computing devices with them, and often referred to information directly on these devices or projected onto a public display.
**Personal mobile devices.** When sharing information using personal devices in the middle of a discussion, people tended to maintain the formation they were in, and moved around the device instead. In **adjacent** or **perpendicular** arrangements, they moved close to each other, oriented their bodies towards the devices, and formed a *T-shape* (Figure 12 b) or *V-shape* (Figure 12 a). When people were in **across** arrangements, they turned their laptops towards the others (Figure 12 c) and used their displays as a public display.
**Public display.** When people used public displays, they actively rearranged themselves towards the display. The *V-shape* was predominant in all desk setups regardless of the previous formations, as shown in Figure 11 (Figure 12 d,e,f,g). When there were more than two people on the same side of the desk, the occupants seated closer to the display moved back slightly to avoid blocking the sight lines of others, as shown in Figure 12 i,j. Although this disconnected people from the desk, they kept this formation and used their thighs as a temporary table. The *reversed L-shape* formation was additionally observed when one person turned their chair to look at the public display while another person was looking at their own laptop display (Figure 12 f).
5.2.2 Personal Desk Areas. Similar to findings from Koutsolampros [28], we observed multiple computer-mediated social interactions near personal desk areas. These usually involved discussing work in progress, which caused the owner’s display to be involved in the interaction. In most cases, the owners were working while sitting, which meant the display height was not ideal for a standing visitor. Although all the employees had height-adjustable desks with articulating monitor arms, they rarely adjusted the setup. Instead, they rearranged their body positions and orientations towards the monitors, similar to public vertical display scenarios [6] (Figure 12).
**Desk owner’s display.** Displays oriented towards the desk owner encouraged visitors to arrange in an **adjacent** setup regardless of the desk configuration, and the *T-shape* was predominant in most cases (Figure 11). The visitors stayed relatively close to the *owner* and looked at the monitor as illustrated in Figure 12 n, o. Interviewees highlighted that the closer proxemic distance did not bother them as the display provided an external visual focal point. However, when there were partitions on the side of the desk, a larger T-shape was formed as the visitor preferred to lean against the partition. Pointing gestures were repeatedly seen with this formation shape.
The *parallel* shape was occasionally formed as well (Figure 11). Visitors often occupied the space behind the owner’s chair and oriented their bodies towards the display, while the owner looked at and operated their computer (Figure 12 l). This was frequently seen but lasted only for short periods of time. The interviewees highlighted that this formation was helpful for making progress updates quickly and efficiently, because avoiding eye contact indirectly discouraged visitors from interrupting.
On the other hand, for long-lasting computer-supported social interactions, the *V-shape* (Figure 12 r,s) and *L-shape* (Figure 12 p,q) were often used. Both the *visitor* and the *owner* of the desk moved back and oriented towards each other, while continuing to look at the display. The desk owners occupied a space slightly off-centre, and *visitors* moved slightly closer to the display’s centre point. The distance between them was closer than during unmediated social interactions, but the distance between the visitor and the desk was still relatively far. Therefore, when *visitors* wanted to look at the display more carefully, they bent over rather than moving closer, as illustrated in Figure 12 l-a. The interviews revealed that these long-lasting computer-mediated social interactions at personal desk areas were usually *casual project updates* (e.g. scrum meetings) between collaborators.
**Visitor’s mobile devices.** Visitors often brought and worked on their own mobile computing devices when they needed to work separately in the middle of a collaboration. They found separate workspaces (non-overlapping transactional spaces) near the personal desk area. For example, we observed one person bringing their chair, arranging in a *reversed L-shape*, and working with their laptop on their leg (Figure 12-u). In another case, a visitor moved to a nearby public table behind the desk owner and worked individually in a *reversed Z-shape* (Figure 12-v). Interviews highlighted that visitors preferred a separate work area because working long-term in another person’s personal space made them uncomfortable and made it difficult to concentrate.
Fig. 12. Frequently observed socio-spatial formations during computer-supported interactions. Examples are categorized based on the type of display used.
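The dyadic formation shapes catalogued above could, in principle, be labelled automatically from tracked body orientations. The following is a minimal illustrative sketch, not part of our pipeline; the `classify_dyad` helper and its angle thresholds are hypothetical, and precisely distinguishing T-, L-, and V-shapes would additionally require positions relative to the desk.

```python
import math

def classify_dyad(theta_a, theta_b):
    """Coarsely classify a two-person formation from body orientations.

    theta_a, theta_b: facing directions in radians (world frame).
    Returns a label based on the angle between the two facing vectors.
    """
    # Smallest angle between the two facing directions, in [0, pi].
    diff = abs((theta_a - theta_b + math.pi) % (2 * math.pi) - math.pi)
    if diff > 3 * math.pi / 4:   # roughly opposed -> facing each other
        return "face-to-face"
    if diff > math.pi / 4:       # roughly perpendicular -> L- or V-shape
        return "L/V-shape"
    return "parallel"            # roughly aligned -> side-by-side
```

A full classifier would also consider interpersonal distance and desk geometry, since, for example, the *T-shape* and *V-shape* differ mainly in where the occupants stand relative to the shared device.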
## 6 FINDINGS 2: FORMATION TRANSITIONS FOR SOCIAL COMFORT
Inspired by the work of Tong et al. [57], we also analyzed how socio-spatial formations change over time. We found four types of formation changes that occurred to keep occupants socially comfortable within static environmental constraints: 1) body position adjustments for comfortable social proxemic distances, 2) formation shape adjustments for ergonomic comfort, 3) minor movements for stimulation, and 4) social territory reconfiguration for activity transitions.
### 6.1 Body Position Adjustments for Comfortable Social Proxemics
When face-to-face interactions lasted longer than about two minutes, occupants slowly adjusted their positions to reach an ideal proxemic distance. For example, people subtly moved backwards over time and occupied a larger space, as illustrated in Figure 13-a,b,c. In some cases, when occupants were arranged across the desk, they changed to an adjacent setup to close the distance between them (Figure 13-d,e). This was frequently observed in M2 and M3, where the depth of the table was greater than 1.2 m, and the vacant area behind the desk became occupied.
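The gradual backward drift described above could be quantified directly from tracked positions by comparing interpersonal distance at the start and end of an interaction. This is a hypothetical sketch; the `distance_drift` helper and the 30-frame window are illustrative assumptions, not part of our analysis code.

```python
import math

def distance_drift(track_a, track_b, window=30):
    """Mean change in interpersonal distance between the first and last
    `window` frames of two position tracks [(x, y), ...].

    A positive result means the pair drifted apart (e.g. easing back to
    a comfortable proxemic distance); negative means they closed in.
    """
    dist = [math.hypot(ax - bx, ay - by)
            for (ax, ay), (bx, by) in zip(track_a, track_b)]
    n = min(window, len(dist))
    head = sum(dist[:n]) / n   # mean distance at the start
    tail = sum(dist[-n:]) / n  # mean distance at the end
    return tail - head
```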
### 6.2 Formation Shape Adjustments for Ergonomic Comfort
Other socio-spatial transitions changed formation shapes to support comfortable, ergonomic conversations. This was frequent when unmediated social interactions lasted longer than two minutes: people changed their chair position and body orientation to face each other and minimize neck or back strain. For example, people arranged at common desk areas rolled away from the desk to form a *V-shape* or *face-to-face* shape, as shown in Figure 13-g,h,i; the social territory expanded to the area farther behind the desk. Similar transitions occurred when computer-supported social interactions lasted long: people re-arranged themselves from *parallel* to *L-shape* or *V-shape* to create a shared transactional space in which they could comfortably face each other while maintaining a reasonable viewing distance from the screen (Figure 13-j,k).
### 6.3 Minor Movements for Stimulation
At personal desk areas, socio-spatial transitions were more frequent because *visitors* were standing. *Visitors* moved around the *owner* while discussing, keeping similar distances (Figure 13-l,m). Interviews revealed that people did not notice that they moved during the conversation, but afterwards indicated that the flexibility to move slightly around the space made the discussion more comfortable. In addition, when conversations lasted longer, visitors unconsciously moved towards nearby interior elements, such as walls or furniture.
### 6.4 Social Territory Reconfiguration for Activity Transitions
Given the highly complicated nature of collaborative activities in the office, we observed that social territories changed when displays were included in or excluded from the conversation. When occupants started to use public displays, they oriented towards the display, which led to shape changes from *circular* to *semi-circular* (Figure 13-n,o,p). Some people even changed to an *adjacent* arrangement, as shown in Figure 13-q. The social distance between the occupants reduced, and their social territory moved farther from the desk. Interviewees highlighted that when looking at a display together, they felt less uncomfortable being close to others.
Similar changes were also observed at personal desks. When the display was removed from the conversation, the occupants changed into a *face-to-face* formation with a larger social distance; the social territory therefore expanded beyond the desk area, as shown in Figure 13-r,s,t. This is the opposite of the public display case, where people tended to move away from the desk when the display was included.
## 7 DISCUSSION
This work is a preliminary investigation to gain an overview of socio-spatial patterns in office layouts. Different from early work on proxemic theories [22] with *static* distances, our study reveals that socio-spatial formations are a *dynamic* concept that can be influenced by space size, desk size, interior elements (e.g. partitions, columns, walls, and doors), sitting or standing status, and visual focus. In this section, we 1) further discuss this dynamic concept of socio-spatial formations and how it can inform future office designs; we then discuss opportunities for 2) future office planning tools and 3) interactive interiors that can enhance social comfort in the office; finally, we 4) reflect on our research methods.
### 7.1 Dynamics of Socio-Spatial Formations to Inform Office Space Design
Similar to the work by Marshall [37], we revealed that *furniture* has a large influence on preferred socio-spatial formations. Building upon the works of Lawson [33], who argued that the seating arrangements around a desk are relevant to different types of collaborative interactions, we propose that the size and configuration of a desk are higher-level factors that contribute to social arrangements. For example, Lawson showed that *adjacent* arrangements were predominant for collaboration scenarios, but we found that there is a threshold to the size of a desk at which people feel more socially comfortable occupying a different edge of the desk. For meeting spaces, designers first need to determine the desired social arrangements for a certain space, then choose an appropriately shaped desk to enhance social comfort. For instance, when the goal is to support
adjacent arrangements at a small wall-mounted table, designers can add additional edges to provide non-collinear edges for occupants.
Results also showed that interior elements could induce different socio-spatial formations. For instance, when the surrounding wall was too close to the desk, people reported that they felt uncomfortable sitting close to others. Also, nearby desk partitions, walls, or columns could provide comfortable areas by encouraging people to stand around them. Therefore, by looking at these elements with a social lens and using those as building blocks, interior designers can improve social comfort. Configuring interior elements can compensate for the constraints from architectural limitations as well.
Moreover, the intervention of displays in the conversation influenced socio-spatial formations. Different from Hall’s work [22], our study showed that the acceptable proxemic distance becomes smaller during computer-mediated social interactions. Besides visual focus, the ownership of the display was found to be an influential factor. At personal desks, the *parallel* formation between the roles of driver and observer was witnessed, similar to vertical public display setups [6], but without the role changes. Interviews highlighted that visitors tried to keep a farther distance from owners’ devices to respect their personal space. Unlike meeting rooms that include a large table and TV, designers could also consider multiple small kiosk zones that arrange people in close formations for work-update discussions, taking advantage of both shared visual focus and public displays.
Lastly, unlike social interactions in standing situations [22, 27], the social proxemic distance becomes larger in interactions with a mixture of standing and sitting occupants. Several aspects caused this variation. The eye level difference between people standing and sitting causes ergonomic issues when they are located too close together. Also, the chair and seated body posture enlarges the occupant’s personal space, which encourages others to stay farther away. Interior designers could use this knowledge when selecting and arranging furniture in offices. For limited spaces, designers can maximize social comfort by installing bar tables or intentionally placing guest chairs near personal desks.
### 7.2 Space Planning Tools to Support Social Comfort
There have been several systems to evaluate the physical characteristics of a space to support the design process. Most looked at the space from a topological perspective, and the social aspects considered in the system were mostly focused on path simulation [39, 40] or visual privacy [10]. However, our study revealed that social comfort in office spaces is not only a matter of visual privacy or walking distances. Based on our findings, we suggest several opportunities for future space analysis and simulation tools.
First, a future system could simulate which areas will be frequently occupied during social interactions in a given design, from a micro-perspective. Our results revealed that this would play out differently based on the configuration of walls, desks, and partitions, and showed the potential for evaluating the level of spatial support for social interactions. Moreover, the simulation could be run for each type of social interaction (e.g. getting quick advice, long-term unmediated discussions, computer-supported progress updates), which would reveal different space-usage patterns. Inspired by works that include robots in formation patterns [25, 38], the system could simulate computer-mediated social interactions by regarding displays as a type of human and including them in the formation shape. The relationship between the display and the human can be defined in the system to distinguish the behavior of visitor and desk owner. Also, the system needs to be aware of the contextual information of interior elements beyond just geometry, for example, whether a desk is for public or personal use: the threshold distance is relatively larger for a personal desk setup, which generates different socio-spatial patterns, such as an *L-shape* in adjacent arrangements with the occupied space farther from the desk.
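As a sketch of how a display could be regarded as a formation member in such a simulation, the common o-space estimate for an F-formation (each member’s transactional segment approximated by a point projected ahead of them, then averaged) extends naturally when the display is included as just another member. This is an illustrative sketch under our own assumptions, not an implemented tool; `o_space_centre` and the 0.6 m stride are hypothetical.

```python
import math

def o_space_centre(members, stride=0.6):
    """Estimate the o-space centre of a formation.

    members: list of (x, y, facing_angle) tuples. A display is included
    as just another member, with its facing direction normal to the
    screen. Each member's transactional segment is approximated by a
    point `stride` metres ahead of them; the o-space centre is the mean
    of those points.
    """
    xs = [x + stride * math.cos(a) for x, y, a in members]
    ys = [y + stride * math.sin(a) for x, y, a in members]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

A planning tool could then test whether candidate desk and display placements leave the estimated o-space clear of walls and partitions.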
### 7.3 Interactive Interior Elements and Furniture for Social Comfort
Social interactions at personal desk areas are common in the office environment [48]. Informal desk visits were found to increase work productivity by letting people share work or progress quickly in a familiar area [3, 11]. However, our study revealed that the experience of social interactions at one’s personal desk depends on a broad range of aspects. First, people visiting another person’s desk exhibited different socio-spatial formations, as they aimed to maintain a certain distance so as not to invade personal space. Another aspect stemmed from the different sitting and standing statuses of the desk owner and the visitor, namely ergonomic discomfort due to the display or desk height being configured to the desk owner’s preferences (Figure 12.1-b). We also found that the reason people did not adjust the height of their desks or monitors was not that they did not want to adjust their work environment, but that they did not want to interrupt and break the flow of a social interaction.
Robotic workstations and work environments have so far focused only on single-user interactions that automatically respond to the owner’s anthropometric measurements [63] or habits [8]. The next generation of interactive furniture or workstations should also support informal social interactions. The interaction techniques need to help people smoothly transition their perception of a display from private to public [6, 26].
### 7.4 F-formations to Evaluate Space Occupancy
We built upon Kendon’s F-formation framework [27] to understand socio-spatial behaviours and office occupancy patterns. Our study showed that the F-formation framework can provide additional information about social contexts compared to prior occupant evaluation studies in HBI. Unlike the majority of existing works that collected individual occupants’ presence data using localization methods and created heat-maps from individual position data [2, 59, 62], our approach provided deeper insight into how space is occupied by groups of people in specific contexts. For example, we were able to find different socio-spatial formations for how the visitor and owner of a personal desk occupy space during different types of collaborative activities. Furthermore, the transitions between socio-spatial formations provided an overview of the dynamics in space usage, which can help space designers be aware of users’ long-term social comfort.
### 7.5 Digital Ethnography with Anonymized Skeletons
Our digital ethnography study is based on video frames of occupants’ skeletons generated using computer vision techniques. We captured and distilled occupants’ body movements through skeletonized representations detailed enough to infer and observe socio-spatial formations. Anonymized skeletons allowed privacy-preserving analysis, which encouraged all the occupants to participate in our study with fewer concerns. Our method also speeds up the social interaction classification process by automatically counting the number of occupants (skeletons) in the scene, and makes it easy to deal with large-scale data.
However, there were several technical limitations. First, our computer vision pipeline did not detect non-fixed items, such as chairs or laptops, which play an important role in socio-spatial patterns in the office environment; as a result, we occasionally needed to consult the masked raw video. With further advances in computer vision, the relevant items could be detected, located, and tracked reliably while maintaining anonymity. Moreover, real-life situations cause occlusions that degrade the quality of human body detection; we mitigated this by installing multiple cameras for each area. Finally, some coats, hangers, and truss structures generated false positives in human detection, but these were easily excluded since they appeared as static skeletons.
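The last filtering step, excluding static skeletons such as coats or truss structures, amounts to a simple movement test over tracked positions. The following is a minimal sketch; the `static_tracks` helper and the 0.05 threshold are illustrative assumptions, not our actual pipeline code.

```python
def static_tracks(tracks, eps=0.05):
    """Flag skeleton tracks that never move, e.g. coats or truss
    structures misdetected as people.

    tracks: {track_id: [(x, y), ...]} centroid positions per frame.
    A track is 'static' when every position stays within `eps` of the
    first observed position (units follow the tracking coordinates).
    """
    static = set()
    for tid, pts in tracks.items():
        x0, y0 = pts[0]
        if all(abs(x - x0) <= eps and abs(y - y0) <= eps for x, y in pts):
            static.add(tid)
    return static
```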
Despite these limitations, we believe this method can support future ethnographic studies in three domains. First, it can be applied to public ethnographic studies, which have historically been carried out by co-located human observers due to privacy concerns [6]. Second, it can be used to understand socio-spatial characteristics in sensitive contexts, such as hospitals; for example, it could contribute to the work of Thomsen et al. [56], who studied social formations in sensitive medical consultations. Third, compared to other pervasive sensors that support indoor localization [2, 62], our method can collect embodied behaviours, such as body language, spatial formations, and poses, without devices that are both cumbersome for the individual and costly for large-scale studies.
## 8 STUDY LIMITATIONS & FUTURE WORK
There are several limitations that call for further investigation. First, the large amount of variability in real work environments made it challenging to pinpoint the exact influences on observed socio-spatial formations. In this study, we focused on the frequency of each formation pattern, combined with interview results, across a large data set. A controlled study is a likely next step for a more precise investigation, and collecting the level of social comfort for each formation could provide richer knowledge about social comfort in space. Second, sound and lighting, which potentially influence social interactions, were ignored in this study: in some instances occupants wore headphones before the visitor approached, and in others people held conversations in a darker environment. This could be investigated in the future. Third, our findings from common spaces and personal work spaces cannot directly inform offices with hot-desking; further investigation is needed to explore the influence of common desks used with personal laptops for individual work. Fourth, due to the ethical research setup, we could not accurately collect the content (e.g. project-related discussion, non-work chat) or type (e.g. scrum meeting, asking for advice) of social interactions. Interviews provided a glimpse into these aspects, as interviewees recalled or guessed who the skeletons were by identifying unique body language or poses. In the future, we could ask occupants to manually log the contexts, as in a diary study; context recognition from body language or poses using advanced machine-learning techniques could also automatically add a layer of information. Moreover, we did not consider cultural aspects while analyzing socio-spatial formations, despite their importance [13, 22].
Although we believe the amount of data collected in diverse locations from multicultural employees compensated for this limitation, we could further investigate demographic information from the skeletons [1]. Finally, we briefly observed that the path people took while approaching others’ desks was not always the most efficient one; instead, they took the path that allowed the desk owners to see them approaching in their peripheral vision. However, we could not accurately extract the paths taken just by observing videos from perspective views. In the future, reconstructing the 2D skeletons as 3D models in the architectural space could be a potential next step to obtain accurate movements from the occupants.
## 9 CONCLUSION
This is an exploratory study investigating the influence of spatial characteristics on socio-spatial formations. We captured a rich array of social arrangements exhibited in an office environment over two weeks, together with the corresponding occupants’ space perceptions. We analyzed the collected data using Skeletonographer, a tool we implemented to generate skeletonized representations and to play back and annotate skeleton data. Our findings revealed that desk sizes, desk configuration, walls, partitions, and sitting and standing status contribute to different preferred socio-spatial formations during unmediated discussions, while the type and ownership of displays have more significant influence during computer-mediated social interactions. We also found that socio-spatial formations
change over time to maintain ergonomic and social comfort in long-term interactions. Based on these findings, we propose that future office space designers could plan desired socio-spatial formations at a given space while arranging interior elements to maximize social comfort. We also suggest that this knowledge can inform opportunities for space planning and analysis tools as well as interactive furniture in the future.
## REFERENCES
[1] D. Adjeroh, D. Cao, M. Piccirilli, and A. Ross. 2010. Predictability and correlation in human metrology. In *2010 IEEE International Workshop on Information Forensics and Security*. 1–6. https://doi.org/10.1109/WIFS.2010.5711470
[2] Hamed S. Alavi, Himanshu Verma, Jakub Mlynar, and Denis Lalanne. 2018. The Hide and Seek of Workspace: Towards Human-Centric Sustainable Architecture. In *Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18)*. ACM, New York, NY, USA, Article 75, 12 pages. https://doi.org/10.1145/3173574.3173649
[3] Thomas J. Allen and Peter G. Gerstberger. 1973. A Field Experiment to Improve Communications in a Product Engineering Department: The Nonterritorial Office. *Human Factors* 15, 5 (1973), 487–498. https://doi.org/10.1177/001872087301500505
[4] Irwin Altman. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding. (1975).
[5] Louis Atallah and Guang-Zhong Yang. 2009. The use of pervasive sensing for behaviour profiling – a survey. *Pervasive and Mobile Computing* 5, 5 (2009), 447–464. https://doi.org/10.1016/j.pmcj.2009.06.009
[6] Alec Azad, Jaime Ruiz, Daniel Vogel, Mark Hancock, and Edward Lank. 2012. Territoriality and Behaviour on and Around Large Vertical Publicly-shared Displays. In *Proceedings of the Designing Interactive Systems Conference (DIS ’12)*. ACM, New York, NY, USA, 468–477. https://doi.org/10.1145/2317956.2318025
[7] Alan Backhouse and Peter Drew. 1992. The design implications of social interaction in a workplace setting. *Environment and Planning B: Planning and Design* 19, 5 (1992), 573–584.
[8] Gilles Bailly, Sidharth Sahdev, Sylvain Malacria, and Thomas Pietrzak. 2016. LivingDesktop: Augmenting Desktop Workstation with Actuated Devices. In *Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16)*. ACM, New York, NY, USA, 5298–5310. https://doi.org/10.1145/2858036.2858208
[9] Till Ballendat, Nicolai Marquardt, and Saul Greenberg. 2010. Proxemic Interaction: Designing for a Proximity and Orientation-Aware Environment. *ACM International Conference on Interactive Tabletops and Surfaces, ITS 2010*, 121–130. https://doi.org/10.1145/1936652.1936676
[10] Mateus Paulo Beck. 2012. Visibility and Exposure in Workspaces. In *Proceedings of the 9th International Space Syntax Symposium*.
[11] Aoife Brennan, Jasdeep S. Chugh, and Theresa Kline. 2002. Traditional versus Open Office Design: A Longitudinal Field Study. *Environment and Behavior* 34, 3 (2002), 279–299. https://doi.org/10.1177/0013916502034003001
[12] Malcolm J. Brookes and Archie Kaplan. 1972. The Office Environment: Space Planning and Affective Behavior. *Human Factors* 14, 5 (1972), 373–391. https://doi.org/10.1177/001872087201400502
[13] Chloe Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, and Cecilia Mascolo. 2014. Tracking serendipitous interactions: how individual cultures shape the office. In *CSCW*.
[14] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2017. Realtime multi-person 2d pose estimation using part affinity fields. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*. IEEE, 1302–1310.
[15] Maria Danninger, Roel Vertegaal, Daniel P. Siewiorek, and Aadil Mamuju. 2005. Using Social Geometry to Manage Interruptions and Co-worker Attention in Office Environments. In *Proceedings of Graphics Interface 2005 (GI ’05)*. Canadian Human-Computer Communications Society, School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, 211–218. http://dl.acm.org/citation.cfm?id=1089508.1089543
[16] Francis Duffy and Kenneth Powell. 1997. The new office. (1997).
[17] Anne-Laure Fayard and John Weeks. 2007. Photocopiers and Water-coolers: The Affordances of Informal Interaction. *Organization Studies* 28, 5 (2007), 605–634. https://doi.org/10.1177/0170840606068310
[18] Thomas F. Gieryn. 2002. What Buildings Do. *Theory and Society* 31, 1 (2002), 35–74.
[19] Saul Greenberg, Nicolai Marquardt, Till Ballendat, Rob Diaz-Marino, and Miaosen Wang. 2011. Proxemic Interactions: The New Ubicomp? *Interactions* 18, 1 (Jan. 2011), 42–50. https://doi.org/10.1145/1897239.1897250
[20] Jens Emil Grønbæk, Henrik Korsgaard, Marianne Graves Petersen, Morten Henriksen Birk, and Peter Gall Krogh. 2017. Proxemic Transitions: Designing Shape-Changing Furniture for Informal Meetings. In *Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17)*. ACM, New York, NY, USA, 7029–7041. https://doi.org/10.1145/3025453.3025487
[21] G. Guest, K.M. MacQueen, and E.E. Namey. [n. d.]. *Applied Thematic Analysis*.
[22] E.T. Hall and Copyright Paperback Collection (Library of Congress). [n. d.]. *The Hidden Dimension*.
[23] Barry P. Haynes. 2008. The impact of office layout on productivity. *Journal of Facilities Management* 6, 3 (2008), 189–201. https://doi.org/10.1108/14725960810885961
[24] Bill Hillier and Julienne Hanson. 1984. *The Social Logic of Space*. Cambridge University Press. https://doi.org/10.1017/CBO9780511597237
[25] H. Hüttenrauch, K. S. Eklundh, A. Green, and E. A. Topp. 2006. Investigating Spatial Relationships in Human-Robot Interaction. In *2006 IEEE/RSJ International Conference on Intelligent Robots and Systems*. 5052–5059. https://doi.org/10.1109/IROS.2006.282535
[26] Junko Ichino, Kazuo Isoda, Tetsuya Ueda, and Reimi Satoh. 2016. Effects of the Display Angle on Social Behaviors of the People Around the Display: A Field Study at a Museum. In *Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW ’16)*. ACM, New York, NY, USA, 26–37. https://doi.org/10.1145/2818048.2819938
[27] Adam Kendon. 1990. *Conducting Interaction: Patterns of Behavior in Focused Encounters*. Cambridge University Press.
[28] Petros Koutsolampros, K. Sailer, R. Haslem, M. Austwick, and Tasos Varoudis. 2017. Big Data and Workplace Micro-Behaviours: A closer inspection of the social behaviour of eating and interacting.
[29] Peter Gall Krogh, Marianne Graves Petersen, Kenton O’Hara, and Jens Emil Groenbaek. 2017. Sensitizing Concepts for Socio-spatial Literacy in HCI. In *Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17)*. ACM, New York, NY, USA, 6449–6460. https://doi.org/10.1145/3025453.3025756
[30] H. Kuzuoka, Y. Suzuki, J. Yamashita, and K. Yamazaki. 2010. Reconfiguring spatial formation arrangement by robot body orientation. In *2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)*. 285–292. https://doi.org/10.1109/HRI.2010.5451182
[31] David L. Masten and Tim M.P. Plowman. 2010. Digital Ethnography: The Next Wave in Understanding the Consumer Experience. *Design Management Journal (Former Series)* 14 (06 2010), 75 – 81. https://doi.org/10.1111/j.1948-7169.2005.tb00044.x
[32] B. Lawson. [n. d.]. *How Designers Think: The Design Process Demystified*.
[33] B. Lawson. [n. d.]. *The Language of Space*.
[34] Bokyung Lee, Sindy Wu, Maria Jose Reyes, and Daniel Saakes. 2019. The Effects of Interruption Timings on Autonomous Height-Adjustable Desks That Respond to Task Changes. In *Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19)*. ACM, New York, NY, USA, Article 328, 10 pages. https://doi.org/10.1145/3290605.3300558
[35] Paul Luff and Christian Heath. 1998. Mobility in Collaboration. In *Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work (CSCW ’98)*. ACM, New York, NY, USA, 305–314. https://doi.org/10.1145/289444.289505
[36] Nicolai Marquardt, Ken Hinckley, and Saul Greenberg. 2012. Cross-device Interaction via Micro-mobility and F-formations. In *Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST ’12)*. ACM, New York, NY, USA, 13–22. https://doi.org/10.1145/2380116.2380121
[37] Paul Marshall, Yvonne Rogers, and Nadia Pantidi. 2011. Using F-formations to Analyse Spatial Patterns of Interaction in Physical Environments. In *Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW ’11)*. ACM, New York, NY, USA, 445–454. https://doi.org/10.1145/1958824.1958893
[38] Takahiro Matsumoto, Mitsuhiro Goto, Ryo Ishii, Tomoki Watanabe, Tomohiro Yamada, and Michita Imai. 2018. Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective. In *Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18)*. ACM, New York, NY, USA, 270–278. https://doi.org/10.1145/3171221.3171265
[39] Danil Nagy, Damon Lau, John Locke, James Stoddart, Lorenzo Villaggi, Ray Wang, Dale Zhao, and David Benjamin. 2017. Project Discover: An Application of Generative Design for Architectural Space Planning. https://doi.org/10.22360/simaud.2017.simaud.007
[40] Danil Nagy, Lorenzo Villaggi, James Stoddart, and David Benjamin. 2017. The Buzz Metric: A Graph-based Method for Quantifying Productive Congestion in Generative Space Planning for Architecture. *Technology/Architecture + Design* 1, 2 (2017), 186–195. https://doi.org/10.1080/24751448.2017.1354617
[41] Office of the Privacy Commissioner of Canada. 2019. Guidance on Covert Video Surveillance in the Private Sector. (Jan. 2019). Retrieved 2019-01-05 from https://www.priv.gc.ca/en/privacy-topics/surveillance-and-monitoring/gd_cvs_2009052/
[42] Takeshi Oozu, Aki Yamada, Yuki Enzaki, and Hiroo Iwata. 2017. Escaping Chair: Furniture-Shaped Device Art. 403–407. https://doi.org/10.1145/3024969.3025064
[43] Jeni Paay, Jesper Kjeldskov, and Mikael B. Skov. 2015. Connecting in the Kitchen: An Empirical Study of Physical Interactions While Cooking Together at Home. In *Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’15)*. ACM, New York, NY, USA, 276–287. https://doi.org/10.1145/2675133.2675194
[44] Mahbub Rashid, Kevin Kampschroer, Jean Wineman, and Craig Zimring. 2006. Spatial Layout and Face-to-Face Interaction in Offices—A Study of the Mechanisms of Spatial Effects on Face-to-Face Interaction. *Environment and Planning B: Planning and Design* 33, 6 (2006), 825–844. https://doi.org/10.1068/b31123
[45] Kerstin Sailer, Andrew Budgen, and Nathan Lonsdale. 2008. Evidence-Based Design: Theoretical and Practical Reflections of an Emerging Approach in Office Architecture.
[46] Kerstin Sailer, Petros Koutsolampros, Martin Zaltz Austwick, Tasos Varoudis, and Andy Hudson-Smith. 2016. Measuring interaction in workplaces. In *Architecture and Interaction*. Springer, 137–161.
[47] Kerstin Sailer and Alan Penn. 2009. Spatiality and transpatiality in workplace environments. Royal Institute of Technology (KTH).
[48] K Sailer, R Pomeroy, and R Haslem. 2015. Data-driven design—Using data on human behaviour and spatial configuration to inform better workplace design. *Corporate Real Estate Journal* 4 (02 2015).
[49] L Scott-Webber. [n.d.] In *Sync: Environmental Behavior and the Design of Learning Spaces*.
[50] Ben Rydal Shapiro and Rogers Hall. 2018. Personal Curation in a Museum. *Proc. ACM Hum.-Comput. Interact.* 2, CSCW, Article 158 (Nov. 2018), 22 pages. https://doi.org/10.1145/3274427
[51] Robert Sommer. [n.d.] *Personal space: the behavioral basis of design*
[52] Robert Sommer. 1967. Small group ecology. *Psychological bulletin* 67, 2 (1967), 145.
[53] Yuichiro Takeuchi. 2010. Weightless Walls and the Future Office. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10)*. ACM, New York, NY, USA, 619–628. https://doi.org/10.1145/1753326.1753419
[54] Yuichiro Takeuchi and Jean You. 2014. Whirlstools: kinetic furniture with adaptive affordance. In *CHI Extended Abstracts*
[55] Maurice Ten Koppel, Gilles Bailly, Jörg Müller, and Robert Walter. 2012. Chained Displays: Configurations of Public Displays Can Be Used to Influence Actor-, Audience-, and Passer-by Behavior. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12)*. ACM, New York, NY, USA, 317–326. https://doi.org/10.1145/2207676.2207720
[56] Josephine Raun Thomsen, Peter Gall Krogh, Jacob Albrek Schnedler, and Hanne Linnet. 2018. Interactive Interior and Proxemics Thresholds: Empowering Participants in Sensitive Conversations. In *Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18)*. ACM, New York, NY, USA, Article 68, 12 pages. https://doi.org/10.1145/3173574.3173642
[57] Lili Tong, Audrey Serna, Simon Pageaud, Sébastien George, and Aurélien Tabard. 2016. It’s Not How You Stand, It’s How You Move: F-formations and Collaboration Dynamics in a Mobile Learning Game. In *Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’16)*. ACM, New York, NY, USA, 318–329. https://doi.org/10.1145/2935334.2935343
[58] Alasdair Turner, Maria Doxa, David O’Sullivan, and Alan Penn. 2001. From Isovists to Visibility Graphs: A Methodology for the Analysis of Architectural Space. *Environment and Planning B: Planning and Design* 28, 1 (2001), 103–121. https://doi.org/10.1068/b2684 arXiv:https://doi.org/10.1068/b2684
[59] Himanshu Verma, Hamed S. Alavi, and Denis Lalanne. 2017. Studying Space Use: Bringing HCI Tools to Architectural Projects. In *Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17)*. ACM, New York, NY, USA, 3856–3866. https://doi.org/10.1145/3025453.3026055
[60] Luke Vink, Viraj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp Schoesslerr, Amit Zoran, and Hiroshi Ishii. 2015. TRANSFORM As Adaptive and Dynamic Furniture. In *Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15)*. ACM, New York, NY, USA, 183–183. https://doi.org/10.1145/2702613.2732494
[61] Daniel Vogel and Ravin Balakrishnan. 2004. Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. In *Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST ’04)*. ACM, New York, NY, USA, 137–146. https://doi.org/10.1145/1029632.1029656
[62] Man Williams, Jane Burry, Asha Rao, and Nathan Williams. 2015. A System for Tracking and Visualizing Social Interactions in a Collaborative Work Environment. In *Proceedings of the Symposium on Simulation for Architecture & Urban Design (SimAUD ’15)*. Society for Computer Simulation International, San Diego, CA, USA, 1–4. http://dl.acm.org/citation.cfm?id=2873021.2873022
[63] Yu-Chian Wu, Te-Yen Wu, Paul Taele, Bryan Wang, Jun-Yo Liu, Pin-sung Ku, Po-En Lai, and Mike Y. Chen. 2018. ActiveErgo: Automatic and Personalized Ergonomics Using Self-actuating Furniture. In *Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18)*. ACM, New York, NY, USA, Article 558, 8 pages. https://doi.org/10.1145/3173574.3174132
Received April 2019; revised June 2019; accepted August 2019 |
New ICT infrastructure and reference architecture to support Operations in future PI Logistics NETworks
D2.2 PI Reference Architecture – Final Version
Document Summary Information
| Grant Agreement No | 769119 | Acronym | ICONET |
|--------------------|--------|---------|--------|
| Full Title | New ICT infrastructure and reference architecture to support Operations in future PI Logistics NETworks |
| Start Date | 01/09/2018 | Duration | 30 months |
| Project URL | www.iconetproject.eu |
| Deliverable | D2.2 PI Reference Architecture Final |
| Work Package | WP2 Cloud-based PI Control and Management Platform |
| Contractual due date | 31/07/2020 | Actual submission date | 31/07/2020 |
| Nature | R | Dissemination Level | Public |
| Lead Beneficiary | CLMS |
| Responsible Author | Orfeas Panagou |
| Contributions from | IBM, INV, VLTN, ILS, CNIT, ELU, ESC, NGS, PGBS, SON, PoA, SB |
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Grant Agreement No 769119.
Disclaimer
The content of the publication herein is the sole responsibility of the publishers and it does not necessarily represent the views expressed by the European Commission or its services.
While the information contained in the documents is believed to be accurate, the authors(s) or any other participant in the ICONET consortium make no warranty of any kind with regard to this material including, but not limited to the implied warranties of merchantability and fitness for a particular purpose.
Neither the ICONET Consortium nor any of its members, their officers, employees or agents shall be responsible or liable in negligence or otherwise howsoever in respect of any inaccuracy or omission herein.
Without derogating from the generality of the foregoing neither the ICONET Consortium nor any of its members, their officers, employees or agents shall be liable for any direct or indirect or consequential loss or damage caused by or arising from any information advice or inaccuracy or omission herein.
Copyright message
© ICONET Consortium, 2018-2020. This deliverable contains original unpublished work except where clearly indicated otherwise. Acknowledgement of previously published material and of the work of others has been made through appropriate citation, quotation or both. Reproduction is authorised provided the source is acknowledged.
# Table of Contents
1 Executive Summary ................................................................. 8
2 Introduction ........................................................................... 9
2.1 Deliverable Overview and Report Structure .......................... 9
3 ICONET Conceptual Layers ..................................................... 10
3.1 OLI & NOLI layers ............................................................ 10
3.1.1 Physical Layer .......................................................... 11
3.1.2 The Link Layer .......................................................... 11
3.1.3 The Network Layer .................................................... 11
3.1.4 The Routing Layer ...................................................... 11
3.1.5 The Shipping Layer .................................................... 12
3.1.6 The Encapsulation Layer ............................................ 12
3.1.7 The Logistics Web Layer ............................................ 12
3.2 Resulting ICONET services ............................................... 13
4 Service Requirements & Implementations ............................... 14
4.1 Simulation service specification ....................................... 14
4.2 Optimisation service specification .................................... 15
4.3 Shipping, Encapsulation, Routing, and Networking service specification .......................... 16
4.3.1 Shipping Service ...................................................... 16
4.3.2 Encapsulation Service .............................................. 18
4.3.3 Routing Service ....................................................... 19
4.3.4 Networking Service .................................................. 20
5 Traversing a PI-network ....................................................... 23
5.1 Scenario Parameters ...................................................... 23
5.2 Scenario Goals .............................................................. 23
5.3 Key Steps ...................................................................... 23
5.3.1 Consolidate Order information & constraints .................. 23
5.3.2 Transportation ......................................................... 26
5.3.3 Arriving at a hub ...................................................... 26
5.3.4 Arriving at the final destination .................................. 27
5.4 Conclusions and impacts to the architecture ....................... 27
5.4.1 Technical requirements ................................................................. 27
5.4.2 Decisions .................................................................................. 27
5.4.3 Events ....................................................................................... 28
5.4.4 Data requirements ..................................................................... 29
5.4.5 Integration with existing/legacy systems using PI Data Adapters .... 32
5.4.6 IoT component ........................................................................... 38
6 Physical Internet Decentralized Reference Architecture ...................... 40
6.1 Background of previous work .................................................... 40
6.2 Current version of Reference Architecture .................................. 42
6.2.1 Design Principles ................................................................. 42
6.2.2 Living Lab Considerations .................................................. 44
6.2.3 Final PI Architecture Blueprint .......................................... 56
7 ICONET Ontology ............................................................................. 63
8 Conclusions ..................................................................................... 67
9 References ....................................................................................... 68
List of Figures
Figure 1 Simulation service requirements ................................................ 14
Figure 2 Flow diagram for simulation service ........................................... 14
Figure 3 Optimisation Service in Context ............................................... 15
Figure 4 Optimisation service specification ............................................. 16
Figure 5 Inputs, Outputs & Dependencies of Shipping service ............... 17
Figure 6 Example of Shipping state management .................................... 17
Figure 7 Inputs, Outputs & Dependencies of Encapsulation service ........ 18
Figure 8 Routing Service Integration ...................................................... 19
Figure 9 Interactions of Networking Service ......................................... 21
Figure 10 Shipment requirements .......................................................... 24
Figure 11 Identify all paths ................................................................. 25
Figure 12 Selected path ....................................................................... 26
Figure 13 Initial events of shipment ...................................................... 28
Figure 14 PI-container data specifications ............................................. 29
Figure 15 PI-node data specifications .................................................... 30
Figure 16 PI-mover data specifications ................................................... 30
Figure 17 PI-route data specifications ................................................... 31
Figure 18 PI-corridor data specifications ................................................ 31
Figure 19 PI-network dependencies ........................................................ 32
Figure 20 External Data Integration ...................................................... 37
Figure 21 IoT-lite Ontology Example ...................................................... 38
Figure 22 Example of IoT operational data ................................................ 39
Figure 23 Initial Conceptual Architecture ................................................ 40
Figure 24 Key Modules .................................................................... 41
Figure 25 Conceptual Architecture ........................................................ 42
Figure 26 Cyclical influence of ICONET work .............................................. 43
Figure 27 LL1 Service Oriented Information Workflow ...................................... 45
Figure 28 List of PoA actors ............................................................. 46
Figure 29 PI Services on PoA ............................................................. 46
Figure 30 Decomposition of PoA PI Hub .................................................... 48
Figure 31 LL2 Service Oriented Information Workflow ...................................... 49
Figure 32 LL2 Corridors & PI Services .................................................... 50
Figure 33 Links comprising a corridor .................................................... 51
Figure 34 SONAE Urban Distribution Workflow .............................................. 52
Figure 35 SONAE Flows & Engaged PI Services .............................................. 53
Figure 36 LL4 Service Oriented Information Workflow ...................................... 56
Figure 37 Example of different levels of transport accommodated by PI hubs ............... 56
Figure 38 Node to Node communication ..................................................... 58
Figure 39 PI Reference Architecture ...................................................... 59
Figure 40 ICONET Ontology excerpt ........................................................ 63
Figure 41 PI Common Data Model [Sternberg and Norrman 2017] .............................. 64
Figure 42 Example semantic rule .......................................................... 64
Figure 43 ICONET Domain Model ............................................................ 65
Figure 44 PI Order containing IoT data ................................................... 65
Figure 45 Acknowledgement of PI Order & IoT settings ..................................... 66
List of Tables
Table 1 TCP/IP, OSI, OLI & NOLI relation ................................................................. 10
Table 2 ICONET Layers and Services in relation to OSE, OLI and NOLI Layers .................. 13
Table 3 Mapping WP1 outputs ....................................................................................... 40
| Abbreviation / Term | Description |
|---------------------|-------------|
| API | Application Programming Interface |
| BLE | Bluetooth Low Energy |
| CSV | Comma Separated Values |
| ERP | Enterprise Resource Planning |
| GPICS | General Physical Internet Case Study |
| IMS | Inventory Management System |
| JSON | JavaScript Object Notation |
| LL | Living Lab |
| MES | Manufacturing Execution Systems |
| NFV | Network Function Virtualisation |
| NOLI | New Open Logistics Interconnection |
| NUTS | Nomenclature of Territorial Units for Statistics |
| OLI | Open Logistics Interconnection |
| PI | Physical Internet |
| PoA, POA | Port of Antwerp |
| SDK | Software Development Kit |
| SDN | Software Defined Networking |
| TEN-T | Trans-European Transport Network |
| TMS | Transportation Management Systems |
| VRPB | Vehicle Routing Problem with Backhaul |
| WMS | Warehouse Management System |
| WP | Work Package |
1 Executive Summary
The key objective of this deliverable has been the production of the final blueprint of the architecture required to support PI network operations, as derived from the activities and lessons learned during the previous months of the ICONET project. The architecture presents the final definitions of the associated connectivity models, architectural modules and data structures. The final reference architecture is sufficiently generic and high-level to be widely applicable, and makes use of ontologies for the definition of the relevant data structures. This report also analyses interfacing and integration with existing logistics platforms and solutions, security and data protection, regulatory compliance and network service level monitoring, to ensure a viable and usable model architecture.
In this second and final version of the deliverable, the main focus was on transforming the key requirements, events and data identified from a generic scenario, as well as from use cases driven by the project Living Labs, into a reference architecture that addresses all required capabilities. Data specifications stem from the findings of the WP1 deliverables and from research conducted in the development of the multiple components of the ICONET project and their interactions. The report also documents service requirements, the definition of required inputs and expected outputs, and dependencies between services, while providing a more technically oriented view of the potential architecture of a PI system. Other key findings include the major events, data and decisions that need to be considered throughout the journey of a PI-container in a PI-network, as well as a blueprint for designing and implementing a PI-enabled architecture in a decentralized manner, validated by the service development along with the simulated Living Lab scenarios.
2 Introduction
This deliverable builds upon the work described in the initial version of the ICONET architecture and its initial analysis of requirements, interactions and architectural elements. The architecture design has evolved based on a literature review of previous efforts related to Physical Internet architecture and on work done in ICONET’s WP1 & WP2. More specifically for WP1, the work done in T1.5 (GPICS) and T1.6 (PI Protocol Stack and enabling Networking Technologies), together with the relevant deliverables, produced the initial guidelines and requirements for the PI architecture. The effort performed in WP2, namely T2.2 and the relevant deliverables, also influenced the architecture heavily (and vice-versa), as it defined the functionalities and interactions of the main services of the project.
Additionally, the challenges and goals of each Living Lab were examined in relation to the PI architecture. This report presents an architectural overview of the building blocks comprising these Living Labs, along with the PI Services that are engaged on each level.
ICONET partners also formed another basis for this architecture as they provided input during various ICONET workshops, focused on eliciting input regarding technical requirements (PI-related), potential issues and risks, and capabilities required to support PI operations, as well as restrictions and needs that came up during the development of the various services and components used. During the process of designing the initial conceptual architecture, the core modules and their interactions were also identified and mapped. In this second and final version of the reference architecture, the technical considerations and implementations that would need to be considered on a distributed PI system are further specified, resulting in the finalized decentralized Reference Architecture presented in this document.
Similar to the previous version, the layers of OLI, NOLI and the resulting composite reference model are used as a basis for defining the main architectural components, as this provides a common basis for understanding the Physical Internet concept and its desired functionality and outcomes.
2.1 Deliverable Overview and Report Structure
This deliverable provides a conceptual reference architecture for the design and development of PI network functions and services and builds upon previous work to further expand the concepts explained therein. The deliverable starts with the definition of the different layers from the different reference models and their role in the PI, as they became more concrete during the previous months of the project, concluding with the representative ICONET reference model. The specifications for PI-elements and services are then described, along with how they fit the previously explored concepts. The next section (Section 5) provides an example traversal scenario of a shipment from one PI node to another. Section 6 presents the design principles that led to the proposed distributed reference architecture, as well as the Living Lab considerations that the architecture addresses. The finalized blueprint of the PI Reference Architecture is also presented, shown in relation to the various building blocks as well as using an individual PI node as a basis. Its relation to decentralized networking paradigms, and their design principles as used for the PI Reference Architecture, is also presented in this section. Section 7 presents a basic ontology needed to cover the requirements of the previously mentioned concepts, along with data security considerations. Finally, the report presents its conclusions in Section 8.
3 ICONET Conceptual Layers
The methodology followed in the previous version of this deliverable was to explain the high-level functionalities of the proposed OLI layers, on which we based the decisions made when designing the key services of the ICONET project and their interactions, which in turn heavily influenced the reference architecture. During the earlier months of the project, work done in WP1 and WP2 aimed to further specify which of these functionalities can be applied in a technical manner to enable the goals of the ICONET project. In the following section, an analysis of the high-level functionalities of the OLI and NOLI layers is presented. Again, this is done at a conceptual level, which helped considerably in specifying which of these layers and corresponding services can truly pave the way forward for the Physical Internet concept.
3.1 OLI & NOLI layers
The relation between the TCP/IP stack, the OSI reference model, the OLI (Open Logistics Interconnection) model (Montreuil et al., 2012) and the NOLI (New Open Logistics Interconnection) model (Colin et al., 2016) is presented in Table 1 below.
| TCP/IP Layer Name (Internet) | OSI reference Model Layer Name | OLI Layer Name (Montreuil et al.) | NOLI Layer Name (Colin et al.) |
|------------------------------|--------------------------------|----------------------------------|-------------------------------|
| Application | 7. Application | 7. Logistics Web | 7. Product |
| | 6. Presentation | 6. Encapsulation | 6. Container |
| | 5. Session | 5. Shipping | 5. Order |
| Transport | 4. Transport | | 4. Transport |
| Network | 3. Network | 4. Routing | 3. Network |
| | | 3. Network | |
| Network Access | 2. Data Link | 2. Link | 2. Link |
| Physical | 1. Physical | 1. Physical | 1. Physical Handling |
Table 1 TCP/IP, OSI, OLI & NOLI relation
In essence, the difference between the already examined OLI model and the NOLI model is that the NOLI model attempts to further clarify and explain the functionalities of the various layers, proposing unifications and semantically important distinctions between layers. More specifically, the NOLI model proposes renaming the Logistics Web layer to Product layer, as its authors argue that the Physical layer definition of the OLI model would have to cover all physical objects relevant to it (π-Products, π-Movers etc.), which is not feasible. The NOLI model therefore passes that responsibility to the entry point of the Physical Internet, which in this model is the Product layer. The Product layer is responsible for defining the products that pass through it, and in turn the Physical layer is only responsible for the actual physical handling of these products; it is thus renamed the “Physical Handling” layer. Furthermore, the NOLI model proposes the
encapsulation layer to be renamed to “Container” layer, as the encapsulation of products is specified as putting products into $\pi$-Containers. Additionally, it is proposed that the Shipping Layer is broken down into Order and Transport layers respectively, to have a clearer separation of concern. Finally, the NOLI model proposes the unification of the Routing and Network layers, as their interdependency is significant, thus unifying their functions into a single conceptual layer.
Based on the considerations mentioned above, a mix of these two models was used as a guideline, both for the individual services and for the overall conceptual architecture. A high-level description of the layers that the ICONET project relied upon is presented below.
3.1.1 Physical Layer
This layer monitors the physical objects of the PI involved in handling and transporting cargo, such as means of transport, vehicles, carriers, conveyors, stores and sorters. The ICONET project investigates these concepts in D1.1 (PI-aligned digital and physical interconnectivity models and standards), where solutions for generalising and functionally standardising unloading, orientation, storage and loading operations are being investigated. The Physical layer is responsible for the physical actions that need to happen for a shipment to begin and conclude its trip through the $\pi$-Network, based on the decisions made by the other services. As these actions already occur in a variety of different ways in the current logistics climate, further technical work on the Physical layer was deemed out of scope for the ICONET project; instead, we chose to focus on providing established outputs that can be used for already occurring physical actions (such as picking lists for product loading).
3.1.2 The Link Layer
This layer handles node-to-node transfer. The Link layer is responsible for ensuring the smooth flow of goods between PI-nodes. To achieve that, this layer must enable the pre-evaluation of potential options, the identification of potential issues across the supply chain, and the suggestion of appropriate mitigation measures. In ICONET this layer provides mechanisms for efficient and reliable shipping of (sets of) PI containers from shippers to final recipients. The management of the procedures and protocols for configuring the quality of service, monitoring, verifying (acknowledgement), adjourning, terminating and diverting shipments in an end-to-end manner is being conducted in ST2.2.3. Shipping algorithms and services will be specified in deliverable D2.4 (‘PI networking, routing, shipping and encapsulation layer algorithms and services v1’), and as such the functionality of the conceptual Link layer will be handled by the Shipping service.
3.1.3 The Network Layer
The Network layer is responsible for ensuring interoperability, integrity and interconnectivity between different networks. The issue of interoperability will be addressed through the definition of a common data model for all PI-operations, while interconnectivity is facilitated through the use of IoT and the definition of a protocol stack. The common data model will be constructed from the data specifications of all PI-elements and all ICONET services to form the ICONET ontology, described in detail in Section 7 (ICONET Ontology). Specifically, in ICONET, this layer will act as a knowledge base containing information on $\pi$-Nodes, $\pi$-Hubs, routes etc. in a network.
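To make the "knowledge base" role more concrete, the sketch below models a minimal registry of π-nodes and corridors that the Routing and Shipping services could query. All class names, fields and example identifiers are illustrative assumptions, not part of the ICONET specification:

```python
from dataclasses import dataclass, field

@dataclass
class PiNode:
    """A π-node (hub, warehouse, port ...) known to the Network layer."""
    node_id: str
    kind: str        # e.g. "hub", "warehouse", "port" (illustrative)
    capacity: int    # free container slots (illustrative)

@dataclass
class NetworkKnowledgeBase:
    """Registry that routing/shipping services can query for π-node status."""
    nodes: dict = field(default_factory=dict)
    corridors: list = field(default_factory=list)   # (from_id, to_id, cost)

    def register_node(self, node: PiNode) -> None:
        self.nodes[node.node_id] = node

    def add_corridor(self, a: str, b: str, cost: float) -> None:
        self.corridors.append((a, b, cost))

    def neighbours(self, node_id: str) -> list:
        """π-nodes directly reachable from node_id."""
        return [b for a, b, _ in self.corridors if a == node_id]

kb = NetworkKnowledgeBase()
kb.register_node(PiNode("ANT", "hub", 120))
kb.register_node(PiNode("ROT", "port", 300))
kb.add_corridor("ANT", "ROT", 95.0)
print(kb.neighbours("ANT"))   # ['ROT']
```

In a deployed system this registry would be populated from the common data model and kept current via the IoT and Legacy components rather than hand-coded as here.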
3.1.4 The Routing Layer
The Routing layer is in charge of routing the PI containers from their starting point to their destination. To achieve that, the routing layer must be able to monitor the status, capability, capacity (utilization) and performance of PI
operations. ICONET will achieve this through the routing service. The routing service will be responsible for supporting optimal routing decisions, taking into account cost, environmental and operational factors such as emissions, network topology and the type of the product. The objective of this service will be to ensure the smooth operation and flow of goods between PI elements. To achieve that, the routing service requires information from the Network layer, regarding compatible locations and general information regarding the status of the PI hubs.
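As a rough illustration of a multi-factor routing decision, the sketch below runs Dijkstra's algorithm over a set of corridors with a weighted objective combining monetary cost and emissions. The corridor tuples, weights and function name are hypothetical, not the actual ICONET routing service:

```python
import heapq

def cheapest_route(corridors, start, goal, w_cost=1.0, w_co2=0.0):
    """Dijkstra over corridors [(a, b, cost, co2), ...] minimising a
    weighted objective; a stand-in for multi-factor routing decisions."""
    graph = {}
    for a, b, cost, co2 in corridors:
        graph.setdefault(a, []).append((b, w_cost * cost + w_co2 * co2))
    frontier = [(0.0, start, [start])]   # (accumulated weight, node, path)
    seen = set()
    while frontier:
        total, node, path = heapq.heappop(frontier)
        if node == goal:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (total + w, nxt, path + [nxt]))
    return None   # destination unreachable

corridors = [
    ("A", "B", 10.0, 2.0),
    ("B", "C", 10.0, 2.0),
    ("A", "C", 25.0, 1.0),
]
print(cheapest_route(corridors, "A", "C"))  # (20.0, ['A', 'B', 'C'])
```

Raising `w_co2` shifts the choice toward the direct but lower-emission corridor, mirroring how environmental factors can override pure cost.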
3.1.5 The Shipping Layer
The Shipping layer is responsible for the efficient shipping of PI containers, focusing on the functional and procedural requirements and ensuring visibility of shipping status throughout the PI network. On a conceptual basis, the Shipping layer will handle the propagation and processing of order information (as part of the Order conceptual layer) and ensure visibility and quality of service of shipping (as part of the Transport conceptual layer). ICONET will address this requirement through the Shipping service. The Shipping service will be responsible for ensuring efficient and reliable shipping of PI containers from shippers to final recipients. The objective of the shipping algorithms will be to ensure quality of service and monitoring of end-to-end cargo movement. The Shipping service requires data from IoT devices regarding location and cargo status and, in conjunction with the Networking service, PI-hub data, in order to be able to divert shipments as needed.
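The shipping state management described above (monitoring, diversion, termination) can be pictured as a small state machine over shipment statuses. The state names and transitions below are illustrative assumptions; the actual shipping protocol is specified in D2.4:

```python
# Allowed transitions between illustrative shipment states.
TRANSITIONS = {
    "CREATED":    {"DISPATCHED"},
    "DISPATCHED": {"IN_TRANSIT"},
    "IN_TRANSIT": {"AT_HUB", "DELIVERED", "DIVERTED"},
    "AT_HUB":     {"IN_TRANSIT"},
    "DIVERTED":   {"IN_TRANSIT"},
    "DELIVERED":  set(),          # terminal state
}

class Shipment:
    def __init__(self, shipment_id: str):
        self.shipment_id = shipment_id
        self.state = "CREATED"
        self.history = ["CREATED"]   # end-to-end visibility trail

    def advance(self, new_state: str) -> None:
        """Move to new_state, rejecting transitions the protocol forbids."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

s = Shipment("SHP-001")
for step in ("DISPATCHED", "IN_TRANSIT", "AT_HUB", "IN_TRANSIT", "DELIVERED"):
    s.advance(step)
print(s.state)   # DELIVERED
```

The `history` list plays the role of the end-to-end visibility the layer must provide, while the transition table encodes which diversions and terminations are legal at each point.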
3.1.6 The Encapsulation Layer
The Encapsulation layer links products to PI containers and is responsible for the composition and decomposition of orders while maintaining visibility and traceability of PI-elements. The requirements of the Encapsulation layer will be addressed through the definition of an Encapsulation service that will be responsible for the efficient assignment of products to PI containers. Encapsulation algorithms will take into account the modular load units of a product, the capacities and performance of transportation means, and the status and capacities of PI hubs. In essence, the Encapsulation layer is responsible for all packing & unpacking operations that might occur in the $\pi$-Network.
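As a toy stand-in for the encapsulation algorithms mentioned above, the sketch below assigns products to π-containers using a simple first-fit heuristic on volume alone; the real algorithms would also weigh modular load units, transport capacities and hub status:

```python
def first_fit(products, container_capacity):
    """Assign (name, volume) products to π-containers by first-fit:
    each product goes into the first container with enough free space."""
    containers = []   # each: {"free": remaining volume, "items": [names]}
    for name, volume in products:
        for c in containers:
            if c["free"] >= volume:
                c["free"] -= volume
                c["items"].append(name)
                break
        else:   # no existing container fits: open a new one
            containers.append({"free": container_capacity - volume,
                               "items": [name]})
    return [c["items"] for c in containers]

products = [("p1", 6), ("p2", 5), ("p3", 4), ("p4", 3)]
print(first_fit(products, 10))  # [['p1', 'p3'], ['p2', 'p4']]
```

First-fit is a classic bin-packing heuristic; it is chosen here only for brevity, as the two-container result above is easy to trace by hand.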
3.1.7 The Logistics Web Layer
As defined in OLI, this layer provides the interfaces between the PI and its users. In ICONET, the responsibility of this layer is to act as an entry point for potential non-PI orders originating from legacy systems, and also to provide information regarding these orders to interested parties. Additionally, the integrity of PI-operations will be ensured through the use of blockchain. The role of blockchain technologies will be to ensure integrity of operations through the definition of trusted, auditable and secure distributed ledgers & smart contracts covering transactions as containers flow within the PI network; in combination with data stemming from IoT devices, these will be used to ensure that all constraints of shipments are maintained.
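The integrity guarantee can be illustrated with a simplified hash-chained event log, where tampering with any recorded event invalidates the chain. This is only a sketch of the idea, not the consortium's actual ledger or smart-contract design, and all field names are assumptions:

```python
import hashlib
import json

def add_event(chain, event: dict) -> dict:
    """Append an event, hashing it together with the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    entry = {"event": event, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain) -> bool:
    """Recompute every hash; any edit to an earlier event breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
add_event(chain, {"container": "C-42", "status": "loaded", "node": "ANT"})
add_event(chain, {"container": "C-42", "status": "departed", "node": "ANT"})
print(verify(chain))   # True
chain[0]["event"]["status"] = "tampered"
print(verify(chain))   # False
```

A real distributed ledger adds consensus and replication on top of this hash-linking; the sketch captures only the auditability property the text relies on.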
3.2 Resulting ICONET services
To summarize the previous section, the table below shows the interrelationship between OSI, OLI, NOLI and resulting ICONET layers and services.
Table 2 ICONET Layers and Services in relation to OSI, OLI and NOLI Layers
| OSI Layer | OLI Layer | NOLI Layer | ICONET Layer | Resulting Service |
|-------------|---------------|---------------|--------------------|-------------------|
| Application | Logistics Web | Product | Logistics Web | Logistics Web |
| Presentation| Encapsulation | Container | Encapsulation | Encapsulation |
| Session | Shipping | Order | Order | Shipping |
| Transport | | Transport | Transport | |
| Network | Routing | Network | Routing | Routing |
| | Network | | Network | Network |
| Data Link | Link | Link | Link | - |
| Physical | Physical | Physical Handling | Physical | - |
A study on the feasibility of the descriptions, relations and goals of these layers and of the overall protocol stack is presented in the relevant WP1 deliverables (D1.10 – D1.12). It is important to note that the conclusions in this report, along with the feasibility studies and internal discussions on WP2 activities, resulted in the decision on the final ICONET reference model and the corresponding services to be used for the technical implementation of the ICONET project.
A Physical service was deemed out of scope for further technical work, as it would require significant effort to synchronize physical actions with the corresponding operations described in the PI concept and the Physical Layer. Additionally, adoption of these PI-based physical actions would be difficult at this current stage.
Moreover, the functionalities and offerings of the Link layer, presented in a theoretical manner above, are very close to the specifications offered by the Shipping layer; as such, the decision was made to unify these functionalities into a single Shipping service which, in conjunction with other services, will provide mechanisms for efficient and reliable shipping. Overall, the services on which the ICONET project will focus, encompassing the majority of the relevant functionalities, are the following:
- Shipping
- Encapsulation
- Networking
- Routing
- Optimisation
- Logistics Web
The technical implementations and features of these services are further detailed in the next section of this report.
4 Service Requirements & Implementations
Having defined the services of ICONET, the previous version of this report captured the required inputs and their sources. Additionally, the expected outputs and the interactions among the different services were identified, mapped and enabled; these can be found in D2.1, D2.3 and D2.4. A short overview of the simulation and optimisation services is presented in the next section, before focusing on the offerings of the four key services. Furthermore, the IoT platform is briefly mentioned through its interactions and is further detailed in D2.7.
4.1 Simulation service specification
The previous version of this deliverable also explored the capabilities offered by the simulation service. While the simulation service enables the PI services and operations to work in tandem with simulated data, it is not part of the reference architecture. Instead, it provides a good starting point for simulating the integration of, and features provided by, the aforementioned services, and a testing ground for the respective implementations.
To reiterate, the simulation service is an overarching service that needs to be aware of all network data in order to emulate the operations of the PI-network. Its requirements, inputs and outputs are presented in Figure 1.
| Inputs | Outputs | Dependencies |
|-------------------------|--------------------------|----------------------------|
| Container data | Economic KPIs | IoT component (input) |
| Mover data | Operational KPIs | Legacy component (input) |
| Node data | Environmental KPIs | All services (output) |
| Corridor data | | |
| Route data | | |
Figure 1 Simulation service requirements
The simulation service requires data from all PI-elements in order to evaluate all possible scenarios and generate the relevant KPIs, which can be consumed by the rest of the ICONET services in order to identify the best possible network configuration. It depends on the IoT and Legacy components for the extraction of all relevant data.

Figure 2 Flow diagram for simulation service
The diagram above represents the flow of data for the simulation service (Figure 2).
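The simulation service's role described above (evaluating scenarios, generating KPIs per configuration, and surfacing the best one) can be sketched as follows. All names, fields and weights below are illustrative assumptions, not part of the ICONET codebase:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost: float          # economic KPI (e.g. EUR)
    lead_time_h: float   # operational KPI
    co2_kg: float        # environmental KPI

def score(s: Scenario, w_cost=1.0, w_time=10.0, w_co2=0.5) -> float:
    """Lower is better: a weighted sum over the three KPI families."""
    return w_cost * s.cost + w_time * s.lead_time_h + w_co2 * s.co2_kg

def best_configuration(scenarios):
    """Pick the network configuration with the lowest combined KPI score."""
    return min(scenarios, key=score)

candidates = [
    Scenario("road-only", cost=900, lead_time_h=30, co2_kg=400),
    Scenario("rail+road", cost=700, lead_time_h=48, co2_kg=150),
]
print(best_configuration(candidates).name)  # → rail+road
```

In practice the per-scenario KPI values would come from the simulated PI-elements (containers, movers, nodes, corridors, routes) rather than being supplied directly.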
4.2 Optimisation service specification
The optimisation service is concerned specifically with optimising operations within a single PI Node. While there are a number of variations in the layout and function of a given PI Node, the operations frequently fall into similar categories:
- Moving goods internally within a node
- Storing goods internally within a node
- Consolidation or deconsolidation of goods within a node
The optimisation service addresses all three of these through machine learning and reinforcement learning techniques. Due to the bespoke nature of a given PI node and the fact that machine learning approaches are highly data driven, the optimisation service is custom built per node (or node type) but uses a common framework, keeping customisation effort minimal. As already mentioned, data acquisition is a key requirement for the optimisation techniques defined. The data in question can come from a number of sources, but most lie within the PI node itself. While real-time data is used to define the optimisation problem at hand, historical datasets power the algorithms themselves. Therefore, the order history provided by the shipping service for the node is critical, as are the operational plans within the node – for example, the train schedules and movements within a port, or the list of containers to be relocated at a deep-sea terminal on a given date. This data is often held within legacy IT systems outside the PI service stack, so it must be extracted. This is done by the shipping service for that node due to privacy and confidentiality concerns, but other non-confidential information could be shared via the networking service, as it may be consumed by neighbouring nodes, depending on the data type.
*Figure 3 Optimisation Service in Context* shows the integration of the Node Optimisation Service with the PI Service Stack and local services within the PI Node. This integration represents an updated approach to the OLI service layers, which have not previously included a node optimisation layer. The optimisation service is enhanced and powered by data sharing, which is a core strength of the PI concept and is what ultimately powers a PI Network.
In turn, optimised PI Nodes lead to better throughput in a PI Network. The inputs and outputs for the PI Node Optimisation Service depend on the specific optimisation problem at hand. For ICONET, the focus was placed on train loading and consolidation within a port, container storage within a port, and goods storage within a warehouse. The inputs in these circumstances would include train schedules for the day, rail undertaking bookings for the day, container arrivals and departures for the day, and deep-sea vessel arrivals and departures for the day. In addition to port operations, warehouse operations were addressed by the optimisation service; the inputs in this scenario are the warehouse status (available slots, available forklifts, etc.) and movement instructions (a container will arrive at a certain day and time and leave at a certain day and time). The output from the PI Node Optimisation service is an optimised plan (a loading plan, a storage plan, a train schedule, etc.). This is also summed up in Figure 4.
| Inputs | Outputs | Dependencies |
|---------------------------------------------|----------------------------------------------|---------------------------------------------------|
| • Routing options | • Optimal network configuration | • IoT component (input) |
| • Shipping options | | • Legacy component (input) |
| • Networking options | | • All services (input) |
| • Encapsulation options | | |
Figure 4 Optimisation service specification
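The deliverable specifies machine learning and reinforcement learning for node optimisation; as a hedged stand-in, the sketch below uses a plain greedy baseline to illustrate the service's input/output contract (inputs: container bookings and train capacity, output: a loading plan). All names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Container:
    cid: str
    weight_t: float
    priority: int  # lower = load first (e.g. earlier onward booking)

def plan_train_loading(containers, capacity_t):
    """Greedy baseline: load by priority, then heavier-first, until the
    train's weight capacity is reached. Returns an ordered loading plan."""
    plan, load = [], 0.0
    for c in sorted(containers, key=lambda c: (c.priority, -c.weight_t)):
        if load + c.weight_t <= capacity_t:
            plan.append(c.cid)
            load += c.weight_t
    return plan

yard = [Container("A", 20, 1), Container("B", 25, 2), Container("C", 18, 1)]
print(plan_train_loading(yard, capacity_t=40))  # → ['A', 'C']
```

An RL-based implementation, as described above, would replace the greedy rule with a learned policy trained on the node's historical order and schedule data, while keeping the same plan-shaped output.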
4.3 Shipping, Encapsulation, Routing, and Networking service specification
The services identified in section 3.2 as key enablers for a PI network are described in this section. Their offerings and interactions are what allow the PI concept and, as a by-product, its conceptual architecture to successfully encompass the functionalities and aspirations of the PI. As these services are described at length in deliverables D2.3, D2.4, and D2.5, they are examined here only briefly, to provide an overarching view of the key components of the architecture.
4.3.1 Shipping Service
As mentioned before, the Shipping Service encapsulates the functions of the conceptual Shipping layer, which is further conceptually divided into the Order and Transport layers in the ICONET reference model. Its role in a PI-enabled network environment, as per the DoA, is to “enable the efficient and reliable shipping of (sets of) PI containers from shippers to final recipients. Study the management of the procedures and protocols for configuring the quality of service, monitoring, verifying (acknowledgement), adjourning, terminating, and diversion of shipments in an end-to-end manner, leveraging the IoT means of T2.3 in concert with the Blockchain principles of T2.4 wherever possible.” In order to fulfil these goals, the Shipping service takes on the role of the overall orchestrator of the PI services. As such, the Shipping Service is responsible for receiving PI-enabled orders and, using the capabilities offered by other services, making appropriate decisions to ensure the delivery, or handle the non-delivery, of an order in an end-to-end manner.
Through its various integrations with the rest of the ICONET components, the Shipping Service will have all the data needed to make appropriate decisions during the order’s lifecycle. The integration with the IoT Cloud Platform (D2.8) enables it to have information about sensor values (such as location, temperature etc.) to provide the possibility for informed decisions based on the order’s requirements. Additionally, the integration with the Web Logistics service, and its blockchain implementation (and Smart Contracts) allows for validated checks for violations of the aforementioned requirements. The key inputs and outputs of the shipping service are outlined in Figure 5 below.
| Inputs | Outputs | Dependencies |
|---------------------------------------------|----------------------------------------------|-------------------------------|
| • PI Order data | • Order State | • IoT Platform |
| • Node data | • Encapsulated | • Web Logistics (Blockchain) |
| • Shipment data | • Routed | • PI Services |
| • Container data | • Shipped | |
| | • Impeached | |
Figure 5 Inputs, Outputs & Dependencies of Shipping service
The decisions that the Shipping service is responsible for, as mentioned previously, are best described as a state diagram, shown in Figure 6 below.
Figure 6 shows an order arriving at a PI node, after which the Shipping service is responsible for making all appropriate decisions to forward it to the next destination (more analysis of the specific interactions between services is presented in section 5, with a full example of an order traversal). After this is done, the Shipping Service continuously checks the data received from the IoT sensors residing in the containers used to transport the order, and checks the values against the requirements established in the Smart Contracts. In parallel, appropriate updates are made to the state of the order. In case of an impeachment, the Shipping service is responsible for terminating the shipment and, if needed, rerouting it to a valid disposal location. If the order progresses nominally, these actions are repeated at every PI node the order passes through (if necessary).
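The order lifecycle that the Shipping service orchestrates can be sketched as a small state machine. The state names follow the outputs in Figure 5, but the transition table itself is an assumption reconstructed from the prose, not the project's implementation:

```python
# Order states per Figure 5 plus assumed "registered"/"delivered" endpoints.
TRANSITIONS = {
    "registered":   {"encapsulate": "encapsulated"},
    "encapsulated": {"route": "routed"},
    "routed":       {"dispatch": "shipped"},
    "shipped":      {"arrive_hub": "routed",        # re-evaluate route per hub
                     "sla_violation": "impeached",  # Smart Contract check failed
                     "arrive_destination": "delivered"},
    "impeached":    {"route_to_disposal": "routed"},
    "delivered":    {},
}

def advance(state: str, event: str) -> str:
    """Apply one lifecycle event; reject events invalid in the current state."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")

# A nominal trip with one intermediate hub:
s = "registered"
for ev in ["encapsulate", "route", "dispatch", "arrive_hub",
           "dispatch", "arrive_destination"]:
    s = advance(s, ev)
print(s)  # → delivered
```

The `arrive_hub` transition back to `routed` mirrors the re-evaluation of routing at every intermediate node described above.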
4.3.2 Encapsulation Service
The Encapsulation Service, as mentioned earlier, is based on the Encapsulation layer of the OLI and NOLI models. For the purposes of ICONET, while no technical implementation of the Physical layer will occur, actions that would arguably fit better into the description of the Physical layer or a potential Physical service are sometimes also included here. This occurs on a case-by-case basis, and such actions are not considered in scope for the Encapsulation Service. The role of Encapsulation in ICONET is described as the process of “encapsulating products in PI containers”. Over the course of the project, we concluded that the Encapsulation service also needs to encapsulate PI containers into larger sets of PI containers and to pack these PI containers into PI movers (trucks, trains, etc.). Since the algorithmic problem underlying all these functions has the same basis, i.e. the three-dimensional bin packing problem, a unified solution provided by the Encapsulation Service can satisfy all of these requirements.
It can be useful to think of these processes as a multi-stage encapsulation, with the 1st stage being the encapsulation of products into PI containers, the 2nd stage being the encapsulation of PI containers into larger PI containers, and so on. These different stages of the encapsulation process, while sharing the same basis, have different constraints, which the Encapsulation service addresses. The 1st-stage encapsulation is based on the dimensions and type of the products and containers: a product has to fit into a container to be encapsulated and, if the product needs refrigeration for example, the container has to be able to provide that. For the 3rd-stage encapsulation (PI containers into PI movers), other considerations can apply, such as the weight distribution of the vehicle, air gaps between products/containers, and cargo consolidation based on routing decisions to minimize empty space.
The Encapsulation service's key inputs, outputs, and dependencies are presented in Figure 7 below.
| Inputs | Outputs | Dependencies |
|-------------------------|--------------------------|-----------------------|
| • PI Order data | • Encapsulated State | • Shipping Service |
| • Product data | • Picking List | • Routing Service |
| • Constraints data | | |
| • Container data | | |
Figure 7 Inputs, Outputs & Dependencies of Encapsulation service
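Since the Encapsulation service's underlying problem is three-dimensional bin packing (NP-hard), the hedged sketch below reduces it to one dimension (volume) and applies first-fit decreasing, with a container-type check standing in for the refrigeration constraint described above. All names are illustrative assumptions:

```python
def encapsulate(products, container_volume, container_type="ambient"):
    """products: list of (name, volume, required_type) tuples.
    Returns a list of bins, each a list of product names (a picking list)."""
    bins = []  # each bin: {"free": remaining volume, "items": [...]}
    for name, vol, req in sorted(products, key=lambda p: -p[1]):
        if req != container_type:
            continue  # product incompatible with this container type
        for b in bins:
            if b["free"] >= vol:  # first fit: reuse an open container
                b["items"].append(name)
                b["free"] -= vol
                break
        else:
            bins.append({"free": container_volume - vol, "items": [name]})
    return [b["items"] for b in bins]

goods = [("vaccines", 4, "reefer"), ("books", 6, "ambient"),
         ("tools", 5, "ambient"), ("toys", 4, "ambient")]
print(encapsulate(goods, container_volume=10))  # → [['books', 'toys'], ['tools']]
```

A full implementation would pack in three dimensions and handle the later stages (containers into larger containers, containers into movers) with stage-specific constraints such as weight distribution, but the contract is the same: constrained products in, a packing plan out.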
4.3.3 Routing Service
As described in D2.3, the Routing operation for the PI “selects the feasible/optimal routes (out of those identified by the Networking Layer) through the PI that connect the origin of the shipment (i.e. the initial π-hub handling the π-units comprising the shipment) to the final destination/π-hub that will handle the shipment”. Put simply, the routing service finds the best possible route through the network provided by the networking service. Additional constraints are also considered in route calculation, including but not limited to:
- Reducing number of empty transports
- Travel Time
- Cost
- Consolidation of Containers
- Performing both Delivery and Pick Up
It should be noted that these constraints are not static and will vary according to the specific use case addressed. For example, LL3 e-commerce has a heavy focus on optimising routes that include both delivery and pickup operations using multiple trucks, whereas LL1 PI-Corridor focuses on end-to-end travel time and on-time deliveries. The routing algorithm developed with the broadest application was VRPB (Vehicle Routing Problem with Backhauls). The inputs needed for this algorithm are:
- List of Pick Up Nodes
- List of Delivery Nodes
- List of Vehicles
- Supply Information per Pick Up Point
- Demand Information per Delivery Point
- Start & End Times for Delivery Service
- Service times per Node
- Travel Costs
- Travel Times
- Max Vehicle Mileage
- Vehicle Capacity
Figure 8 Routing Service Integration
The source for much of this input is the Networking Service, which can provide information about the layout of nodes (distance, travel time, etc.) and about the nodes themselves (service time, supply & demand, etc.). The remaining information is supplied by the logistics operator (vehicle costs, vehicle capacity, start & end times of the delivery service, etc.) and is forwarded through the shipping service, which manages the overall order. *Figure 8 Routing Service Integration* shows the interrelationship between the routing, shipping and networking services.
The output from the service is an ordered list of nodes representing the optimal route (usually by cost, but not always), as well as an estimated cost for the service. The service can be re-utilised after each node is reached, thereby continually ensuring the optimal route is taken by accounting for real-time updates coming from the vehicles, the nodes or external factors. Again, this information comes via the networking service or the shipping service and can utilize analytics from the IoT tracking systems that the shipping service consumes.
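The routing service's core step (keeping the candidate paths supplied by the networking service that satisfy the order's constraints, then picking the cheapest) can be sketched as follows; the function and field names are assumptions for illustration:

```python
def select_route(candidates, max_eta_days):
    """candidates: list of dicts with 'nodes', 'eta_days', 'cost'.
    Returns the cheapest route whose ETA meets the order constraint."""
    feasible = [c for c in candidates if c["eta_days"] <= max_eta_days]
    if not feasible:
        raise ValueError("no route satisfies the order constraints")
    return min(feasible, key=lambda c: c["cost"])

paths = [
    {"nodes": ["A", "H1", "B"],       "eta_days": 4, "cost": 1200},
    {"nodes": ["A", "H2", "H3", "B"], "eta_days": 6, "cost": 800},   # too slow
    {"nodes": ["A", "H4", "B"],       "eta_days": 5, "cost": 1000},
]
print(select_route(paths, max_eta_days=5)["nodes"])  # → ['A', 'H4', 'B']
```

Re-running this selection on every hub arrival, with refreshed ETAs and costs from the networking service, gives the continual re-routing behaviour described above.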
4.3.4 Networking Service
The Networking Service is responsible for collecting, storing and integrating all PI network information, and for communicating PI-shipment-relevant network information in an interoperable and interconnected manner, as captured by the Network layer. It is therefore divided into two separate stages: Stage 1 deals with the collection and integration of PI network information, and Stage 2 with the provision of PI-shipment-specific information. Starting from identifying the relevant nodes and links of the network(s) in collaboration with the Link and Physical layers, Stage 1 of the Networking service classifies them in terms of geographical location (and scale), transport mode, and level of aggregation. For this purpose, the GPICS PI Node and PI Link typology is adopted. In Networking Service Stage 2 / Step 1, this classification is utilized through the application of the Area, Modes, Aggregation level and Data detail tools, which are described as follows:
The aim of the geographical scale function (Area tool) is to limit the scope of the search area for network components. An area of relevance is identified on the global map for the specific PI shipment submitted. For example, the scale will be different for a request to carry cargo from Northern to Southern Europe than for one between two neighboring French cities. The Area tool utilizes the origin and destination coordinates of the PI shipment to identify an oval-shaped area of relevant PI network components.
The Mode tool considers restrictions imposed by each PI order on the modes available for shipment. If more than one mode is feasible, the Networking Service assesses the available multimodality options by considering transshipment nodes and the various mode links.
The Aggregation tool considers the level of detail required for the routing request associated with a specific PI order. PI nodes can represent international or local hubs in the case of long-haul shipments, local warehouses and postcodes in the case of e-commerce, as well as specific functions (e.g. customs) in complex port (intra-hub) operations.
With awareness of the scale, aggregation level and modes, a final decision is made on the level of data detail required. This will depend on the PI order made, but also on data availability. The output data detail can range from the physical properties of infrastructure to live information on the services operating on the PI network. Four levels of data detail can be identified:
- **Infrastructure properties**: Network information describes static infrastructure characteristics such as the length of a link, the modes that can accommodate the function of carrying cargo (e.g. truck, rail), or even more detailed information such as classification into motorway, or number of lanes. A similar concept can be applied to the description of nodes: a node can represent a warehouse that has a specific capacity for storage and docking capability.
- **Operational status**: In addition to static network information, operational condition information is collected for each PI network component. For example, links incapacitated due to roadworks or traffic will have their travel time and travel cost (if applicable) updated with the most current information. Similarly, for nodes, up-to-date availability and infrastructure status, such as spare capacity, are considered.
- **Services schedule**: The aim of this data layer is to account for the fact that roads and warehouses do not handle cargo transport directly, but rather indirectly through services. Truck, air, river/sea or rail hauling services that utilize the infrastructure are responsible for physically moving cargo from one location to another. With that in mind, at Stage 2 / Step 1 the Networking Service collects (if available) the schedules of services that operate between specific locations.
- **Services status**: A final layer of complexity can be anticipated, accounting for live information on the capacity of en-route hauling services and on queues at PI hub services.
It is also essential to associate each physical infrastructure component with properties to be used for operational decisions. Networking Service Stage 1 / Step 2 deals with this requirement through four tools that adjust the information collected in Step 1 to the specific requirements of the PI order.
- **Link filter** and **node filter** tools capture the restrictions imposed by the PI Order and filter out any network components that do not meet them. For example, for a shipment that requires refrigeration, the respective PI node and PI service property is examined, and nodes and services that do not offer this option are not considered further.
- The **Link and Node Cost** tools calculate the final metrics for each PI component in terms of KPIs as requested in the PI order, such as travel time, travel cost, service reliability. Performance is adjusted for each PI component considering the requirements, constraints and limitations implemented in the previous steps of the Networking Service.

The final output of the networking service is a set of interconnected PI nodes, PI links and PI services that are available for transporting a shipment between any two nodes. Stage 1 of the Networking Service, which is responsible for collecting information on the status of the network, is always listening for changes in traffic or service status. Depending on the nature of the PI order, Networking Service Stage 2 can be called either once or several times. In cases where only static information is available, the Networking Service is called only when an order is initiated, while in cases where information is dynamically updated, it is called whenever a shipment arrives at a PI node that is not its destination. Figure 9 captures the operational sequence for Stage 2 of the Networking Service.
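Two of the tools above lend themselves to a short sketch: the Area tool as an ellipse test around origin and destination, and the node filter as a capability check against the PI order's restrictions. The detour factor, planar distances, and all names below are simplifying assumptions for illustration only:

```python
import math

def dist(a, b):
    """Planar Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def in_area(node_xy, origin, dest, detour_factor=1.3):
    """Ellipse ('oval') test: a node is relevant if routing via it keeps the
    total origin->node->dest distance within detour_factor times the direct
    origin->dest distance."""
    return dist(origin, node_xy) + dist(node_xy, dest) \
        <= detour_factor * dist(origin, dest)

def filter_nodes(nodes, origin, dest, required_capability):
    """Area tool + node filter: keep nodes inside the oval of relevance
    that also offer the capability demanded by the PI order."""
    return [n["id"] for n in nodes
            if in_area(n["xy"], origin, dest)
            and required_capability in n["capabilities"]]

hubs = [
    {"id": "H1", "xy": (5, 1),  "capabilities": {"reefer", "rail"}},
    {"id": "H2", "xy": (5, 1),  "capabilities": {"rail"}},     # no refrigeration
    {"id": "H3", "xy": (5, 40), "capabilities": {"reefer"}},   # far off-axis
]
print(filter_nodes(hubs, origin=(0, 0), dest=(10, 0),
                   required_capability="reefer"))  # → ['H1']
```

A production Area tool would work on geographic coordinates and the GPICS typology rather than planar points, but the shape of the computation is the same.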
5 Traversing a PI-network
This section attempts to identify and map the processes that a PI order must go through while traversing a PI-network, identify the events that are key to initiating these processes, and understand the decisions that need to be made at every step. Additionally, the data needed at every step is identified. To achieve this, a scenario of a shipment travelling through the network is described. While the previous version of this deliverable also documented this scenario, it has been updated to give a more complete picture of the decisions and operations occurring during the shipment's trip, as well as attributing the decisions and functions at every step to specific services, while highlighting their interaction points.
5.1 Scenario Parameters
The selected scenario describes a pharmaceutical cold-chain, under which, the storage, handling and transportation of the shipment is sensitive to external conditions and requires certain temperature and humidity, to avoid compromising the chemical stability of the product. Cold-chains might also require specific conditions in terms of exposure to light, vibrations during handling, and shocks.
For this scenario, the manufacturer (or the regulator) must define the constraints for all supply chain processes. In more detail, we assume an acceptable temperature range of 3°C to 6°C and an acceptable humidity range of 30% to 40%. The quality-of-service requirements include that the product must reach its final destination within 5 days of dispatch. The manufacturer also proposes that the shipment should not be stored close to doors or windows, to avoid compromising the integrity of the product.
5.2 Scenario Goals
The objectives of this scenario are the following:
1. The product must arrive within the defined timeframe (<5 days)
2. The product arrives without being compromised (temperature 3°C to 6°C and humidity 30% to 40%)
3. Cost is minimized
4. Environmental impact is minimized
5. All participants are accounted for and reimbursed for their services
5.3 Key Steps
5.3.1 Consolidate Order information & constraints
The scenario begins with a shipment of the cold-chain pharmaceutical product that needs to be sent from Point A to Point B while maintaining the criteria defined in the previous section. The first step is to propagate the information of the shipment to the PI network for handling and shipment, in tandem with the necessary physical actions, such as the shipment physically being at the starting point. On the PI side, the Shipping service registers the order as communicated by an external system and converts all relevant information to the PI data model, which is used across all services of the PI. This new PI order also contains all the constraints described above, enabling the services to make optimal decisions. These constraints are stored in the Web Logistics service using blockchain-enabled SLAs, which are used for verification and for live checks of sensor values indicating whether the shipment operates within the acceptable range of each constraint. The first decision that needs to be addressed is how this shipment will be packaged and encapsulated, minimizing cost and environmental impact by reducing empty space in cargo transports but, more importantly, making sure that the targeted quality of service is adhered to. Assuming that the dimensions of the shipped products are known, the encapsulation service is responsible for optimally packing the products (either directly or via their packaging) into PI containers. These containers are a core part of the shipment's travel through the PI network, as the sensors embedded in them are responsible for propagating all the relevant information (location, temperature, etc.) to decision makers in the PI as well as to interested parties. In this case, a refrigerated container with temperature and humidity sensors would be chosen.
After the products of the shipment are encapsulated, the order status is updated and is marked ready for routing.
For this specific example, this means that the shipment must reach Point B within a certain timeframe (5 days) and in a certain condition (not compromised, spoiled or damaged), as seen in Figure 10. The details of the shipping process, the requirements and constraints, as well as the financial aspects of the shipping process have already been defined by the shipping service, as described in more detail in D2.3 and D2.4. What is left is to define the best route through the PI-network to Point B. The routing service needs to be aware of all possible paths from Point A to Point B (Figure 11).
The Networking service contains information regarding PI node capabilities and the routes that connect them. In this instance, the networking service would take into consideration any constraints of the order to return only paths that are in line with these requirements. That means that the paths returned will have an ETA of less than 5 days and that, if storage is needed at a node during the trip, the node selected at any point must be refrigerated and support the required temperature of 3–6 °C and humidity of 30% to 40%. As such, all possible paths that are compatible with the shipment instructions and constraints, as defined earlier, are passed on to the Routing service by the Networking service. Having identified all possible routes, the routing service then ‘decides’ on the best route at the time (Figure 12), taking into account the instructions and constraints as defined by the shipping service. The routing service will also need to be aware of the conditions at each PI-hub, taking into account capacity, utilization and congestion between hubs.
In contrast to the digital internet, where the number of hops is a key metric, the physical distance between hubs is important as it affects cost, environmental performance and quality of service. The routing service will be ‘called’ at every hub, when a shipment arrives, in order to ensure that the planned path is still the ‘optimal’ one and to update it if necessary. Therefore, the arrival of a shipment at a hub is the event that triggers the routing service, taking into account updated information regarding the conditions at the next hubs. The routing service also requires information regarding all possible paths forward, as defined by the networking service. This means that ‘calling’ the routing service will also trigger the networking service for an update of possible routes. Changes to the predetermined route based on these routing iterations could also potentially trigger the Encapsulation service. Further bundling or unbundling of cargo can be done through this encapsulation iteration, always adhering to the constraints set up initially. These iterations allow for potentially much greater savings in terms of volume used and CO2 emissions, as the various PI movers can be loaded/unloaded at hubs located between Point A and Point B.
5.3.2 Transportation
Once the best path has been identified and the preferred means of transportation has been selected, the PI-container begins its journey in the PI-network. The selected means of transport must also adhere to the quality-of-service requirements, ensuring the integrity of the shipment as well as the timely arrival of the product. For this scenario, the selected transportation must offer a controlled environment (in terms of temperature and humidity) as well as monitoring capabilities. Monitoring capabilities must cover the coordinates, temperature, humidity and status of the shipment. The margin of error for the key metrics must also be provided.
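The monitoring described above amounts to checking live sensor readings against the SLA thresholds of section 5.1 (3–6 °C, 30% to 40% humidity). A minimal sketch, with an assumed reading format and function names that are not part of the project codebase:

```python
# Thresholds mirror the scenario constraints in section 5.1.
SLA = {"temp_c": (3.0, 6.0), "humidity_pct": (30.0, 40.0)}

def check_reading(reading, sla=SLA):
    """Return the list of violated constraints; an empty list means the
    shipment is operating within its SLA. A missing sensor value is
    treated as a violation, since compliance cannot be verified."""
    violations = []
    for key, (lo, hi) in sla.items():
        value = reading.get(key)
        if value is None or not lo <= value <= hi:
            violations.append(key)
    return violations

print(check_reading({"temp_c": 4.5, "humidity_pct": 35.0}))  # → []
print(check_reading({"temp_c": 7.2, "humidity_pct": 35.0}))  # → ['temp_c']
```

In the architecture, a non-empty violation list would correspond to the impeachment path handled by the Shipping service, with the blockchain-stored SLAs providing the verified thresholds.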
5.3.3 Arriving at a hub
When the shipment arrives at a PI-hub, some of the previously made decisions must be reevaluated. Arrival at a PI-hub is an event that will trigger reevaluation of the selected path and transportation means. This means that the routing and networking services will have to be ‘called’ again to evaluate potential new hubs and paths, identify better paths to the final destination and the required means to get there. In cases where the initial path is modified, the shipping service will have to be called in order to update the shipping agreement accordingly.
If the shipment has been identified as compromised, a new route will have to be defined. The nature of this route will depend on the instructions of the shipment. This might be a new reverse logistics network to return the shipment to the sender or a new route to the appropriate disposal of the shipment. In both cases, a new order may or may not be initiated. To achieve that, the routing, networking and shipping services will have to be ‘called’, to identify and evaluate possible paths. The shipping service will define the actions to be taken in case of a compromised shipment, while updating the shipping agreement accordingly.
In cases where the shipment needs to be decomposed in order to reach different final destinations, the encapsulation service needs to be ‘called’. Where shipments are split and combined, the encapsulation service needs to be aware of specific requirements, as defined in the shipping agreement. Relevant information includes the type of the product, potential issues with cross-contamination, physical information such as product form and, more importantly, handling instructions. For the cold-chain pharmaceutical product of this scenario, the encapsulation service must take into account the controlled-environment requirements and handling instructions. For example, this shipment should not be composed into larger shipments containing products with no specific temperature requirements. Conversely, if a shipment were to be composed into larger freight, the encapsulation service could perform a second encapsulation of containers within containers in order to reduce empty space and, by consequence, transport costs. From a practical perspective, using a cold chain to ship products with no such requirements would not only increase the cost of transport but might also affect all the products involved.
5.3.4 Arriving at the final destination
This scenario is concluded when the shipment arrives at the final destination. Once more, the final destination must have the capability to evaluate the condition of the shipment, ensuring that the temperature, humidity, and other handling requirements have not been compromised. If the shipment has been identified as compromised, a new route will have to be defined, as described in the previous section. In cases where the shipment fulfils the given conditions, the shipping service will have to be called in order to update the shipping agreement and create a ‘goods received’ note to be sent to the sender. The shipping service will also need to ensure that all network participants are notified, that their services are accounted for, and that the relevant receipts are produced.
5.4 Conclusions and impacts to the architecture
The updated scenario illustrates a high-level case study of a PI-network, showcasing the requirements, key decisions and interactions made by the services. These requirements include technical aspects, decisions, events, and information/data flows that are needed in order to realize the vision of the PI, and as such have heavily influenced the reference architecture.
5.4.1 Technical requirements
The technical requirements identified in this scenario include the need for monitoring across all steps in the PI-network. Monitoring should cover all quality-of-service, handling, storage and transportation requirements. For this scenario, these include monitoring of temperature, humidity and the integrity of the shipment. IoT sensors must also capture the location of the PI-containers throughout their journey. These requirements are addressed with the use of IoT sensor technology, as described in D1.6.
5.4.2 Decisions
The described scenario also addressed the decisions that need to be considered at each step. The key decisions that have been identified are the routing, transportation and encapsulation decisions that need to be re-evaluated at every step, to ensure that the shipment is on the best path to its final destination, taking into account updated information regarding capacities, congestion and other operational factors. All decisions will need to adhere to the requirements set by the sender in terms of quality of service and other shipment constraints.
5.4.3 Events
The identified events that occur in the initialization of a PI order are showcased as a flow diagram in Figure 13 below:
These events are:
- Order: placing an order for a product will trigger the planning of the shipping process, creating SLAs and composing a PI order with relevant constraints
- Encapsulation: products of order are packed optimally based on product requirements
- Networking: validate destination is in line with order constraints
- Routing: optimal route must be identified
- Shipment ready for dispatch: when the shipment is ready to be dispatched, the shipping agreement must be generated and the order is shipped
- Shipment arrives at hub: when a shipment arrives at a PI-hub, routing decisions and in some cases encapsulation decisions should be re-evaluated ensuring that the best path is followed
- Shipment arrives at final destination: when the shipment arrives at the final destination, a receipt must be generated, and all participants must be reimbursed for their services
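The event flow above can be sketched as a simple transition map. This is an illustrative assumption, not part of the ICONET specification; the event names and the flow dictionary are hypothetical:

```python
from enum import Enum, auto

class PIEvent(Enum):
    ORDER = auto()
    ENCAPSULATION = auto()
    NETWORKING = auto()
    ROUTING = auto()
    DISPATCH = auto()
    HUB_ARRIVAL = auto()
    FINAL_ARRIVAL = auto()

def next_events(event: PIEvent) -> list:
    """Return the events triggered by the given event, per the flow above.

    A hub arrival triggers either a routing re-evaluation (intermediate hub)
    or the final arrival (destination reached).
    """
    flow = {
        PIEvent.ORDER: [PIEvent.ENCAPSULATION],
        PIEvent.ENCAPSULATION: [PIEvent.NETWORKING],
        PIEvent.NETWORKING: [PIEvent.ROUTING],
        PIEvent.ROUTING: [PIEvent.DISPATCH],
        PIEvent.DISPATCH: [PIEvent.HUB_ARRIVAL],
        PIEvent.HUB_ARRIVAL: [PIEvent.ROUTING, PIEvent.FINAL_ARRIVAL],
        PIEvent.FINAL_ARRIVAL: [],
    }
    return flow[event]
```

The cycle between `HUB_ARRIVAL` and `ROUTING` captures the re-evaluation performed at every intermediate PI-hub.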
5.4.4 Data requirements
As a first step towards the conceptual PI architecture, the previous version identified the high-level data specifications for the different services and systems within the ICONET universe. The physical layer, while not being used as a service by ICONET, has been defined as the network of all physical elements of a supply chain network. As such, the elements of the Physical Layer were identified and described. The results of this identification process were used to guide the initial formation of the ICONET ontology. This version was iterated upon over the previous months of the project, based on the needs of the services and systems that were being developed. Furthermore, special care was taken to shape the next versions of the ontology in a manner that allowed for the seamless integration of the ICONET services. These definitions allowed for the creation of the conceptual data structures and flows that were later validated through the simulation service and the living lab use cases, ensuring interoperability across different scenarios and scales of PI implementations.
The previous version of the report identified the high-level requirements for key entities of a PI network. This included baseline container data of the PI container shown below (Figure 14).
A PI-container must be uniquely identified, hence the need for an identifier. The identifier could be linked to the IoT device that is monitoring the container, and it should be linked to the various identifiers the container might have in other systems (e.g. order id, shipment slip etc.) to facilitate provenance and traceability. Monitoring the status and location of containers also raises the need for operational information such as the location (latitude, longitude), delivery time, capacity, utilisation, origin and destination, as well as product-dependent information such as temperature and humidity. To facilitate the operations of the IoT sensors as presented in D2.7, a more concrete data model was created. Two same-level data structures, Shipment data and Requirement data, were used to contain the required information. The shipment section contains the identification data needed for linking PI containers with shipments, and the requirement section contains all configuration data for the various sensors of the container, allowing the transmission of alerts when detected values fall outside the acceptable ranges.
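The split between shipment identification data and requirement (sensor configuration) data could be sketched as follows. This is an illustrative sketch, not the D2.7 data model itself; all field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Sensor configuration: acceptable range for one monitored quantity."""
    sensor: str        # e.g. "temperature"
    min_value: float
    max_value: float

@dataclass
class PIContainer:
    """A PI-container with identification data and sensor requirements."""
    container_id: str                 # unique identifier
    linked_ids: dict = field(default_factory=dict)   # e.g. {"order_id": ...}
    requirements: list = field(default_factory=list)

    def check(self, readings: dict) -> list:
        """Return alert messages for readings outside acceptable ranges."""
        alerts = []
        for req in self.requirements:
            value = readings.get(req.sensor)
            if value is not None and not (req.min_value <= value <= req.max_value):
                alerts.append(f"{req.sensor} out of range: {value}")
        return alerts
```

For a cold-chain pharmaceutical shipment, a `Requirement("temperature", 2.0, 8.0)` entry would cause an alert to be transmitted whenever a reading leaves the 2–8 °C band.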
A similar approach has been adopted for PI-nodes, that receive PI-containers. The relevant data for PI-nodes is presented in Figure 15.
All PI-nodes must also be uniquely identifiable. It is also important to be aware of the level of a node (as defined in D1.7 – L1, L2, L3 for Country, NUTS-2 and Urban level nodes). In the process of identifying the optimal route for a container, the networking service must also be aware of any dependencies between nodes, the capacity and utilization level of each node, as well as the function of the node – the role assigned to each PI-node in a given network, to be taken into consideration during decision making. Lastly, the coordinates of a node must be known, allowing geofencing techniques to be used to determine whether a PI container is located at a node or not.
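A minimal version of the geofencing check could look like this, assuming a node is modelled as a coordinate plus a radius in kilometres (the radius and the coordinates below are illustrative assumptions):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def container_at_node(container_pos, node_pos, node_radius_km=1.0):
    """True if the container's GPS fix lies inside the node's geofence."""
    return haversine_km(*container_pos, *node_pos) <= node_radius_km
```

A GPS fix near a node's coordinates then resolves to "at node", while a fix tens of kilometres away does not, which is exactly the question the networking service needs answered.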
The PI-movers (trucks, ships and airplanes) that participate in a PI-network were characterized by the data presented in Figure 16.
In addition to the unique identifier that characterizes all PI-elements, a PI-mover must have information about its type, the routes it can serve, the frequency of the service it can provide, the capacity and utilization levels, as well as information regarding its filling rate. This information is needed by the ICONET services in order to identify the best allocation of resources and the selection of optimal routing.
Achieving optimal routing requires data from all PI-routes that are available in a PI-network. A route provides the connection between PI-nodes through a set of PI-links. Routes must be identified, and their links, type and stops must be provided. The data is presented in Figure 17.
Link data will be extracted from existing sources, such as TEN-T corridors at a pan-European level, or shipping route information for lower levels. Links are also a key component for which data is required in realizing the vision of ICONET. The data is presented in Figure 18.
Sets of links form routes, and the information needed to evaluate and optimize the selection of a link includes the nodes of the corridor, their type, the capacity of the corridor, the duration of the trip, information about any congestion that might occur, as well as the level they operate on. Given the dependencies presented in this section, the PI-network is defined as presented in Figure 19.
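How link data feed the routing decision can be sketched with a shortest-path search where each link's cost combines duration and a congestion factor. The node names, weights, and the cost model (`duration × congestion`) are hypothetical choices for illustration, not the ICONET optimisation model:

```python
import heapq

def best_route(links, origin, destination):
    """Cheapest path over PI-links via Dijkstra's algorithm.

    links: list of (from_node, to_node, duration_h, congestion_factor).
    Returns (total_cost, [nodes]) or (inf, []) if unreachable.
    """
    graph = {}
    for a, b, dur, cong in links:
        graph.setdefault(a, []).append((b, dur * cong))
    queue = [(0.0, origin, [origin])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []
```

With updated congestion factors arriving at each hub, re-running this search is the re-evaluation step described in Section 5.4.2.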
5.4.5 Integration with existing/legacy systems using PI Data Adapters
The physical layer represents the physical elements of a network, responsible for generating data. The data layer represents the tools and methods responsible for capturing the data. It consists of two components: the IoT component, accounting for all the IoT devices that are responsible for connectivity and the extraction of data, and the legacy component, accounting for the legacy systems that already exist across the supply chain partners.
At the legacy component, it is important to identify all relevant legacy systems and investigate integration options. Integration can be achieved through APIs or custom adapters developed within the ICONET project in a limited scope. Legacy systems are another source of data that will feed into the ICONET services to realize the vision of PI.
The complexity of manufacturing operations and that of logistic operations, have led to the development of a cornucopia of enterprise systems. From MRP (Materials Resource Planning) and MRP-II (Manufacturing Resources Planning) to ERP (Enterprise Resource Planning), enterprise systems are still growing in scope and quantity, integrating almost all downstream and upstream operations, aiming to offer an end-to-end supply chain system. For the purposes of ICONET, enterprise systems are classified as internal, external and extensional.
External systems are systems not directly related to logistics operations, but they offer significant information sources. These include systems that contain information regarding traffic, general search services and other related sources. More specifically, ICONET external systems might include services like Marine Traffic, offering real-time information on global ship traffic; weather data sources, offering weather information that can be input to various services; and routing data (e.g. Google Maps), offering information about available modes of transport, start and end dates, load data, and waypoints. Routing services can also offer traffic data, an invaluable input to almost all optimisation services.
Extensional systems are defined as systems that extend ICONET functionality. They are also external to ICONET, but rather than being information sources they provide services, such as optimisation solvers.
Internal systems are defined as the systems used in logistics operations. These are the legacy systems used by ICONET LL partners. They include numerous enterprise systems such as ERP systems, Warehouse Management Systems (WMS), Transportation Management Systems (TMS), Manufacturing Execution Systems (MES), Inventory Optimisation Systems, Procurement Systems, Production Planning Systems, Scheduling, Transportation Planning, Order Management Systems and Demand Forecasting tools. Input and output data for these systems must be identified and documented, connectors/adapters must be developed where necessary. These data must be supported by the ICONET ontology.
As a first step towards the conceptual PI architecture, it is key to identify the data requirements for the different services and systems within the PI universe. Identifying data requirements will inform the ICONET ontology and the initial architecture, as they will drive the integration-related decisions as well as the service modelling. The first step is to identify the data available at the legacy system level. Mapping the available data will enable the definition of data structures and data flows, and will eventually facilitate the development of the ICONET ontology that will enable system interoperability. The following data were considered in the course of the project, and a number of these data structures influenced and were included in the ICONET ontology.
Internal systems data include data from ERP, WMS, TMS, MES, IMS, Order Management and other planning systems. For ERP systems, the following data have been identified:
- User details, such as ID, Name, Phone, Title
- Account details, such as Name, ID, Type
- Contact details
- Campaign details, such as type and status, in order to track success rate or any other key performance indicator (KPI)
- Reporting
- Lead time
- Mapping information (to avoid syntactic issues)
- Advanced shipping notification
For WMS, the following data have been identified:
- Master data: Article number, Description, Article weight, Article length, Article width, Article height, Quantity unit, Type unit load, loading factor, gripping unit, Blocking indicator, Batch number, Weight/retrieval unit, Weight/unit load, Client, Best before date, Remaining runtime, Sorter capability
- Inventory data: No. articles, Total stock, Average stock, Minimum stock/art., No.UL/art., Available stock, Shortages
- Movement data: Goods receipts/day, Goods issues/day, Storages/d, Retrievals/d, Quantity transship./a, Restorages/d, Orders/d, Orders per article, Positions/order, Positions/d, Grips/Pos., Incoming orders/h, Order lead time, Material lead time, No. of orders/order type, Double cycles/d, Complete units/d
- Other systems data: Order types, Unit load master data, Packaging master data, Storage capacity, Space restrictions, Room restrictions, Utilization space/volume, No. UL/art., No. staff/dept., Operating costs (manpower, energy, maintenance), Investment costs (replacement), Value turnover/a, Productivity
For TMS, the following data have been identified:
• Modes of transport available: capacity, cost, fuel consumption, load restrictions, environmental footprint
• Schedule data: intermediate stops
• Product data: size, weight, volume, special conditions, best before date
• Routing data: routing information, high, low and average speed per mode of transport, transportation costs, paths, connectivity groups (intermodal)
For MES, the following data have been identified:
• Lots or batches coming in or being created
• Lot and batch attributes describing material
• Operating supplies used
• Machines used
• Process data obtained
• Individuals involved in the production process
• Tools used
• Repairs of machines and tools
• Quality data (such as measured values, inspection equipment used, inspection decisions)
For IMS, the following data have been identified:
• Items
• Item number
• Item Description
• Item categories
• Item cost data
• Item price data
• Inventory movements
• Location
• Available quantities
• Item suppliers
• Item manufacturers
• Item customers
• Lead time
• Best before date
For production planning systems, the following data have been identified:
• Current status
• Start, end and latest update dates
• Quantity
• Criticality
• Owner
• Data source
For demand planning systems, the following data have been identified:
• Name, description, and relevant categories of the sale
• Spatial and temporal data, such as Location and Due date
• Priority
• Status
• Minimum shipment and maximum lateness dates
• The Item related to the sale
• Quantity of the Item
• Owner of the Item
• Category and price of the item
• The Item's source
• Materials required
• Job details
• Projected materials requirements
• Historical data on material requirements
• Customer service levels
• Reorder point
• Economic order quantity
• Suppliers data
Various sources of external information have also been identified. This use of external data has two objectives. The first is to act as a complementary source of information, offering data that is not available in ICONET systems, while the second is to offer data that can be used for validation of data obtained from ICONET systems. For example, data obtained by IoT devices monitoring the location of PI-container can be cross-checked using global ship traffic data. The identified relevant sources are listed below. This list is by no means exhaustive.
Another type of information that might affect decision making in logistics operations is weather related data. Adverse weather conditions might affect the estimated time of arrival for ships or even trucks. The following types of information have been identified:
• Current weather conditions at a location
• Forecasted conditions at a location
Routing data are expected to be key for the realisation of PI. They are also expected to be invaluable input for many of ICONET’s offered services. The following routing data have been identified:
• All possible paths
• Distance of paths
• Time of paths
• Coordinates of paths
• Instructions
• Instruction description (navigation)
• Instruction distance
• Instruction duration
In the example described above (as well as the Living Lab descriptions in Section 6.2.2), it is clear that the bulk of the data to be used by the PI services will originate from these legacy, external and extensional systems. As such, the inclusion of integration points between these systems and the PI service stack is of utmost importance. To address this, the PI Reference Architecture relies on PI Data Adapters. These adapters will allow the transfer of information from systems outside the PI stack to PI Services, utilizing the principles outlined in the ICONET ontology for the necessary transformation of non-PI semantics into PI-enabled data.
To achieve this, these adapters will make use of established technologies for communication with outside systems, such as utilizing integration drivers/SDKs/Libraries (if available), APIs, direct database connections for retrieval of data or simple parsing of output files (usually .csv or JSON files). The communicated data will then be transformed to PI enabled data, to be forwarded to the corresponding services. Figure 20 below showcases the transformation that will occur between data identified from the aforementioned systems to PI data.
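The file-parsing flavour of such an adapter could be sketched as follows, for a legacy CSV export. The column names on both the legacy and the PI side are hypothetical; a real adapter would map onto the ICONET ontology:

```python
import csv
import io

# Hypothetical mapping from legacy CSV columns to PI-enabled field names.
LEGACY_TO_PI = {
    "ship_ref": "shipmentId",
    "dest": "destination",
    "temp_c": "temperature",
}

def adapt_legacy_csv(csv_text: str) -> list:
    """Parse legacy CSV rows and transform them into PI-enabled records."""
    rows = csv.DictReader(io.StringIO(csv_text))
    out = []
    for row in rows:
        # Rename fields according to the mapping (the semantic transformation).
        pi_record = {pi: row[legacy] for legacy, pi in LEGACY_TO_PI.items()}
        # Normalize types where the legacy export is text-only.
        pi_record["temperature"] = float(pi_record["temperature"])
        out.append(pi_record)
    return out
```

The resulting dictionaries are ready to be serialized as JSON and forwarded to the corresponding PI services; an API- or database-backed adapter would differ only in how the raw rows are obtained.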
These Data Adapters will also have to address issues regarding confidentiality, protection and security of data. While the technical implementation is not in the scope of ICONET for these adapters, from an architectural perspective, multiple solutions can be proposed. As is expected, these solutions might not apply to every single case since the adapters will need to be built on a per use basis. Such solutions could include:
- Air gapped data adapters for local system installations. Disallowing any network access means that the data is not accessible from outside and thus is safe from potential malicious third parties.
- Secure communications protocols, such as HTTPS or OAuth2 based protocols should be used for Data Adapters communicating through APIs.
- Anonymization protocols, which would be configured to mask potential sensitive data that are not necessary for PI operations.
On the last point specifically, anonymization protocols can also be used to address regulatory compliance issues, such as GDPR compliance. Section 6 also describes how the decentralized architecture approach provides innate benefits regarding data security, regulatory compliance and separation of concern.
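The anonymization idea reduces to masking configured sensitive fields before an adapter forwards a record. The field names below are illustrative assumptions, not a prescribed ICONET schema:

```python
# Fields assumed sensitive and unnecessary for PI operations (illustrative).
SENSITIVE_FIELDS = {"customer_name", "phone", "email"}

def anonymize(record: dict, sensitive=SENSITIVE_FIELDS) -> dict:
    """Return a copy of the record with sensitive values masked."""
    return {k: ("***" if k in sensitive else v) for k, v in record.items()}
```

Operational fields such as shipment identifiers pass through untouched, so the PI services keep everything they need while personal data never leaves the adapter, which is the separation-of-concern benefit noted above.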
Overall, using the methodologies described previously will enable the integration and interoperability of extensional, external and legacy data with the PI service stack. As the adoption of PI increases, it is envisioned that marketplaces may be created for the development of such adapters. Additionally, system developers could benefit from creating interoperability adapters in order to participate in this upcoming vertical market.
5.4.6 IoT component
The aim of the IoT component is to enable the realization of smart containers, facilitating track and trace of containers through real-time or near-real-time monitoring. To achieve this, ICONET will employ IoT mechanisms and devices that will monitor PI-containers and other PI-elements. The details of the IoT requirements are presented in D1.7. Initially, the plan for data requirements and IoT interoperability with the rest of the ICONET infrastructure was to use the iot-lite ontology (Figure 21).
In this diagram, representing an example of the iot-lite ontology, a sensing device (ssn:SensingDevice) is described as part of a wider system (ssn:System). The iot-lite ontology also describes the type of the sensor (e.g. temperature), the unit (e.g. Celsius), the location of the sensor (lat, long, alt), the coverage of the sensing device, as well as the service that exposes the data captured by the device. As the project and the work on the IoT cloud service progressed significantly, it was decided that a less verbose data scheme would be used, containing the values of the IoT sensors as well as the required meta-data for the configuration of said sensors. An example of this operational message in JSON format can be seen below (Figure 22 Example of IoT operational data).
{
"currentLocation": {
"time": "001623.0 210619",
"gps": {
"east": "15.48921",
"north": "43.68558"
}
},
"Content": {
"temperature": 27.01,
"humidity": 50,
"light": 1000,
"bat": 3.3,
"code": 0,
"shock": 0,
"ax": 100,
"ay": 200,
"az": 300
}
}
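A consumer of this operational message only needs standard JSON parsing plus a comparison against the configured sensor ranges. The ranges below are hypothetical; note that the `code` value must be a plain decimal integer, as hexadecimal literals are not valid JSON:

```python
import json

# The operational message from Figure 22, as a JSON string.
MESSAGE = """{
  "currentLocation": {"time": "001623.0 210619",
                      "gps": {"east": "15.48921", "north": "43.68558"}},
  "Content": {"temperature": 27.01, "humidity": 50, "light": 1000,
              "bat": 3.3, "code": 0, "shock": 0,
              "ax": 100, "ay": 200, "az": 300}
}"""

# Hypothetical acceptable ranges configured for this container's sensors.
RANGES = {"temperature": (-10.0, 30.0), "humidity": (0, 80)}

def alerts_for(message_text: str) -> list:
    """Return the sensors whose readings fall outside the configured ranges."""
    content = json.loads(message_text)["Content"]
    return [s for s, (lo, hi) in RANGES.items() if not (lo <= content[s] <= hi)]
```

For the message above, no alert is produced; a reading outside its range would surface the corresponding sensor name, triggering the alert transmission described for the requirement data.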
6 Physical Internet Decentralized Reference Architecture
6.1 Background of previous work
For the design of the reference architecture, ICONET has adopted an incremental and iterative approach in order to ensure that the architecture is robust and constantly updated by the evolving requirements and technology advances within ICONET and the research community in general. The outcome of the first version of this work is briefly presented in this section. The concept of the Digital Twin was used in order to position the required modules in relation to existing supply chain functions and roles. As such, the first iteration of the reference architecture, presented in Figure 23, presents a high-level view of the different components and their relations to the ICONET work packages.
For the first iteration of the conceptual architecture, an attempt was made to capture the different systems that need to be covered by the architecture and subsequently identify the data that will be available and therefore needed to be extracted, transformed and processed. In addition, all requirements stemming from the work carried out in WP1 were taken into account in order to inform the reference architecture. Table 3 provides an outline of the WP1 outputs used to inform the first iteration of the ICONET reference architecture.
Table 3 Mapping WP1 outputs
| Deliverable | Output |
|----------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| D1.1 PI-aligned digital and physical interconnectivity | Previous work and existing standards on Physical & Digital Interconnectivity |
| D1.3 PI network optimisation strategies and hub location problem modelling | Definition of optimisation service requirements (inputs and outputs) |
| D1.6 Requirements And High-Level Specifications For IoT Based Smart PI-Containers | Definition of requirements for IoT connectivity |
This first iteration was further extended with research findings from relevant work, offering a more focused view of the different modules (services). The initial identified modules (Figure 24), include modules that are needed in order to support the key functionalities of ICONET.
The analysis of the modules and interactions of the conceptual reference architecture was based on the findings of D1.10, and more specifically on the findings regarding the applicability of OLI in ICONET. The analysis followed the layers defined in OLI (Montreuil et al., 2012). For the final version of the conceptual reference architecture, further developments and findings on the applicability and functions of the various layers and their corresponding services were used. More specifically, the work done in D1.11 and D2.3 was used as a basis for the needs that the architecture will need to cover. The significant progress made in the technological implementation of these services, and the relevant discussions held with WP2 partners, also played an important role in the final design. This further guided the conceptual ICONET reference model, where a combination of the OLI and NOLI models was used to inform the final architecture. As such, the final reference architecture shifted to a more technical approach, describing the topology of services and their interactions in relation to the physical building blocks of the PI.
6.2 Current version of Reference Architecture
6.2.1 Design Principles
During the project, various workshops were organized to identify the key requirements of the services. The identification of these requirements, as presented in Section 4 of this document, helped shape the final architecture. Additional work done in WP1 and WP2 also played a critical role in the decisions made when designing the reference PI architecture. In the previous version of this document, a simple conceptual architecture was defined based on the initial interconnection requirements of the PI world with the various logistics entities, as shown in Figure 25. This diagram presents an interface between the various entities, through the use of ICONET Protocol Stack (as detailed in D1.11) and relevant ontology, with the ICONET services and PI node.

Figure 25 Conceptual Architecture
6.2.1.1 Eliciting Technical Requirements
The initial technical requirements, as presented in the previous version of this report, were elicited during project-wide workshops organized in the first period of the project. As the work progressed, the focus shifted to the design and development of the core services. The design of these services was iterated upon (as shown in deliverable report D2.4) based on requirements stemming from WP1, as well as real use case scenario requirements and needed capabilities, as communicated to WP2 partners by the Living Lab partners. Additionally, the interconnection of these services for the PoC Integration platform (as presented in deliverable report D2.20) spawned additional requirements that were needed to fully enable seamless communication between them, as well as with the simulation platform. During this iterative process, the PI architecture has been continuously evolving to adapt to the newly found requirements and capabilities. The cyclical nature of the design process that was followed can be seen in Figure 26 below.

Figure 26 Cyclical influence of ICONET work
6.2.1.2 Technologies
The technologies used when developing the various components of the ICONET project varied depending on the service. However, from the architectural point of view, the internal technologies used by the independent services are not very relevant. Instead, the focal point of the architecture is enabling data exchange and secure communication between the components, as well as scalability and interoperation between different scopes of businesses and operations. As such, the decision was made to create REST API interfaces for the services. This allows for standardized communication between services and potentially different implementations of client systems, as the REST paradigm is language agnostic. By using JSON as a common data format, interoperability between the services is also achieved. Additionally, a basis for secure communications is established by using HTTPS as a standardized defence against man-in-the-middle attacks. Additional authentication capabilities are supported by the services and the proposed architecture design, such as Basic (username & password) authentication, as well as multiple OAuth2 implementations.
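From a client system's perspective, the combination of HTTPS, JSON and Basic authentication amounts to a request like the sketch below. The URL and payload are hypothetical; real deployments would more likely use one of the supported OAuth2 flows:

```python
import base64
import urllib.request

def build_request(url: str, user: str, password: str, body: bytes):
    """Prepare an authenticated HTTPS POST carrying a JSON payload."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    return req

# Hypothetical PI service endpoint and shipment payload.
req = build_request("https://pi-hub.example/api/shipments",
                    "operator", "secret", b'{"shipmentId": "S1"}')
# The request is ready to send with urllib.request.urlopen(req).
```

Because the interface is plain HTTPS plus JSON, the same call can be issued from any language, which is the interoperability point made above.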
The data that will be generated and used by the ICONET services will grow with time, which led to discussions regarding the use of a central or distributed data storage repository. The data needs of the project can be addressed using either of these approaches. Assuming decentralization occurs at the PI Hubs (meaning that each PI Hub would also act as a node for the decentralized data storage), the decentralized storage could be slow to implement, as this would require business agreements between the participating parties. While the decentralized paradigm would complement the decentralized nature of the Physical Internet concept, a centralized data storage for each PI Hub would be easier to implement and set up. This means that each party hosting a set of the ICONET services will also utilize a data storage repository to be used by the services.
6.2.1.3 Digital Interconnectivity
According to the previous work detailed in “D1.1 PI-aligned digital and physical interconnectivity models and standards”, there is existing work to be considered as a starting point for achieving Digital Interconnectivity. Digital interconnectivity ensures that physical entities, constituents and factors can seamlessly exchange meaningful information across the PI, enabling fast, knowledge- and fact-based decision-making in action.
The review shows that there are many research projects and standardization initiatives, but a fully integrated approach for PI is not available. The ICONET Project created a PI-compliant stack of models, taking existing standards as a basis.
The following list summarizes the analysis done in D1.1 regarding the state-of-the-art of existing projects, standards and emerging trends in the field of Digital Interconnectivity for the PI.
- Data Capture & Encapsulation: the usage of GS1 Data Identification was considered and partially utilized.
- Data Integration & Standard Smart Interfaces: good work has been done in the MODULUSHCA Project, which may be extended by the GS1 Data Exchange standard.
- B2B Interoperability: ICONET software architecture considered distributed software systems, integrated by using loosely coupled protocols like web-services or RESTful services.
- Service Orchestration & Service Choreography: to achieve an integration of distributed systems, both approaches were considered.
6.2.2 Living Lab Considerations
Apart from the design principles and generic technical requirements mentioned in the previous sections, another validation of the approach chosen for this final architectural blueprint was its applicability to the Living Labs of the project. To achieve a generic and widely applicable architecture blueprint, the four Living Labs can be used for validation, as they differ significantly in their scale and operations. The following section provides an overview of each LL with the corresponding information flows, as decided by previous work done in the WP3 deliverables.
6.2.2.1 Living Lab 1 – Port of Antwerp
LL1 concentrates its efforts on the improvement of overall railway performance within the port of Antwerp by creating a railway community for all relevant stakeholders under the coordination of the Port of Antwerp Authority.
Figure 27 outlines the information flows among the key actors/platforms in Living Lab 1, along with the required PI Services. The external IoT Cloud Platform has been included to visualize the interaction with an externally managed tracking service, feeding the PI Shipping Services.
This information flow helps to derive the interactions of the various services and actors in the PoA Use Case. While this is helpful in orchestrating the services, for the architectural blueprint to be applicable in this case we need to take into consideration the complexity and the physical setup of the PoA.
The PI application for the PoA will need to orchestrate shipments making optimal use of all the individual nodes & operators contained therein, as shown in Figure 28 below.
| Actors | Description |
|------------------------------|--------------------------------------------------|
| Deep sea terminals (2) | MPET MSC European Terminal |
| | DPW Antwerp Gateway |
As such, the combination of the various “building blocks” of LL1 and the engagement of the capabilities and functionalities offered by the services can be seen in Figure 29 below.
In this Figure, two scenarios are examined:
1. The shipment arriving from a Deep-Sea port with an inland destination will fill the capacity of a train, and as such is loaded directly from the deep-sea terminal to depart for its destination.
2. The shipment will have to be unloaded into the deep-sea terminal, sorted and moved to an inland terminal, where it will be moved to a bundling station for wagon loading. The wagons will then be assembled into a train to depart for its inland destination.
As is apparent, there is an overlap of concern for the engaged services. In more detail:
- The Deep-Sea Ports will need to engage with the Shipping Service for information about incoming and outgoing cargo Shipments. The Encapsulation service will need to be engaged for information regarding bundling and sorting operations of containers as they were loaded onto the incoming ship, or bundling and sorting decisions for cargo to be loaded into the outgoing ship. Additional Encapsulation concerns come up for bundling cargo onto a train that will be loaded directly from the deep-sea terminals. Furthermore, Routing and Networking services will need to be contacted for information about actual routes and destinations of the outgoing cargo.
- In the cargo terminals, the Encapsulation service will make decisions regarding container stacking. The networking service will contain information about the cargo residing on each terminal.
- For bundling, the Encapsulation functionalities are handled by the optimization service, for optimal wagon loading and train composition.
- Finally, for the rail operators, Shipping, Routing and Networking services will need to be engaged to initiate the corresponding shipments optimally with the aforementioned functionalities.
As such, in the concept of the PI, the Port of Antwerp can be described as a PI Hub with its own PI Network comprising PI Nodes of various functionalities. To accommodate this, the PI Reference Architecture needs to enable the multilevel application of the PI Services while maintaining an optimal separation of concern and eliminating duplications of roles and functionalities. In addition, it needs to make use of the available external systems (RTS & BTS) of PoA, as the data and decisions made therein will have to function in parallel with the PI applications. The scheduling functionalities offered by these systems will need to be taken into account when planning for cargo consolidation and wagon bundling.
To achieve this, the PI reference architecture orchestrates the services and functionalities to be generic and abstract enough to fit a variety of criteria and operational needs, enabling them to operate on different levels. For the case of PoA, this would mean that services can be used in each of the individual nodes comprising the network on the lowest level, while at the same time having a top-level instance that would be handled by the Port Authority. This top-level instance could then be used by other PI Nodes & Hubs, routing cargo specifically to it, with the top level PI Hub then making the decisions (based on parameters discussed previously) regarding which of the lower level nodes would be assigned the continuation of the shipment. The simplified visual representation of this multilevel nature of the PoA PI Hub is presented in Figure 30 below.
Living Lab 2 – Procter & Gamble
The goal of the PI application in LL2 is to provide visibility and monitoring capabilities in shipments across established multimodal corridors. The two corridors examined for this Living Lab are Mechelen to West Thurrock using trucks and ferries, and Mechelen to Agnadello using trucks and trains.
Figure 31 outlines the information flows among the key actors/platforms in Living Lab 2, along with the required PI Services. The external IoT Cloud Platform has been included to visualize the interaction with an externally managed tracking service, feeding the PI Shipping Services.
While this scenario is simpler than LL1, the Track & Trace functionality that is being utilized is one of the most important aspects of the PI concept. The PI nodes, or the "building blocks" that are utilized in this Living Lab, are essentially the origin, destination and intermediate stops of a shipment. In Figure 32 below, a high-level view of the steps taken to complete the routes using the two corridors is shown, in conjunction with the relevant services.
As is apparent, the PI services are engaged across every stop in these corridors. The common functionality across these nodes, in terms of service usage, can be described as follows:
- The Shipping Service is present to orchestrate the rest of the services. It also provides a high-level overview of the shipment and its state and, at the same time, it receives updates from the containers containing IoT sensors through the IoT Cloud Service.
- The Encapsulation service is responsible for the bundling and unbundling operations that occur in each terminal, as needed. In a shipment that uses a single mode of transport this service could potentially be omitted, but in this case, as expected, bundling/unbundling operations take place before each mode change.
- While the Routing service will remain mostly inactive in these corridors, seeing as the routes the shipments take are largely predetermined to follow the exact corridor, its presence is important, as rerouting could be needed depending on the conditions of the links between two nodes.
- The Networking service is also needed to have availability and capability information about these nodes, as well as information regarding the state of the links and routes between them.
The main concern for this Living Lab, as already mentioned, is to provide the tracking capabilities innate in PI across an end-to-end shipment that is using these corridors. As such, the architecture will need to position the building blocks in a manner that minimizes the risk of loss of information. This is achieved by positioning the Shipping Service, which is the principal tracking service, in every intermediate stop. This placement allows for two-way tracking of the containers and the shipment, from both the origin Shipping Service and the destination Shipping Service, at any step of the shipment.
It is also important to note that while these corridors are composed of individual links and routes between the nodes they encompass, they are also considered as a single route. A visual representation of this can be seen in Figure 31 below.
As such, when a shipment needs to arrive at the end destination of PI Corridor A, for example, the initial routing will only need to route to the "PI Corridor A" initial hub. The subsequent routing and shipment through the corridor will then be performed by the internal corridor services residing on individual hubs. This means that the initial routing, in conjunction with the networking service, can consider a PI Corridor as a single link, with corresponding information, such as mean travel time, condition, expected delays etc. referring to the unified corridor instead of each single individual link.
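The idea of treating a corridor as a single link can be sketched as follows. This is a minimal illustration in Python with hypothetical names and figures; it simply aggregates the corridor's internal links into one virtual edge whose travel time and expected delay are the end-to-end sums, which is the information the Routing and Networking services would expose for the unified corridor.

```python
from dataclasses import dataclass

@dataclass
class Link:
    origin: str
    destination: str
    mean_travel_time_h: float
    expected_delay_h: float

def corridor_as_link(links: list[Link]) -> Link:
    """Collapse a corridor's internal links into one virtual link.

    Travel times and delays are summed end to end, so a Routing
    service outside the corridor can treat it as a single edge.
    """
    return Link(
        origin=links[0].origin,
        destination=links[-1].destination,
        mean_travel_time_h=sum(l.mean_travel_time_h for l in links),
        expected_delay_h=sum(l.expected_delay_h for l in links),
    )

# hypothetical internal links of "PI Corridor A"
corridor_a = [
    Link("Mechelen", "Antwerp", 1.5, 0.2),
    Link("Antwerp", "West Thurrock", 9.0, 1.0),
]
virtual = corridor_as_link(corridor_a)
```

The external routing only ever sees `virtual`; the per-link detail remains internal to the corridor's own services.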
Living Lab 3 – SONAE
LL3 will address the challenges of eCommerce channel fulfilment in urban environments and study the role of the regional distribution centres-hubs as PI nodes within the PI paradigm. In particular, SONAE will investigate how regional warehouses, jointly with (smaller) local stores, can act as PI Nodes for urban distribution and fulfilment of eCommerce Purchase Orders, with the main goal of reducing stock-outs, costs and lead times.
Figure 34 outlines the envisioned information flows among the key actors/platforms in Living Lab 3, along with the required PI Services. The image below shows an outline of the initial approach to the connection between the different services and the simulation model. The model begins with a set of orders that a sender wants to send through the PI network. Using the networking service, a coordinator can configure the nodes and transports that are available in the network to manage the selected orders. The simulation model can also be interfaced with the route optimization service to determine the best urban transport routes to deliver the orders to the customers with the selected strategy.
Apart from the interconnection of the services, Figure 35 below presents the high-level flow between the main components of this Living Lab.
While some similarities exist with the previous Use Cases, this Living Lab is quite different. As can be seen in Figure 35, the engaged PI services exist across most of the PI Hubs & Nodes that comprise the operations of LL3, with the exception of regular stores and last-mile delivery destinations. The key difference stems from the goal of this Living Lab, which is reducing stock-outs and lead times. This means that the transfer of goods between the involved nodes needs to be optimized. For this reason, ICONET examines the possibility of additional internal orders being generated to replenish stock across stores. Again, the common functionality can be summarized as follows:
- Shipping Service will need to engage with Suppliers, Warehouses and stores where possible to provide better visibility of incoming and outgoing shipments. This can be particularly useful across the board in this LL, as it will allow for better tracking of incoming and outgoing stock of products in stores.
- Encapsulation service is needed in every location where loading & unloading operations occur, meaning across Suppliers’ warehouses, central SONAE warehouses, darkstores, support stores and retail stores. The Encapsulation service will be tasked with optimally loading the products that are to be distributed in an order that makes sense and streamlines delivery and unloading operations.
- Routing service will need to route the various shipments across the different nodes, from origin to destination, and as such is needed on each node where shipments can originate from or conclude to. Additionally, a Rerouting action can occur when a delivery truck needs to load products to be delivered from another facility than originally planned, due to stock-outs.
- Networking Service is needed across the various nodes, as it needs to be up to date regarding available stock, by using the information tracked by other services and external systems to monitor the incoming and outgoing stock of products, based on orders.
Taking into account what was previously mentioned, it is clear that the problem the PI concept is called upon to solve is the internal orchestration of stock traffic, to reduce stock-outs and to position stock in stores closer to the end customers, reducing delivery times. These two optimizations would obviously reduce the costs associated with transport and with the customer support needed to address these stock-outs and offer replacements.
To accommodate this, the PI reference architecture will need to be flexible enough to allow additional internally orchestrated orders to be generated, originating from within the PI services instead of the classic paradigm that positions the PI orders as originating externally. These dynamic, data-driven generated orders will greatly optimize an eCommerce Network, as replenishment actions can be triggered and orchestrated automatically. Of course, the level of automation will need to be configurable by the interested parties and be set within desired limits.
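The internally generated, data-driven replenishment orders described above can be sketched as follows. This is a minimal Python illustration with hypothetical product names and thresholds; the configurable automation limit is modelled as a simple cap on the number of orders released automatically, with the remainder held for approval.

```python
def generate_replenishment_orders(stock, reorder_points, max_auto_orders):
    """Generate internal PI orders when stock falls below a reorder point.

    stock:           current units per product at a store
    reorder_points:  threshold per product that triggers replenishment
    max_auto_orders: configurable cap on automatically released orders
    """
    orders = []
    for product, level in stock.items():
        threshold = reorder_points.get(product)
        if threshold is not None and level < threshold:
            orders.append({"product": product,
                           "quantity": threshold - level,
                           "origin": "networking-service"})
    # honour the configured automation limit; the rest await approval
    return orders[:max_auto_orders], orders[max_auto_orders:]

auto, pending = generate_replenishment_orders(
    stock={"soap": 2, "shampoo": 40, "towels": 0},
    reorder_points={"soap": 10, "shampoo": 25, "towels": 5},
    max_auto_orders=1,
)
```

Here the order originates from within the PI services (the `origin` field), rather than from an external party, matching the shift away from the classic paradigm described above.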
Living Lab 4 – Stockbooking
LL4 is designed to investigate the potential of e-Warehousing as a key enabler of the PI concept. LL4 will provide warehousing services structured around the PI concept, which will be tested and enhanced in the LL.
This section describes the technical integration performed and the upcoming simulation dealing with the PI network. It starts by defining the test scenarios, then describes the algorithms planned to simulate the PI, and lastly defines the data management of the system.
Figure 36 outlines the information flows (To-Be) among the key actors/platforms in Living Lab 4, along with the required PI Services. The external IoT Cloud Platform has been included to visualize the interaction with an externally managed tracking service, feeding the PI Shipping Services.
(The figure depicts the sequence of interactions between the PI Web Logistics, Encapsulation, Routing, Network and Shipping Services, the WaaS platform and the Cloud IoT Service: a PI order triggers network discovery and a request for warehouse availability/conditions, the optimal warehouses are calculated, a route and a routing table are produced, and shipping instructions are formed and exposed. During transport and storage, IoT data is requested and transformed into transport events, the ETA to the next PI node is recalculated and exposed, storage/transport events and conditions are recorded, transport events are stored in a blockchain, and a smart storage contract is established.)
Apart from the interconnection of the services, as shown in the Figure above, this Living Lab, similar to Living Lab 3, focuses on the dynamic allocation of stock across Stockbooking warehouses. Unlike LL3, this allocation originates from customer orders and is not entirely decided by Stockbooking itself. However, the goals of the PI-offered optimizations for this Use Case are alike: reducing stock-outs, allocating stock across different warehouses (per customer request) and optimizing the warehouse fulfilment rate. As such, the PI services will offer the same dynamic data-driven decisions to automate and optimize decisions made on products originating from the Suppliers, or on cargo moving between the Stockbooking warehouses.
6.2.3 Final PI Architecture Blueprint
Based on the extended study of PI Reference models, the functionalities of services and their interactions, as well as their inputs/outputs and data requirements, the design approach mentioned in the previous section and the Living Lab considerations, the current version of the PI reference architecture was formulated. The main challenge was to find a solution that can accommodate the efficient operations for various logistics actors who participate not only in different scales, but different operational modes. The PI architecture will need to cover logistics operations that occur through a city, but at the same time cover operations that occur through countries. Additionally, multiple modes of transport need to be covered by a generic solution. Moreover, the dynamic nature of some of the cases examined will need to be accommodated by the PI architecture and the overall service functionality.
Similar to LL1, Figure 37 provides a simplified example of the different scales of operations. In the example, a shipment produced by a factory needs to reach a deep-sea port for global transport. It would need to be initially transported locally to a PI hub that supports transport across a region (national or international). Assuming the regional transport destination is not the deep-sea port, an additional local transport is needed to reach it. In this very simple example, the shipment goes through three hubs that operate on a local, regional and global level respectively. Additionally, three modes of transport are used: trucks, trains and freighters.
The PI architecture needs to support each of those hubs, their level of operations and the different modes of transport. In a centralized architecture, this would mean that the centralized repository of the ICONET services would need to have data available that span a vast geographical area, while also maintaining and updating routes, locations, and link conditions for all the potential nodes that reside therein. In contrast, in a decentralized paradigm, each node will only have to contain information about the operations pertaining to the specific node.
Moreover, based on the needs of LL1, it is apparent that the physical installation of the services doesn’t always correspond with the physical operations that need to take place for the fully enabled PI flow of goods. As such, it is important to note that the stack of PI services can either be installed using local hardware or even be hosted in the cloud, as long as their operations concern a single node or hub. Additionally, some of the building blocks will need to be engaged by one or two of the services, and not the whole stack. This means that the services will be built using a plug-&-play approach, while the instances of the services they depend on will be configurable, allowing for single service installations that communicate directly with services located in other nodes, if needed.
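The plug-&-play approach with configurable service instances can be illustrated with a small sketch. This is a hypothetical Python illustration, not the project's implementation: each service dependency is resolved by name to a configurable endpoint, which may point to a local installation, a cloud host, or a service residing on another node.

```python
class ServiceStack:
    """Minimal plug-&-play registry: each PI service dependency is
    resolved by name to a configurable endpoint, local or remote."""

    def __init__(self):
        self._endpoints = {}

    def register(self, service: str, endpoint: str):
        # a node only registers the services it actually uses
        self._endpoints[service] = endpoint

    def resolve(self, service: str) -> str:
        try:
            return self._endpoints[service]
        except KeyError:
            raise LookupError(
                f"service '{service}' is not configured on this node"
            ) from None

node = ServiceStack()
node.register("routing", "http://localhost:8081")           # local install
node.register("encapsulation", "https://hub-a.example/enc")  # remote node
```

A single-service installation would simply register remote endpoints for every dependency it does not host itself.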
Based on the conclusions drawn from LL3 and LL4 regarding the dynamic nature of decisions in some networks, the architecture as well as the services themselves have already been positioned in a manner that allows a single service to initiate an order. Assuming that these decisions are based on stock-outs, the Networking service would be the most appropriate one to initiate these orders, since it has knowledge of stock levels across each node.
As such, following the concept of the PI, where software and networking concepts and technologies are applied to the logistics world, the decision was made to follow a decentralized paradigm for the PI reference architecture, with PI nodes acting as nodes of the system. Decentralized systems in the modern world offer many advantages in comparison to centralized systems. System availability on a decentralized system is not dependent on the health of a single node, as if one node fails others can continue operating independently, essentially eliminating the single point of failure as would be the case with a centralized system. Scaling of resources can be done vertically for individual nodes, allowing them to add more resources according to their operations’ needs. Furthermore, the autonomous nature of a decentralized system is a great fit for ICONET and the PI concept, as services residing on each node allow these nodes to configure their instances as they see fit and will not have to rely on a central authority or service provider for specific implementations.
The decentralized paradigm offers great capabilities in terms of service interactions and relevant data. The services residing in a single node need to only contain and access information about operations pertaining to that single node, eliminating complex data stores. This information is then made available when requested by another node in the network.
As is apparent, this also has ramifications in the data security department. Node operators will be more likely to allow data stemming from external and proprietary systems to be made available to the PI network when having the capability to host these services themselves locally. As mentioned in section 5.4.5, this data can be transitioned to the PI network by using the PI Data adapters for external systems which would also be hosted in the same installation as the PI services, allowing greater control of when, how and which data should be made available to the PI network.
To better visualize the implementation of the decentralized architecture (based on the points raised above), the PI services and their interactions are presented using the PI nodes as a basis in Figure 38 below:
This architecture diagram shows the interactions between services on a single node, as well as communication on a node-to-node basis, with shared interactions between services being highlighted. Figure 39 below shows the same internal interactions positioned along with physical logistics elements, showcasing the applicability of the architecture in scenarios with the same goals and needs as the Living Labs described in the previous sections.
This figure examines two scenarios (note that all nodes contain their own set of PI services):
- In black, flow of an order passing through a corridor and its internal links and stops.
- In blue, flow of an order originating from a composite PI hub due to an internal node Stock out, with no option of local replenishment.
For the first scenario, a shipment originating from a PI node with its own set of services is routed to Corridor A. For this to happen, the shipment is first encapsulated into PI containers and loaded onto a PI mover. The originating node's services communicate with the initial node of the corridor to notify it of the incoming shipment. This is done with the service flows described in previous sections. Similarly to LL2, this initial corridor node then utilizes its own stack of PI services to communicate and route the shipment to the corridor's internal nodes. After the shipment reaches the final corridor node, that node communicates with the services residing in PI Hub A to notify them of the incoming shipment, after which the routed and potentially further encapsulated cargo is transported to PI Hub A.
For the second scenario, a sub-node of PI Hub B (similar to the multilevel nature of LL1) detects a stock-out using the Networking service. This sub-node then queries its neighbouring nodes for potential replenishment options.
As no such options are available, it finally communicates with PI Hub A, which can fulfil the stock replenishment. A PI order is then generated from PI Hub A, which follows the PI service flows (creating an SLA, encapsulating products and routing them to PI Hub B). The order is then transported to the central PI Hub B node, after which the central node, using its own services, routes the shipment to the sub-node. This scenario is similar to the dynamic nature of LL3 & LL4, as described in the previous section.
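The escalating query of the second scenario can be sketched as a simple lookup. This is a minimal Python illustration under assumed data shapes (stock held as per-product dictionaries): the sub-node first asks its neighbouring sub-nodes, and only falls back to the parent hub when none can cover the requested quantity.

```python
def find_replenishment_source(product, qty, neighbours, hub_stock):
    """Escalating replenishment query: ask neighbouring sub-nodes first,
    then fall back to the parent PI Hub. Returns the chosen source name,
    or None if nobody can fulfil the request."""
    for name, stock in neighbours.items():
        if stock.get(product, 0) >= qty:
            return name
    if hub_stock.get(product, 0) >= qty:
        return "hub"
    return None

# hypothetical stock levels across PI Hub B's sub-nodes and PI Hub A
neighbours = {"sub-b1": {"widgets": 0}, "sub-b2": {"widgets": 3}}
hub_stock = {"widgets": 100}
```

For a request of 10 widgets, no neighbour qualifies and the hub is selected; for a request of 2, the nearer `sub-b2` would be preferred.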
The overall functionalities, in both cases, can be split in three distinct planes, Control, Management, and Forwarding. A detailed overview of these planes can be found in the following section.
Management Plane
In computer networking, the management plane of a networking device is the element of a system that provides configuration, monitoring and management services to all layers of the network stack and other parts of the system.
The management plane of the PI Node and its services is responsible for orchestrating, managing, configuring and monitoring a shipment through its lifecycle. As can be seen in Figure 21, this plane contains the Logistics Web, Shipping, and IoT Cloud services. The overall management of a shipment will happen through these services, either directly through the PI via the Shipping Service, or potentially even through a legacy system via the connecting Web Logistics service. All of these services contribute to keeping the quality of a shipment intact, by continuously querying each other to monitor sensor values. Additionally, the Track & Trace functionality of the PI is enabled by two of those services. This allows the shipping services of both nodes to receive live updates on the state of the shipment, while checking (as mentioned previously) against constraints on SLAs stored in the Web Logistics service. This allows both origin and destination to better orchestrate their overall operations, greatly reducing intake and outtake times. Furthermore, in case of an impediment, both origin and destination are notified and can take appropriate actions.
Forwarding Plane
In computer networking, the forwarding plane, sometimes called the data plane or user plane, defines the part of the router architecture that decides what to do with packets arriving on an inbound interface. Most commonly, it refers to a table in which the router looks up the destination address of the incoming packet and retrieves the information necessary to determine the path from the receiving element, through the internal forwarding fabric of the router, and to the proper outgoing interface(s).
In the PI, the forwarding plane of a node contains the encapsulation service. This service is responsible for encapsulating the incoming shipments to PI containers and potentially encapsulating these containers further into other containers or directly into PI movers, essentially enabling the forwarding of the shipment to other nodes.
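The nested encapsulation performed by the forwarding plane (shipments into PI containers, containers into larger containers or directly into PI movers) can be sketched as follows. This is a hypothetical Python illustration of the data structure, not the service's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PIContainer:
    container_id: str
    contents: list = field(default_factory=list)  # shipments or containers

def encapsulate(items, container_id):
    """Encapsulate shipments (or other PI containers) into a container."""
    return PIContainer(container_id, list(items))

def flatten(container):
    """Recursively list the leaf shipments inside a (nested) container."""
    leaves = []
    for item in container.contents:
        if isinstance(item, PIContainer):
            leaves.extend(flatten(item))
        else:
            leaves.append(item)
    return leaves

# two pallets bundled into a box, then loaded onto a PI mover
inner = encapsulate(["pallet-1", "pallet-2"], "box-A")
mover = encapsulate([inner, "pallet-3"], "truck-7")
```

Unbundling at a mode change is the inverse operation: the receiving node flattens one level of nesting and re-encapsulates for the next leg.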
Control Plane
In network routing, the control plane is the part of the router architecture that is concerned with drawing the network topology, or the information in a (possibly augmented) routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element. In most cases, the routing table contains a list of destination addresses and the outgoing interface(s) associated with them. In the PI node, the Control plane contains the Routing and Networking services. The networking service is responsible for holding all information of the node that needs to be made available to other nodes, describing capabilities, potential routes, capacity, locations etc. This allows the Routing
service to make better decisions as to where to route a potential shipment, as the Networking service of the candidate nodes can provide important information. As such, the Networking service of one node can exchange information with the corresponding service of another node and, similar to network routing, create a "routing table" for the routing service from the data exchanged between the various nodes. This can be done through Network Discovery, essentially the communication between the Networking services of nodes, or it can be fed information directly from legacy/external systems.
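The construction of such a "routing table" from neighbour announcements can be sketched in a few lines. This is a minimal, distance-vector-style Python illustration with hypothetical node names: each announcement carries a neighbour's advertised cost to a destination, and the table keeps the cheapest next hop per destination.

```python
def build_routing_table(local, announcements):
    """Build a routing table from neighbour Networking-service
    announcements, keeping the lowest-cost next hop per destination.

    announcements: iterable of (neighbour, destination, cost) tuples,
    where cost is the neighbour's advertised cost to the destination.
    """
    table = {}
    for neighbour, destination, cost in announcements:
        if destination == local:
            continue  # no route needed back to ourselves
        best = table.get(destination)
        if best is None or cost < best[1]:
            table[destination] = (neighbour, cost)
    return table

table = build_routing_table("node-a", [
    ("node-b", "hub-x", 4),
    ("node-c", "hub-x", 2),   # cheaper next hop wins
    ("node-b", "node-a", 1),  # ignored
])
```

In practice the cost could combine travel time, link condition and capacity, all of which the Networking service already tracks.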
Relation to NFV and SDN Network Paradigms
NFV, or Network Function Virtualization is a relatively new conceptual network architecture, that uses IT virtualization techniques to virtualize entire classes of network node functions into individual building blocks (Virtual Network Function or VNF Components) that may be used to connect, chain together, or create communication services. Essentially, NFV describes a way to reduce cost and accelerate service deployment for network operators by decoupling functions like a firewall or encryption from dedicated hardware and moving them to virtual servers. Instead of installing expensive proprietary hardware, service providers can purchase inexpensive switches, storage and servers to run virtual machines that perform network functions.
While NFV is not directly associated with the PI and the ICONET project, some of its innate benefits were used as an inspiration for what the architecture should achieve. In a more abstract sense, the concept of PI, which equates classical logistics functions and decisions to corresponding IT components and services, is similar to that of NFV, as both take traditionally hardware- or physically-bound roles and virtualize them to be performed via software methods. The main difference, of course, is that the PI still has a physical element to it: no matter the decisions made by the software components, corresponding physical actions must still be taken. However, offerings of the NFV concept, such as modularity, scaling and the virtualization of functions, are covered in the ICONET architecture. The services are positioned in such a way that they can be substituted or removed entirely if a PI node is not interested in a specific function of the PI. For example, if a PI node operator does not perform additional bundling operations on their premises, the encapsulation service would not be used. The distributed nature of the architecture, as mentioned previously, offers great capabilities for scaling up or out. Additionally, the virtualization of functions occurs throughout the PI services, as, again in a more abstract way, they perform virtual actions to reach conclusions that will later be translated into physical actions.
Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management. SDN is meant to address the fact that the static architecture of traditional networks is decentralized and complex, while current networks require more flexibility and easy troubleshooting. SDN attempts to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). For SDN, while some parallels could be drawn with regard to the PI architecture, with the separation of planes and the centralized management and configuration through the Management plane, there is no exact correspondence. The combined NFV-SDN paradigm is a better comparison to PI, as some SDN principles can be used to better manage and orchestrate VNF components. Similarly, in PI, it could be said that the overall Management plane is used in a similar fashion to what SDN describes, where configuration of components occurs in an open and flexible way through a centralized orchestrator/controller, while keeping the different planes of concern separate.
It is important to note that while both paradigms can be used in conjunction, they are not interchangeable or dependent on one another. Their principles complement each other, but there can be implementations where only one of the two is used. The feasibility and relation of these network paradigms has also been explored in D1.11 and D2.19, where an application of SDN is implemented in relation to the PoC platform.
Conclusions
In conclusion, the presented PI reference architecture is sufficiently generic and high level to encompass a variety of requirements, as expressed previously. Both multilevel and dynamic use cases were considered and addressed, based on real world use cases from the ICONET Living Labs. NFV and SDN network paradigms were examined in relation to PI, with some of their design patterns being utilized for the reference architecture. Interfacing with external systems can be done through the use of the PI Data Adapters, as described in section 5.4.5. Communications between external systems and the PI stack, as well as the internal communications of the PI stack will be sufficiently secured by using current industry standards for data protection, security and privacy. Regulatory compliance is also achieved by utilizing some of the same standards. In addition, the decentralized paradigm followed has a clear separation of concern that occurs naturally, as services are positioned locally instead of centrally, which also emphasizes data sovereignty across various operators and parties. The result is a widely applicable decentralized PI reference architecture, that provides value to the PI concept as it describes a simple yet robust architectural paradigm that can be adapted to most logistics operations and can be iterated and built upon as the PI concept becomes more widespread.
7 ICONET Ontology
Ontologies are knowledge representation tools that have advanced through the years and are currently used in an extensive range of applications across different fields. The most widely used definition is the one coined by (Studer, Benjamins & Fensel 1998), who defined ontology as "a formal explicit specification of a shared conceptualisation". This definition is a product of the evolution of the initial definition coined by (Gruber 1991), who defined ontologies as "vocabularies of representational terms – classes, relations, functions, object constants – with agreed-upon definitions, in the form of human readable text and machine enforceable, declarative constraints on their well-formed use."
Ontologies are essentially a group of terms organised in a hierarchical structure (class-subclass), describing a specific domain (Trokanas, Cecelja & Raafat 2014). Ontologies are enriched with properties characterizing terms and restrictions on these properties. Finally, ontologies are completed with instances representing specific entities of the domain.
The data that need to be exchanged will be represented by an ontology. The aim of the ICONET ontology is to enable interoperability and standardisation among different systems and services, hence eliminating any syntactic or semantic heterogeneity. The ICONET ontology is based on the common data model for PI (as developed in the MODULUSHCA project) (Figure 40); however, the ontology development process followed a bottom-up approach, starting by modelling the data identified in Figure 41 (Sternberg and Norman 2017). The initial ontology is presented in Figure 40.
Building on the common data model (Figure 41) and extending it, the ICONET ontology also covers sensor data.
The ICONET ontology builds on existing knowledge and is expanded to include semantic rules that can capture relevant knowledge and use it to evaluate data as well as certain aspects of operations. For example, semantic rules can be used to flag cases where the temperature of a product in-transit has exceeded the permitted limit and needs to be pulled out of the network. Such an example rule is presented in Figure 42.
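The logic of such a semantic rule (the actual rule is presented in Figure 42) can be illustrated procedurally. The following Python sketch is only an illustration of the rule's effect with hypothetical product names and readings, not the ontology formalism itself: products whose recorded in-transit temperature exceeds the permitted limit are flagged for removal from the network.

```python
def flag_for_removal(readings, max_temp_c):
    """Flag in-transit products whose recorded temperature exceeded the
    permitted limit, so they can be pulled out of the network."""
    return [r["product"]
            for r in readings
            if r["temperature_c"] > max_temp_c]

# hypothetical sensor readings for two temperature-sensitive products
readings = [
    {"product": "lot-1", "temperature_c": 7.5},
    {"product": "lot-2", "temperature_c": 12.0},
]
```

In the ontology, the same condition would be expressed declaratively over the instances and their sensor-data properties, so it applies automatically to any product matching the rule.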
To achieve that, the data model covers all aspects of an end-to-end supply chain and their digital representation. Additionally, ontologies have been recognized as a tool for achieving interoperability between systems and IoT devices. The complete ICONET Domain Model is presented in Figure 43 below.
The Domain Model, as shown above, models the PI entities that are of interest to the key services. For the IoT sensors, a less granular model was chosen. Since message size matters in sensor communications, this minified model does not carry PI semantics itself but links to the ICONET Domain Model through the PI order and the Shipping Service. Figure 44 shows an initial PI order and the relevant IoT section to be submitted to the Cloud Platform to configure the various sensors and their corresponding alerts.
```
"PIOrder": {
// other meta data
"CloudIoTService": {
"Shipment": {
"carrierCode": "iconet",
"tripCode": 54,
"orderID": "12AbCxYz",
"containerID": "00291939"
},
"Requirements": {
"temperature": {
"basicConfiguration": {
"activation": "true",
"unit": "C",
"sendMeasurement": "true",
"sendAlarm": "true"
},
"advancedConfiguration": {
"numberOfAlarms": 3
},
"alarmConfiguration": {
"alarm1": {
"max": 50,
"min": 0,
"sendAlarmIfOutsideRange": "true"
},
"alarm2": {
"max": 100,
"min": -70,
"sendAlarmIfInsideRange": "true"
},
"alarm3": {
"min": -5,
"sendAlarmIfOutsideRange": "true"
}
}
}
}
}
}
```
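As a sketch of how the `alarmConfiguration` block above could be evaluated against a temperature reading — the traversal logic is an assumption for illustration, not part of the ICONET specification:

```python
# Evaluate a sensor reading against alarm configurations that use the
# sendAlarmIfOutsideRange / sendAlarmIfInsideRange flags shown above.
# A missing "min" or "max" is treated as an open-ended bound.

def triggered_alarms(value, alarm_config):
    """Return the names of alarms that fire for the given reading."""
    fired = []
    for name, cfg in alarm_config.items():
        lo = cfg.get("min", float("-inf"))
        hi = cfg.get("max", float("inf"))
        inside = lo <= value <= hi
        if cfg.get("sendAlarmIfOutsideRange") == "true" and not inside:
            fired.append(name)
        if cfg.get("sendAlarmIfInsideRange") == "true" and inside:
            fired.append(name)
    return fired

# The alarm set from the PI order above
alarms = {
    "alarm1": {"max": 50, "min": 0, "sendAlarmIfOutsideRange": "true"},
    "alarm2": {"max": 100, "min": -70, "sendAlarmIfInsideRange": "true"},
    "alarm3": {"min": -5, "sendAlarmIfOutsideRange": "true"},
}
print(triggered_alarms(60, alarms))   # ['alarm1', 'alarm2']
```

A reading of 60 °C is outside alarm1's 0–50 range and inside alarm2's −70–100 range, so both fire; alarm3's open-ended range (−5 upwards) still contains the value.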
The corresponding acknowledgement from the IoT Cloud Service is shown in Figure 45.
```
{
"ShippingService": {
"Key": {
"apiKey": "aAbBcCdDeEfFgGhH",
"Response": {
"temperature": {
"basicConfiguration": "true",
"advancedConfiguration": "true",
"alarmConfiguration": {"alarm1": "true", "alarm2": "true", "alarm3": "true"}
},
"humidity": {
"basicConfiguration": "true",
"advancedConfiguration": "true",
"alarmConfiguration": {"alarm1": "true", "alarm2": "true"}
},
"acceleration": {
"basicConfiguration": "true",
"advancedConfiguration": "true",
"alarmConfiguration": {"alarm1": "true"}
},
"light": {
"basicConfiguration": "true",
"advancedConfiguration": "true",
"alarmConfiguration": {"alarm1": "true"}
}
}
}
}
}
```
Figure 45 Acknowledgement of PI Order & IoT settings
The specifics of the IoT operations and the corresponding Data Model are described in detail in D2.8.
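A client receiving the acknowledgement shown in Figure 45 might sanity-check that every requested configuration item was confirmed. The traversal below is an illustrative assumption, not part of the ICONET or D2.8 specifications.

```python
# Walk an acknowledgement's Response section and list every
# configuration item the IoT Cloud Service did not confirm ("true").

def unacknowledged(response: dict) -> list:
    """Return dotted paths of unconfirmed configuration items."""
    missing = []
    for sensor, sections in response.items():
        for section, status in sections.items():
            if isinstance(status, dict):  # alarmConfiguration: per-alarm flags
                missing += [f"{sensor}.{section}.{alarm}"
                            for alarm, ok in status.items() if ok != "true"]
            elif status != "true":
                missing.append(f"{sensor}.{section}")
    return missing

resp = {"temperature": {"basicConfiguration": "true",
                        "advancedConfiguration": "false",
                        "alarmConfiguration": {"alarm1": "true"}}}
print(unacknowledged(resp))   # ['temperature.advancedConfiguration']
```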
8 Conclusions
This deliverable provides the final blueprint for a PI-enabled decentralized architecture, which will serve as a stepping stone towards the realization of the Physical Internet. It introduces the ICONET Reference Model, which comprises the OLI and NOLI layers. This reference model is then associated with the service specifications. A consolidation of the requirements stemming from the key services, their interoperability, and the data specifications needed for their operation is then presented, accompanied by an example that demonstrates their roles, as well as the overarching flows and interconnections between them in a PI-enabled shipment scenario. The integration with external and legacy systems is also addressed by introducing the PI Data Adapters, which through various implementations can be used to achieve interoperability while maintaining data security, data protection, and regulatory compliance.
The work done previously on the PI reference architecture, the overall design approach, and the Living Lab considerations is presented as a segue to the final PI Reference Architecture. Based on the design principles and requirements analysed earlier, as well as NFV and SDN technologies, a final high-level, generic, and widely applicable blueprint is presented as a decentralized system, with each PI Node hosting its own set of services. NFV and SDN technologies, and their corresponding designs and principles, are presented and further analysed in relation to the reference architecture. Furthermore, the allocation and separation of services into the Forwarding, Control, and Management planes is presented and explained. A common data model (an ontology) has also been presented for organizing the required data and enabling interoperability, leading to the final ICONET Domain Model and IoT data models.
The experiences and lessons learned that were applied in making this architectural blueprint will shape future implementations of the Physical Internet. While widespread acceptance and adoption of the technologies and operations described is still some way off, the work done in ICONET and the resulting architecture, which was shown to function across a variety of scales and operations, will inform future considerations of the technological implications of making the Physical Internet a reality.
9 References
Bermudez-Edo, M., Elsaleh, T., Barnaghi, P., & Taylor, K. (2017). IoT-Lite: a lightweight semantic model for the internet of things and its use with dynamic semantics. Personal and Ubiquitous Computing, 21(3), 475-487.
Gruber, T.R., 1991. The role of common ontology in achieving sharable, reusable knowledge bases. KR, 91, pp.601-602.
Mertzanis, K. (2016). Designing Protocols for the Physical Internet (Dissertation).
Montreuil, B., Ballot, E. and Fontane, F., 2012. An Open Logistics Interconnection model for the Physical Internet. In: Proceedings of the 14th IFAC Symposium on Information Control Problems in Manufacturing, Bucharest, Romania, 23–25 May 2012. Available: https://www.sciencedirect.com/science/article/pii/S1474667016331718
Sternberg, H., & Norrman, A. (2017). The Physical Internet—review, analysis and future research agenda. International Journal of Physical Distribution & Logistics Management, 47(8), 736-762.
Studer, R., Benjamins, V.R. and Fensel, D., 1998. Knowledge engineering: principles and methods. Data & knowledge engineering, 25(1-2), pp.161-197.
Trokanas, N., Bussemaker, M. and Cecelja, F., 2016. Utilising Semantics for Improved Decision Making in Bio-refinery Value Chains. Computer-Aided Chemical Engineering, 38, pp.2097–2102.
LDLR c.415G>A causes familial hypercholesterolemia by weakening LDLR binding to LDL
Kaihan Wang\textsuperscript{1†}, Tingting Hu\textsuperscript{2†}, Mengmeng Tai\textsuperscript{1†}, Yan Shen\textsuperscript{1}, Haocheng Chai\textsuperscript{3}, Shaoyi Lin\textsuperscript{1*} and Xiaomin Chen\textsuperscript{1*}
**Abstract**
**Background** Familial hypercholesterolemia (FH) is a prevalent hereditary disease that can cause aberrant cholesterol metabolism. In this study, we confirmed that c.415G>A in low-density lipoprotein receptor (LDLR), an FH-related gene, is a pathogenic variant in FH by in silico analysis and functional experiments.
**Methods** The proband and his family were evaluated using the diagnostic criteria of the Dutch Lipid Clinic Network. Whole-exome and Sanger sequencing were used to explore and validate FH-related variants. In silico analyses were used to evaluate the pathogenicity of the candidate variant and its impact on protein stability. Molecular and biochemical methods were performed to examine the effects of the LDLR c.415G>A variant in vitro.
**Results** Four of six participants were diagnosed with FH. The *LDLR* c.415G>A variant in this family was assessed as likely pathogenic. Western blotting and qPCR suggested that LDLR c.415G>A does not affect protein expression. Functional studies showed that this variant may lead to dyslipidemia by impairing LDLR binding to low-density lipoprotein (LDL) and the subsequent absorption of LDL.
**Conclusion** LDLR c.415G>A is a pathogenic variant in FH; it causes a significant reduction in LDLR's capacity to bind LDL, resulting in impaired LDL uptake. These findings expand the spectrum of variants associated with FH.
**Keywords** Familial hypercholesterolemia, Low-density lipoprotein receptor, Pathogenic variant, Functional study
**Introduction**
Familial hypercholesterolemia (FH) is a hereditary metabolic disease typified by the dysregulation of cholesterol homeostasis [1]. Its principal features are markedly elevated plasma low-density lipoprotein cholesterol (LDL-C), xanthomas of the skin and tendons, and the early onset of coronary heart disease [2]. Among patients with FH, a high level of plasma LDL-C is the main driver of cardiovascular risk [3]. In particular, those with homozygous FH may develop atherosclerosis as early as adolescence, affecting not just the arteries but also the valves, resulting in a heavy disease burden [4]. Clinically, FH is classified as either homozygous or heterozygous; the prevalence of heterozygous FH is about 1:313, while that of homozygous FH is 1:400,000 [5]. However, the early symptoms of FH are easily overlooked, making diagnosis extremely difficult [6]. At present, the diagnosis rate of FH is very low in most countries and regions [7]: it is <10% in the United States [8], 4% in Australia and New Zealand [9], 2% in South Africa [10], and <1% in Russia, Latin America, and other countries [11]. Only a small percentage of those diagnosed with FH have undergone genetic testing; in most areas, the rate of genetic diagnosis is <5%, for example <5% in the United States [12] and <2% in Asia [13]. Given the serious complications and the low diagnosis rate of FH, improving the diagnosis rate worldwide is urgent. Because FH is a genetic disorder, cascade screening based on genetic diagnosis is the most effective way to achieve this [7].
The main pathogenic mechanism underlying FH is the incapacity of LDLR to remove LDL-C from the blood [14]. Under normal physiological conditions, the endoplasmic reticulum produces LDLR, which is then transported to the Golgi apparatus for glycosylation modification and carried to the plasma membrane. Finally, the LDLR on the plasma membrane binds to circulating LDL particles to promote endocytosis [14]. Once the vesicles encasing the LDLR-LDL complex have been absorbed into the cell, they merge with the endosome. In the acidic endosome, LDLR undergoes a conformational change and separates from the bound LDL [15]. This allows LDLR to return to the plasma membrane for later use or be directed to lysosomes for degradation by interaction with proprotein convertase subtilisin/kexin type 9 (PCSK9) [16]. In addition, LDLR on the cell membrane can also bind to circulating PCSK9 and be carried to lysosomes for degradation [17]. Circulating PCSK9 is secreted from hepatocytes and engages in several biological processes, such as lipid metabolism, immune response, hemostasis, glucose metabolism, and neuronal survival [18]. Among these, the regulation of plasma LDL-C concentrations is the most significant and extensively studied. The activity of PCSK9 is negatively correlated with LDLR density on the surface of hepatocytes and positively correlated with plasma LDL-C concentrations [19]. It has been shown that PCSK9 can form dimers and higher multimers through self-association, which is influenced by concentration, temperature, and pH, and can increase LDLR-degrading activity [20]. Moreover, the half-life of circulating PCSK9 can be extended from 5 to 15 min by binding to LDLR [21]. Therefore, increased PCSK9 expression, increased PCSK9 activity, and decreased PCSK9 degradation all lead to a decrease in LDLR, which increases the level of LDL-C in plasma. Any disruption in these processes leads to a notable buildup of LDL-C in the plasma.
Similarly, genetic variants in FH patients cause anomalies in the receptor endocytosis pathway, abnormally raising plasma levels of LDL-C [22], to an extent that differs between countries and ethnic groups [23]. Among these genetic variants, those in the *LDLR* gene account for the bulk of FH cases [24]. Over the last few decades, numerous studies on the *LDLR* variants of FH have been carried out globally, and many variants have been found in China. For example, in Han Chinese populations, 143 different variants of *LDLR* are known to exist, the four most frequent being c.986G>A, c.1747C>T, c.1879G>A, and c.268G>A [25, 26]. In Hong Kong, 73 different *LDLR* variants have been reported, the four most common being c.1241T>G, c.1474G>A, c.769C>T, and c.1765G>A [27]. Although more than 4,000 *LDLR* variants have been identified, fewer than 15% of them have been classified as benign or pathogenic through functional studies [28]. Theoretically, a clinical diagnosis cannot be verified until a genetic variant is identified and subsequently shown to alter the metabolism of LDL [29]. Therefore, it is vital to conduct genetic testing and functional studies in patients with FH, as these provide a strong basis for the diagnosis of FH.
In this study, genetic testing was conducted to identify variants associated with FH in a familial context. Subsequent in silico analysis and in vitro functional assessments were performed to identify the pathogenicity of *LDLR* c.415G>A. These findings contribute to broadening the spectrum of FH-related variants, thereby facilitating early diagnosis.
**Methods**
**Study participants and blood sample collection**
According to the diagnostic criteria of the Dutch Lipid Clinic Network (DLCN), individuals with scores of ≥8 points and their families were included in this study. After the participants were evaluated using the DLCN criteria, a family tree was built. Venous blood was collected from participants for blood lipid analysis and subsequent whole-exome sequencing. All participants completed informed consent forms approved by the Ethics Committee of the First Affiliated Hospital of Ningbo University.
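Using the published DLCN bands (definite FH >8, probable 6–8, possible 3–5, unlikely <3), the classification step applied to each participant's total score can be sketched as follows; the member-to-score mapping comes from the totals later reported in Table 2.

```python
# Sketch of DLCN total-score classification. The cut-offs are the
# standard published DLCN bands, not values specific to this study.

def dlcn_category(total: int) -> str:
    """Map a DLCN total score to its diagnostic category."""
    if total > 8:
        return "definite FH"
    if 6 <= total <= 8:
        return "probable FH"
    if 3 <= total <= 5:
        return "possible FH"
    return "unlikely FH"

# Totals reported for this family (Table 2)
for member, score in {"I1": 0, "I2": 8, "II1": 8,
                      "II2": 13, "II3": 2, "III1": 12}.items():
    print(member, dlcn_category(score))
```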
**Whole-exome sequencing**
Venous blood samples were forwarded to the Beijing Genomics Institute (BGI, Wuhan, China) for whole-exome sequencing. After low-quality reads, adapters, and reads with a high percentage of N bases were removed from the raw sequencing data, alignments against the human reference genome (hg19) were generated using the Burrows–Wheeler Aligner [30]. Using the Genome Analysis Toolkit (GATK), duplicate reads were marked and base quality score recalibration was performed. GATK4’s HaplotypeCaller was used to call single nucleotide polymorphisms (SNPs) and InDels [31]. Rigorous filtering was applied to retain only highly reliable, high-quality SNPs and InDels.
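The filtering step above can be illustrated with a hard-filter pass loosely modelled on GATK's suggested SNP thresholds (QD < 2.0, FS > 60.0, MQ < 40.0 fail). The record fields and cut-offs here are assumptions for illustration; the study's exact filter expressions are not stated.

```python
# Illustrative hard-filter over SNP INFO annotations, using GATK's
# commonly suggested cut-offs. Records below are invented examples.

def passes_hard_filter(info: dict) -> bool:
    """Keep a SNP only if it clears all three quality thresholds."""
    return (info.get("QD", 0.0) >= 2.0        # quality by depth
            and info.get("FS", 0.0) <= 60.0   # strand bias (Fisher)
            and info.get("MQ", 0.0) >= 40.0)  # mapping quality

records = [
    {"QD": 12.3, "FS": 1.2, "MQ": 60.0},   # high quality: kept
    {"QD": 1.1, "FS": 75.0, "MQ": 35.0},   # low quality: dropped
]
kept = [r for r in records if passes_hard_filter(r)]
print(len(kept))   # 1
```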
**Sanger sequencing**
Genomic DNA was extracted using the E.Z.N.A.® Blood DNA Mini Kit (D3392-02; Omega Bio-Tek, Norcross, GA, USA) and then amplified by polymerase chain reaction (PCR). The 50-μL PCR reaction contained 25 μL of 2× ES Taq Master Mix, 2 μL of forward primer (5’-CAGGACGAGTTTCGTCGCAC-3’), 2 μL of reverse primer (5’-ATCCGAGCCATCTTTCGCAGTC-3’), 500 ng of DNA, and enzyme-free water. The PCR products were sent to BGI for Sanger sequencing, and data analysis was conducted using Chromas software.
**In silico analysis**
MutationTaster was used to predict the pathogenicity of point variants [32]. DynaMut was utilized to evaluate how point variants affected the stability and flexibility of proteins [33]. A normal mode analysis was used to determine the difference in free energy change (ΔΔG) between the structures of the wild-type (WT) and the variant. ENCoM-based difference in vibrational entropy (ΔΔSVib) was used to predict the difference in flexibility [34]. SnapGene v6.0.2 was employed to determine the conservation of protein sequences among species using the multiple sequence comparison by log-expectation (MUSCLE) algorithm.
**Plasmid construction, cell culture, and transfection**
Shanghai GeneChem Co. (Shanghai, China) constructed human WT LDLR and LDLR c.415G>A, each with a FLAG epitope close to the N terminus, in the GV208 vector. HEK293T cells were used for plasmid transfection [35]. The cells were grown in Dulbecco's Modified Eagle Medium (high glucose) (Cytiva, Shanghai, China) containing 10% fetal bovine serum (Vivacell, Shanghai, China). For transfection, cells in six-well plates were transfected with 2500 ng of plasmid DNA using Lipofectamine™ 3000 Reagent (Invitrogen, Shanghai, China).
**Quantitative real-time PCR**
Following transfection, TRIzol (Omega, Norwalk, CT, USA) was used to extract RNA, and the HiFiScript cDNA Synthesis Kit (CW2569M; CWBIO, Beijing, China) was used for reverse transcription. Quantitative real-time PCR (qPCR) was carried out on a Mastercycler® Nexus X2 (Eppendorf, Hamburg, Germany), with TaqMan assays employed for the detection of fluorescence. Relative LDLR expression was calculated using the comparative Ct method. The primers used were as follows: LDLR, F: 5’-AAGTGCATCTCTCGGCAGTT-3’, R: 5’-CCACTCATCCGAGCCATCTT-3’; GAPDH, F: 5’-GGAAATCGTCGTCGTGACATTA-3’, R: 5’-GGAAGGAAGGCTGGAAGAG-3’.
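The comparative Ct method reduces to the 2^−ΔΔCt formula: the target's Ct is normalised to the reference gene (here GAPDH), then compared against a calibrator sample. The Ct values below are hypothetical, for illustration only.

```python
# Minimal sketch of the comparative Ct (2^-ΔΔCt) method for relative
# gene expression. All Ct values used here are invented.

def relative_expression(ct_target, ct_reference,
                        ct_target_cal, ct_reference_cal):
    """Fold change of target vs. reference gene, relative to a calibrator."""
    delta_ct = ct_target - ct_reference              # normalise to GAPDH
    delta_ct_cal = ct_target_cal - ct_reference_cal  # same for calibrator
    return 2.0 ** -(delta_ct - delta_ct_cal)         # 2^-ΔΔCt

# Example: variant sample vs. WT calibrator (hypothetical Ct values)
print(relative_expression(22.0, 18.0, 21.0, 17.0))   # 1.0 -> no change
```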
**Western blotting**
The cells were lysed using RIPA buffer (Solarbio, Beijing, China) containing phosphatase and protease inhibitors. The proteins were boiled for 10 min with loading buffer (Solarbio, Beijing, China) in preparation for western blotting. Following resolution by 7.5% SDS-PAGE, the samples were blotted onto PVDF membranes (Merck, Darmstadt, Germany). After blocking with 5% skim milk to prevent non-specific binding, monoclonal mouse anti-FLAG (1:3000, F1804; Sigma, Shanghai, China) and monoclonal rabbit anti-β-actin (1:10000, AF7018; Affinity Biosciences, San Francisco, CA, USA) primary antibodies were added, and the membranes were incubated overnight at 4 °C. The samples were then treated with the corresponding horseradish peroxidase-conjugated IgG for 60 min. Lastly, the immunoreactive proteins were detected using enhanced chemiluminescence.
**Flow cytometry**
Cells were detached from six-well plates with 0.05% trypsin and transferred into 2-mL EP tubes. After blocking with 10% donkey serum for 1 h at room temperature, diluted allophycocyanin-conjugated rabbit anti-human LDLR monoclonal antibody (1:200, ab275614; Abcam, Cambridge, MA, USA) was added, and the mixture was kept in the dark for an additional hour. Mean fluorescence levels from at least three replicate measurements were obtained using a Beckman CytoFlex S flow cytometer (Beckman Coulter, Shanghai, China). Data analysis was performed with FlowJo software.
**Immunofluorescence**
After transfection, cells were fixed with 4% paraformaldehyde (P1110; Solarbio, Beijing, China). Following a wash with 1× PBS, the cells were blocked with 10% goat serum to prevent non-specific binding. Next, mouse anti-FLAG antibody (1:3000, F1804; Sigma-Aldrich, Saint Louis, MO, USA) diluted in 1× PBS was incubated with the cells at 4 °C for 4 h, along with 20 μg/mL labelled human plasma LDL (Dil-LDL; L3482; Thermo Fisher, Shanghai, China). After incubation, the cells were washed with 1× PBS and treated with AlexaFluor488-conjugated goat anti-mouse IgG (1:500, ab150113; Abcam, Cambridge, UK). After nuclear staining with 4′,6-diamidino-2-phenylindole (DAPI), the cells were observed under a Leica TCS SP8 confocal laser scanning microscope.
In order to assess the uptake capacity of LDLR, transfected HEK293T cells were treated with 20 μg/mL Dil-LDL for 4 h at 37 °C. Similarly, confocal microscopy was used for analysis after washing with PBS, fixation with 4% paraformaldehyde, and DAPI labeling of cell nuclei.
**Statistical analysis**
All data were analyzed using GraphPad Prism (version 9.0.0; La Jolla, CA, USA) and are presented as means ± SEM. Normal distribution was evaluated using the D’Agostino–Pearson omnibus normality test. Group differences were evaluated using one-way ANOVA. $P<0.05$ was considered statistically significant.
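The group comparison above can be illustrated with a hand-rolled one-way ANOVA F statistic (between-group mean square over within-group mean square). This is a pure-Python sketch; the sample values are invented and are not the study's data.

```python
# One-way ANOVA F statistic from first principles (no SciPy needed).

def one_way_anova_f(*groups):
    """F = (SS_between / (k-1)) / (SS_within / (n-k)) for k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical normalised expression values for three groups
wt      = [1.00, 1.05, 0.98]
variant = [0.99, 1.02, 1.01]
blank   = [0.05, 0.04, 0.06]
print(one_way_anova_f(wt, variant, blank) > 100)   # True: means differ sharply
```

The F statistic is then compared against the F distribution with (k−1, n−k) degrees of freedom to obtain the p-value, which Prism handles internally.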
**Results**
**Clinical data for the proband and his family members**
The proband, a 39-year-old male, presented to the First Affiliated Hospital of Ningbo University with chest tightness following physical activity. Coronary angiography revealed severe coronary artery stenosis. Because of the early onset of atherosclerotic cardiovascular disease, FH was suspected, and cascade screening was conducted. The biochemical results and DLCN scores for the family members are shown in Tables 1 and 2, and Figure 1 depicts the pedigree. The pedigree analysis of the index case revealed a positive family history of dyslipidemia, compatible with an autosomal dominant mode of inheritance.
**Genetic analysis and in silico screening**
Owing to geographical constraints, blood samples were obtained from only three family members for whole-exome sequencing. The whole-exome sequencing data (Supplemental Table 1) were analyzed for variants in FH-related genes (*LDLR*, *APOB*, *PCSK9*, *LDLRAP1*). Two patients with FH in this family were identified as carrying the missense variant *LDLR* c.415G>A (Fig. 2A). The variant, located in exon 4 of the *LDLR* gene on chromosome 19p13.2, was verified using Sanger sequencing (Fig. 2B). An interspecific sequence analysis revealed that the affected amino acid residue is highly conserved (Fig. 2C). A MutationTaster analysis predicted the variant to be pathogenic.
Additionally, the interatomic interactions of LDLR c.415G>A were assessed using DynaMut. The differences in interatomic interactions between the WT and variant proteins are depicted in Fig. 2D. According to the predicted DynaMut $\Delta \Delta G$ values and the ENCoM-based $\Delta \Delta S_{Vib}$, the variant resulted in decreased molecular flexibility and increased stability of the LDLR protein.
Table 1 Biochemical characteristics of the proband and his family members

| Family member | Age | Gender | TG (mmol/L) | TC (mmol/L) | LDL-C (mmol/L) | HDL-C (mmol/L) | Corneal arcus | Xanthoma | History of atherosclerosis/myocardial infarction |
|---------------|-----|--------|-------------|-------------|----------------|----------------|---------------|----------|--------------------------------------------------|
| Reference range | | | 0.00–1.70 | 3.00–5.70 | 1.89–3.37 | 1.03–1.55 | | | |
| I1 | 65 | male | 0.82 | 2.34 | 1.55 | 0.69 | No | No | No |
| I2 | 65 | female | 0.74 | 8.84 | 7.96 | 1.02 | No | No | Yes |
| II1 | 36 | male | 0.56 | 8.76 | 7.31 | 1.13 | No | No | Yes |
| II2 | 39 | male | 0.87 | 6.51 | 4.45 | 1.15 | No | No | No |
| II3 | 35 | female | 0.64 | 4.43 | 3.04 | 1.17 | No | No | No |
| III1 | 7 | male | 0.52 | 6.18 | 5.01 | 1.05 | No | No | No |
Table 2 DLCN (Dutch Lipid Clinic Network) scores for the proband and his first-degree relatives
| Diagnostic criteria of the Dutch Lipid Clinic Network | Participant scores |
|------------------------------------------------------|---------------------|
| | Score | I1 | I2 | II1 | II2 | II3 | III1 |
| **Family History** | | | | | | | |
| First-degree relative with known premature (<55 years of age in men, <60 years of age in women) coronary heart disease or first-degree relative with known low-density lipoprotein (LDL) cholesterol >95th percentile by age and sex for country | 1 | 0 | 1 | 1 | 2 | 2 | 1 |
| First-degree relative with tendon xanthoma and/or arcus cornealis or children <18 years of age with LDL cholesterol >95th percentile by age and sex for country | 2 | | | | | | |
| **Clinical History** | | | | | | | |
| Patient with premature coronary artery disease (age as above) | 2 | 0 | 2 | 2 | 2 | 0 | 0 |
| Patient with premature cerebral or peripheral vascular disease (age as above) | 1 | | | | | | |
| **Physical Examination** | | | | | | | |
| Tendon Xanthomas | 6 | 0 | 0 | 0 | 0 | 0 | 0 |
| Arcus Cornealis at age ≤45 years | 4 | | | | | | |
| **LDL Cholesterol (mmol/L) (mg/dL)** | | | | | | | |
| LDL-C ≥8.5 (330) | 8 | 0 | 5 | 5 | 1 | 0 | 3 |
| LDL-C 6.5–8.4 (250–329) | 5 | | | | | | |
| LDL-C 5.0–6.4 (190–249) | 3 | | | | | | |
| LDL-C 4.0–4.9 (155–189) | 1 | | | | | | |
| **DNA analysis** | | | | | | | |
| DNA Analysis – functional mutation LDLR, APOB, and PCSK9 | 8 | / | / | / | 8 | 0 | 8 |
| Total Score | | 0 | 8 | 8 | 13 | 2 | 12 |
Fig. 1 Pedigree of the proband. The arrow indicates the proband. Dark circles or boxes in the lineage indicate subjects with FH. Circles represent females, and boxes represent males.
**LDLR c.415G>A variant does not change LDLR expression in vitro**
To confirm the effect of LDLR c.415G>A on gene expression, HEK293T cells were transfected with plasmids carrying WT LDLR, variant LDLR, or the blank vector. According to the immunofluorescence results, the transfection success rate was approximately 85% (Supplemental Fig. 1). As illustrated in Fig. 3A, qPCR demonstrated that cells transfected with the variant plasmid did not differ in LDLR mRNA expression from those transfected with the WT plasmid, whereas cells transfected with the blank plasmid exhibited extremely low LDLR mRNA expression. Western blotting (Fig. 3B) revealed that LDLR protein expression was similar in the variant and WT groups but essentially absent in the blank group. Flow cytometry (Fig. 3C) showed that the cell membrane in the blank group did not express LDLR protein, while cell membranes in the variant and WT groups exhibited similar LDLR protein expression levels, with no statistically significant difference between them. These findings show that gene expression is unaffected by the LDLR c.415G>A variant.
**LDLR c.415G>A decreases Dil-LDL absorption by cells**
To examine whether LDLR c.415G>A impacts protein activity, plasmid-transfected cells were co-incubated with Dil-LDL at 37 °C for at least 4 h. As shown in Fig. 4, the red fluorescence of the variant group was much weaker than that of the WT group, indicating that LDL uptake in the variant group was markedly lower. The capacity to absorb LDL in the blank control group was minimal. These results indicate that LDLR c.415G>A significantly impaired the capacity to absorb LDL.
**LDLR c.415G>A weakens the ability of LDLR to bind to LDL**
The mechanism underlying the lower absorption capacity induced by the LDLR c.415G>A variant was further evaluated using laser confocal microscopy to investigate the ability of LDLR to bind LDL after co-incubating the plasmid-transfected cells with LDLR antibodies and Dil-LDL at 4 °C for 4 h. As shown in Fig. 5, although there was a significant decrease in LDL binding, the LDLR protein content in the variant group was nearly equal to that of the WT group. Therefore, the variant dramatically lowers the absorption capacity by decreasing LDLR binding to LDL.
**Discussion**
In this study, serious lipid-metabolism abnormalities were observed in the proband and his son, who carried *LDLR* c.415G>A. This variant was previously described [36] and is included in the ClinVar database (NM_000527.4(*LDLR*):c.415G>A (p.Asp139Asn)) as likely pathogenic under accession number RCV000237450.1, but no functional studies had been conducted. The present functional studies revealed that while this variant has no effect on protein synthesis, it dramatically lowers LDL absorption by impairing the ability of LDLR to bind LDL. It is hypothesized that this variant affects the uptake ability of LDLR, inhibiting the regular clearance of LDL-C from plasma and resulting in FH.
Mature LDLR proteins consist of 860 amino acids and can be divided into five functional domains [37]. Among these, the interaction between acidic residues of the LDLR ligand-binding domain and basic residues of apoB100 mediates the binding of LDL to LDLR [38]. The variant detected in this study is located in the ligand-binding domain. The replacement of aspartic acid with asparagine results in distinct molecular interactions with the surrounding residues, which may explain the reduced affinity of the variant protein for LDL. Previous functional studies of pathogenic *LDLR* variants have revealed that *LDLR* p.L799R disrupts the transmembrane domain, inhibiting membrane insertion and resulting in the secretion of LDLR [39]; *LDLR* p.D482H and p.C667F are trapped in the endoplasmic reticulum owing to misfolding [40]; and the *LDLR* p.W23X, p.S78X, and p.W541X nonsense mutations significantly decrease mRNA expression levels [41]. In summary, *LDLR* variants can lead to FH by altering different stages of receptor-mediated endocytosis. Some variants result in the total absence of receptors, whereas others produce receptors that are present but functionally impaired. In either case, cells cannot absorb LDL, cholesterol builds up in the blood, and the risk of atherosclerosis rises. In this family, both the proband and his son exhibited serious problems related to lipid metabolism, although genetic testing showed that they were heterozygous and cell-based assays demonstrated that the variant does not affect protein expression; their dyslipidemia is therefore closely related to a decline in receptor function.
Even though FH is the most common disease associated with disorders in cholesterol metabolism, it has received fairly little public attention, and its rate of diagnosis is quite poor [6]. Most patients with FH do not receive an effective lipid-lowering medication [42]. Since early myocardial infarction, stroke, and an elevated risk of overall mortality are frequent features of untreated FH, it is well acknowledged that the illness poses a serious risk to life [43]. Therefore, more research is required to improve outcomes for patients with FH and their families.
**Study strengths and limitations**
This study confirmed the pathogenicity of the missense variant *LDLR* c.415G>A, which impairs LDLR binding to LDL and the subsequent uptake of LDL. These findings underpin the early diagnosis of FH, contribute to cascade screening of FH families, and support personalized treatment strategies. However, this study had some limitations. First, no in vivo functional experiments were conducted; further evidence from gene-edited murine models is required to confirm the pathogenicity of the *LDLR* c.415G>A variant. Second, the assessment of LDLR activity relied on a single cell line, and validation across diverse cell models is needed to ensure robustness. Finally, the current study did not investigate how to counteract the pathogenicity of this variant; future studies aimed at ameliorating its harmful consequences will help achieve more effective lipid-lowering treatments.
**Conclusion**
*LDLR* c.415G>A is a pathogenic variant in FH. It replaces an acidic aspartic acid residue with asparagine, greatly reducing the capacity of LDLR to bind LDL. This prevents LDL-C from being taken up by cells and produces a marked increase in plasma LDL-C. This study advances our understanding of FH-associated gene variants and identifies a pathogenic variant, providing information that contributes to the study of early diagnosis and treatment of FH.
Fig. 3 LDLR c.415G>A does not affect LDLR expression. (A) Western blot analysis of LDLR expression in HEK293T cells transfected with wild-type LDLR plasmids, variant LDLR plasmids, and blank plasmids. (B) Quantitative reverse transcription polymerase chain reaction analysis (n=6/group). (C) Flow cytometry quantification of LDLR expression on the HEK293T cell surface (n=6/group). Data are presented as means±SEM. ****P<0.0001; ns indicates P>0.05
Fig. 4 Capacity of LDLR to absorb LDL, as determined using laser confocal microscopy. Double immunofluorescence staining of LDL (red) and DAPI (blue). Image magnification: 100×, scale bar: 10 μm. WT: wild-type LDLR; NC: negative control
Fig. 5 Ability of LDLR to bind LDL, as determined using laser confocal microscopy. Triple immunofluorescence staining of LDL (red), LDLR (green), and DAPI (blue). Image magnification: 100×, scale bar: 10 μm
Abbreviations
FH familial hypercholesterolemia
LDL low-density lipoprotein
LDL-C low-density lipoprotein cholesterol
LDLR low-density lipoprotein receptor
PCSK9 proprotein convertase subtilisin/kexin type 9
DLCN Dutch Lipid Clinic Network
BGI Beijing Genomics Institute
GATK Genome Analysis Toolkit
SNP single nucleotide polymorphism
MUSCLE multiple sequence comparison by log-expectation
PCR polymerase chain reaction
qPCR quantitative real-time PCR
DAPI 4′,6-diamidino-2-phenylindole
DiI-LDL DiI-labelled human plasma LDL
WT wild-type
Funding
The research was supported by the grants from the Key Technology R&D Program of Ningbo (2022Z149) and the Key Laboratory of Precision Medicine for Atherosclerotic Disease of Zhejiang Province (2022E10026).
Data availability
The datasets presented in this article are not readily available because sharing of genomic data in the public domain is not allowed according to the requirements of the Institutional Ethics Committee. Requests to access the datasets should be directed to the corresponding authors.
Declarations
Ethics approval and consent to participate
This study was approved by the Ethics Committee of the First Affiliated Hospital of Ningbo University (2019-R020), and all participants provided written informed consent for participation.
Consent for publication
All participants provided consent for publication.
Competing interests
The authors declare no competing interests.
Received: 5 January 2024 / Accepted: 28 February 2024
Published online: 21 March 2024
Supplementary information
The online version contains supplementary material available at https://doi.org/10.1186/s12944-024-02068-2.
Supplementary Material 1: The success rate of transfection. Double immunofluorescence staining of LDLR (green) and DAPI (blue)
Supplementary Material 2: The whole-exome sequencing data of FH-related genes
Supplementary Material 3: The certificate of language editing
Supplementary Material 4: The proof report of the overall similarity index
Supplementary Material 5: The image of LDLR and Actin obtained through chemiluminescence
Supplementary Material 6: The image of Marker obtained through colorimetric
Supplementary Material 7: The overlay image of LDLR, Actin and Marker
Supplementary Material 8: Western blot analysis of LDLR expression
Author contributions
X.C. and S.L. contributed to the supervision, conception, project administration, funding acquisition and final approval of the submitted version. K.W., T.H., and M.T. contributed to the methodology, software, analysis, and original draft preparation. H.C. and Y.S. contributed resources. All authors have read and approved the final manuscript.
References
1. Brandts J, Ray KK. Familial hypercholesterolemia: JACC Focus Seminar 4/4. J Am Coll Cardiol. 2021;78(18):1831–43.
2. Defesche JC, et al. Familial hypercholesterolaemia. Nat Rev Dis Primers. 2017;3:17093.
3. Ference BA, et al. Low-density lipoproteins cause atherosclerotic cardiovascular disease. 1. Evidence from genetic, epidemiologic, and clinical studies. A consensus statement from the European Atherosclerosis Society Consensus Panel. Eur Heart J. 2017;38(32):2459–72.
4. Tromp TR, et al. Worldwide experience of homozygous familial hypercholesterolaemia: retrospective cohort study. Lancet. 2022;399(10326):719–28.
5. Beheshti SO, et al. Worldwide Prevalence of Familial Hypercholesterolemia: Meta-analyses of 11 million subjects. J Am Coll Cardiol. 2020;75(20):2553–66.
6. Benito-Vicente A et al. Familial hypercholesterolemia: the most frequent cholesterol metabolism disorder causing Disease. Int J Mol Sci. 2018. 19(1).
7. Nordestgaard BG, Benn M. Genetic testing for familial hypercholesterolaemia is essential in individuals with high LDL cholesterol; who does it in the world? Eur Heart J. 2017;38(20):1580–3.
8. de Ferranti S, Sheldrick RC, Wong JB, et al. Response by de Ferranti to letter regarding article, Prevalence of familial hypercholesterolemia in the 1999 to 2012 United States National Health and Nutrition Examination Surveys (NHANES). *Circulation*. 2016;134(18).
9. Watts GF, et al. *International Atherosclerosis Society Roadmap for Familial Hypercholesterolaemia*, Global Heart. 2024, 19(1).
10. Hesse R, et al. Familial hypercholesterolaemia identification by machine learning using lipid profile data performs as well as clinical diagnostic criteria. Circ Genom Precis Med. 2022;15.
11. Dharmayat KI, et al. Familial hypercholesterolaemia in children and adolescents from 48 countries; a cross-sectional study. *Lancet*. 2024;403(10421):55–66.
12. Ahmad ZS, et al. US physician practices for diagnosing familial hypercholesterolemia: data from the CASCADE-FH registry. *J Clin Lipidol*. 2016;10(5):1223–9.
13. Harada-Shiba M, et al. Guidelines for the diagnosis and treatment of Pediatric Familial Hypercholesterolemia 2022. *J Atheroscler Thromb*. 2023;30(5):531–57.
14. Goldstein JL, Brown MS. The LDL receptor. *Arterioscler Thromb Vasc Biol*. 2009;29(4):431–8.
15. Bartuzi P, et al. CCC- and WASH-mediated endosomal sorting of LDLR is required for normal clearance of circulating LDL. *Nat Commun*. 2016;7:10916.
16. Fedoseienko A, et al. The COMMD Family regulates plasma LDL levels and attenuates atherosclerosis through stabilizing the CCC complex in endosomal LDLR trafficking. *Circ Res*. 2018;122(12):1648–60.
17. Moussavi SA, Berge KE, Leren TP. The unique role of proprotein convertase subtilisin/kexin 9 in cholesterol homeostasis. *J Intern Med*. 2009;266(6):507–19.
18. Cesaro A, et al. Beyond cholesterol metabolism: the pleiotropic effects of proprotein convertase subtilisin/kexin type 9 (PCSK9). Genetics, mutations, expression, and perspective for long-term inhibition. *BioFactors*. 2020;46(3):367–80.
19. Bottomley MJ, et al. Structural and biochemical characterization of the wild type PCSK9-EGF(AB) Complex and Natural familial hypercholesterolemia mutants. *J Biol Chem*. 2009;284(2):1313–23.
20. Fan D, et al. Self-association of human PCSK9 correlates with its LDLR-Degrading activity. *Biochemistry*. 2008;47(6):1631–9.
21. Grefhorst A, et al. Plasma PCSK9 preferentially reduces liver LDL receptors in mice. *J Lipid Res*. 2008;49(6):1303–11.
22. Viigimaa M, et al. New horizons in the pathogenesis, pathophysiology and treatment of familial hypercholesterolaemia. *Curr Pharm Des*. 2019;24(31):3599–604.
23. Liyanage KE, et al. Familial hypercholesterolaemia: epidemiology, neolithic origins and modern geographic distribution. *Crit Rev Clin Lab Sci*. 2011;48(1):1–18.
24. Abifadel M, Boileau C. Genetic and molecular architecture of familial hypercholesterolemia. *J Intern Med*. 2023;293(2):144–65.
25. Chiou K-R, Chang M-J. Common mutations of familial hypercholesterolaemia patients in Taiwan: characteristics and implications of migrations from southeast China. *Gene*. 2012;498(1):100–6.
26. Chiou K-R, Chang M-J. Genetic diagnosis of familial hypercholesterolaemia in Han Chinese. *J Clin Lipidol*. 2016;10(3):490–6.
27. Yip M-K et al. *Genetic Spectrum and Cascade Screening of Familial Hypercholesterolemia in Routine Clinical setting in Hong Kong*. Genes. 2023. 14(11).
28. Bourbon M, Alves AC, Sijbrands EJ. Low-density lipoprotein receptor mutational analysis in diagnosis of familial hypercholesterolemia. *Curr Opin Lipidol*. 2017;28(2):120–9.
29. Benito-Vicente A et al. *Validation of LDLr activity as a Tool to improve genetic diagnosis of familial hypercholesterolemia: a retrospective on functional characterization of LDLr variants*. Int J Mol Sci. 2018. 19(6).
30. Li H, Durbin R. Fast and accurate long-read alignment with Burrows-Wheeler transform. *Bioinformatics*. 2010;26(5):589–95.
31. McKenna A, et al. The genome analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. *Genome Res*. 2010;20(9):1297–303.
32. Schwarz JM, et al. MutationTaster evaluates disease-causing potential of sequence alterations. *Nat Methods*. 2010;7(8):575–6.
33. Rodrigues CHM, Pires DEV, Ascher DB. DynaMut: predicting the impact of mutations on protein conformation, flexibility and stability. *Nucleic Acids Res*. 2018;46(W1):W350–5.
34. Ajith A, Subbaiah U. *In silico screening of non-synonymous SNPs in human TUFT1 gene*. J Genet Eng Biotechnol. 2023. 21(1).
35. Hu J, et al. Human embryonic kidney 293 cells: a vehicle for Biopharmaceutical Manufacturing. *Structural Biology, and Electrophysiology*. Cells Tissues Organs. 2018;205(1):1–8.
36. Deiana L, Garuti R, Pes GM, Carru C, Errigo A, Rolleri M, Pisciotta L, Masturzo P, Cantaforda A, Calandra S, Bertolini S. Influence of beta(0)-thalassemia on the phenotypic expression of heterozygous familial hypercholesterolemia: a study of patients with familial hypercholesterolemia from Sardinia. *Arterioscler Thromb Vasc Biol*. 2000;20(1):236–43.
37. Jeon H, Blacklow SC. Structure and physiologic function of the low-density lipoprotein receptor. *Annu Rev Biochem*. 2005;74(1):535–62.
38. Yamamoto T, Davis CG, Brown MS, Schneider WJ, Casey ML, Goldstein JL, Russell DW. The human LDL receptor: a cysteine-rich protein with multiple Alu sequences in its mRNA. *Cell*. 1984;39(1):27–38.
39. Ström TB, Laerdahl JK, Leren TP. Mutation p.L799R in the LDLR, which affects the transmembrane domain of the LDLR, prevents membrane insertion and causes secretion of the mutant LDLR. *Hum Mol Genet*. 2015;24(20):5836–44.
40. Kizhakkedath P, et al. Endoplasmic reticulum quality control of LDLR variants associated with familial hypercholesterolemia. *FEBS Open Bio*. 2019;9(11):1994–2005.
41. Holla ØL, Kulseth MA, Berge KE, Leren TP, Panheim T. Nonsense-mediated decay of human LDL receptor mRNA. *Scand J Clin Lab Invest*. 2009;69(3):409–17.
42. Nordestgaard BG, et al. Familial hypercholesterolaemia is underdiagnosed and undertreated in the general population: guidance for clinicians to prevent coronary heart disease: consensus statement of the European Atherosclerosis Society. *Eur Heart J*. 2013;34(45):3478–90a.
43. Gidding SS, et al. The Agenda for Familial Hypercholesterolemia: A Scientific Statement from the American Heart Association. *Circulation*. 2015;132(22):2167–92.
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
SCF\(\beta\)-TRCP targets MTSS1 for ubiquitination-mediated destruction to regulate cancer cell proliferation and migration
Jiateng Zhong\(^{1,2,*}\), Shavali Shaik\(^{1,*}\), Lixin Wan\(^{1,*}\), Adriana E. Tron\(^{1}\), Zhiwei Wang\(^{1}\), Liankun Sun\(^{2}\), Hiroyuki Inuzuka\(^{1}\) and Wenyi Wei\(^{1}\)
\(^{1}\) Department of Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA
\(^{2}\) Department of Pathophysiology, Norman Bethune College of Medicine, Jilin University, Changchun, P. R. China
* These three authors contributed equally to this work
Correspondence to: Hiroyuki Inuzuka, email: firstname.lastname@example.org
Wenyi Wei, email: email@example.com
Keywords: tumor suppressor, MTSS1, ubiquitination, phosphorylation, migration
Received: September 25, 2013 Accepted: November 4, 2013 Published: November 6, 2013
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
ABSTRACT:
Metastasis suppressor 1 (MTSS1) is an important tumor suppressor protein, and loss of MTSS1 expression has been observed in several types of human cancers. Importantly, decreased MTSS1 expression is associated with more aggressive forms of breast and prostate cancers, and with poor survival rate. Currently, it remains unclear how MTSS1 is regulated in cancer cells, and whether reduced MTSS1 expression contributes to elevated cancer cell proliferation and migration. Here we report that the SCF\(\beta\)-TRCP regulates MTSS1 protein stability by targeting it for ubiquitination and subsequent destruction via the 26S proteasome. Notably, depletion of either Cullin 1 or \(\beta\)-TRCP1 led to increased levels of MTSS1. We further demonstrated a crucial role for Ser322 in the DSGXXS degron of MTSS1 in governing SCF\(\beta\)-TRCP-mediated MTSS1 degradation. Mechanistically, we defined that Casein Kinase I\(\delta\) (CKI\(\delta\)) phosphorylates Ser322 to trigger MTSS1’s interaction with \(\beta\)-TRCP for subsequent ubiquitination and degradation. Importantly, introducing wild-type MTSS1 or a non-degradable MTSS1 (S322A) into breast or prostate cancer cells with low MTSS1 expression significantly inhibited cellular proliferation and migration. Moreover, S322A-MTSS1 exhibited stronger effects in inhibiting cell proliferation and migration when compared to ectopic expression of wild-type MTSS1. Therefore, our study provides a novel molecular mechanism for the negative regulation of MTSS1 by \(\beta\)-TRCP in cancer cells. It further suggests that preventing MTSS1 degradation could be a possible novel strategy for clinical treatment of more aggressive breast and prostate cancers.
INTRODUCTION
Tumor metastasis is a major problem encountered during clinical anti-cancer treatment and a principal cause of mortality in cancer patients [1]. Therefore, elucidating the molecular mechanisms that underlie tumor growth and metastasis should lead to more effective therapies, in part by enabling the eradication of metastatic cancer cells. To this end, it has been established that in many types of human cancer, tumor cells can acquire the capability to metastasize to distant organs, ultimately resulting in organ failure and death [1, 2]. Although the mechanisms remain largely unknown, overexpression of certain oncoproteins [3] or downregulation of tumor suppressor proteins [4] has been demonstrated to play important roles in tumor growth and metastasis. In this regard, the metastasis suppressor 1 (MTSS1) protein, also known as MIM (missing in metastasis), has recently been characterized as a tumor suppressor [5]. Notably, MTSS1 is widely expressed in normal tissues and in some non-metastatic cancer cell lines; however, its expression is significantly decreased or absent in many metastatic cancers, including metastatic bladder cancer [6], prostate cancer [7], gastric cancer [8] and kidney cancer [9], suggesting that MTSS1 could function as an anti-metastatic protein. Furthermore, an inverse correlation has been observed between MTSS1 expression and poor prognosis in breast cancer [10]. Specifically, findings from large cohorts of breast cancer clinical samples indicated that
decreased MTSS1 expression was associated with poorer prognosis, whereas high levels of MTSS1 correlated with increased overall patient survival [10]. Interestingly, it has been reported that all three MTSS1 splice variants are significantly reduced in prostate cancer, whereas overexpression of MTSS1 markedly reduces the proliferation of prostate cancer cells [7]. These findings indicate that MTSS1 might function as a tumor suppressor, and that loss of MTSS1 facilitates the development of human cancers, including breast and prostate cancers. In contrast to its reduced expression in many human cancers, however, overexpression of MTSS1 has been observed in hepatocellular carcinoma [11], although its physiological significance in liver cancer remains elusive.
Functionally, MTSS1 acts as a cytoskeletal scaffold protein that regulates cytoskeletal dynamics by interacting with many different proteins, such as Rac, actin and actin-associated proteins [12-14]. In doing so, MTSS1 increases the formation of lamellipodia, membrane ruffles, and filopodia-like structures, and also promotes the disassembly of actin stress fibers. Mechanistically, MTSS1 contains a WH2 domain in its C-terminal region that preferentially interacts with ATP-bound G-actin, the active form of actin involved in polymerization. Indeed, MTSS1 has been observed to possess roughly fivefold higher affinity for ATP-bound G-actin than for ADP-bound G-actin monomers in the cell. Recent studies have also suggested that MTSS1 competes with the VCA domain of the WH2 domain-containing neuronal Wiskott–Aldrich syndrome protein (N-WASP) for binding to G-actin [15]. Given the critical role of N-WASP in actin remodeling and cytoskeleton formation [16], these findings reveal a critical role for MTSS1 in G-actin polymerization, in part by inhibiting the physiological interaction between N-WASP and G-actin. In addition to interacting directly with G-actin, MTSS1 is also reported to interact with various other proteins, such as the Rac GTPase, cortactin and RPTPδ, all of which are well-characterized regulators of cell migration, invasion and cell-cell interaction [14, 16, 17]. Therefore, MTSS1 might govern cellular processes including migration and invasion in part by influencing the cytoskeleton. To this end, previous studies have also indicated that MTSS1 promotes the formation of dorsal ruffles in response to PDGF, resulting in cell shape changes. Interestingly, PDGF induces phosphorylation of MTSS1 at Tyr-397 and Tyr-398 in a Src kinase-dependent manner [18].
These findings indicate that MTSS1 is potentially involved in mediating the PDGF signaling pathway to promote actin cytoskeleton formation via Src-related kinases [18]. However, it remains largely unclear how MTSS1 stability is physiologically controlled, and which upstream signaling pathway, aberrantly activated in cancer cells, contributes to the reduced MTSS1 abundance frequently observed in various human cancers.
The ubiquitin proteasome system (UPS) plays an important role in the timely degradation of key cellular proteins, thereby controlling many cellular processes including cell signaling and cell cycle regulation [19]. Dysfunction of the UPS is involved in the development of many diseases, including cancer [20, 21]. Three enzymes act sequentially in the protein ubiquitination and destruction process: the ubiquitin-activating enzyme (E1), the ubiquitin-conjugating enzyme (E2) and the ubiquitin ligase (E3); the E3 ligase determines the substrate specificity of this three-step ubiquitination process [19]. The SCF\textsuperscript{β-TRCP} E3 ubiquitin ligase complex plays a key role in cell cycle regulation [19]. However, its exact role as a tumor suppressor or oncogene might be tissue- or cellular context-dependent, as both loss of β-TRCP and aberrant upregulation of β-TRCP have been reported in different types of human cancers [22]. Notably, elevated levels of β-TRCP were observed in a number of cancers, including pancreatic cancer [23], gastric cancer [24] and breast cancer [25]. Furthermore, consistent with a possible oncogenic role for β-TRCP in certain tissues, another study demonstrated that suppression of β-TRCP reduces prostate cancer growth [26]. These findings indicate that in certain tissue types, increased expression of β-TRCP may lead to enhanced degradation of its substrates, including tumor suppressors, to facilitate tumorigenesis. In keeping with this notion, we report here that the tumor suppressor MTSS1 is a novel substrate of β-TRCP. In further support of our hypothesis, we have identified an evolutionarily conserved phosphodegron (DSGXXS) in MTSS1 that mediates the interaction with, and subsequent ubiquitination by, β-TRCP in a CKI-dependent manner.
More importantly, ectopic expression of a non-degradable MTSS1 exerts stronger effects than WT-MTSS1 in suppressing tumor cell proliferation and migration. Therefore, these studies reveal the CKI/β-TRCP signaling axis as a novel regulatory route governing the stability of the MTSS1 tumor suppressor, and suggest that elevated CKI or β-TRCP expression might lead to accelerated destruction of MTSS1, facilitating tumorigenesis and tumor metastasis.
**RESULTS**
**MTSS1 stability is negatively regulated by the SCF\textsuperscript{\beta-TRCP} E3 ubiquitin ligase complex:**
Cullin–RING complexes comprise the largest known class of E3 ubiquitin ligases, which play essential roles in targeting regulatory proteins for ubiquitin-mediated destruction [27]. Cullins are the critical scaffold proteins that complex with other essential components such as Skp1, F-box protein and Rbx1 to form various functional E3 ubiquitin ligases. Thus, we began our
investigation by examining whether a specific Cullin–RING complex interacts with MTSS1. Notably, we found that Cullin 1 specifically binds endogenous as well as ectopically expressed MTSS1, but not the other members of the Cullin family (Cullin 2–5) we examined (Figure 1A and Supplementary Figure S1A). This result suggests that the SCF complex (Skp1–Cullin1–F-box protein complex) might be specifically involved in the regulation of MTSS1 protein stability. Next, we sought to identify the specific F-box protein that regulates MTSS1 stability. To this end, previous studies from various groups, including ours, showed that β-TRCP specifically binds substrates that contain a DSG(XX)S phosphodegron motif in which the two serine residues are phosphorylated by one or more upstream kinases [28–30]. Notably, a DSG(XX)S phosphodegron motif is readily identified within the MTSS1 protein and is conserved among various species (Figure 4A). To test the hypothesis that MTSS1 is a novel substrate of SCFβ-TRCP, we examined whether β-TRCP directly interacts with MTSS1. We found that both exogenously expressed and endogenous β-TRCP1 interact with MTSS1 (Figure 1B, 1C and Supplementary Figure S1B, S1C). Furthermore, β-TRCP1-R474A, which carries a mutation in its substrate-interacting motif [31], was deficient in associating with MTSS1, indicating a specific interaction between β-TRCP1 and MTSS1 (Figure 1B and Supplementary Figure S1B, S1C). Importantly, we observed that phosphatase treatment significantly reduced the interaction between MTSS1 and β-TRCP1 (Figure 1D), supporting a phosphorylation-dependent interaction between the two proteins.
**Figure 1: SCF complex containing β-TRCP1 and Cullin 1 interacts with MTSS1.**
(A) Immunoblot (IB) analysis of whole cell lysates (WCL) and immunoprecipitates (IP) derived from 293T cells transfected with Myc-tagged Cullin constructs or empty vector (EV) as a negative control. (B) IB analysis of WCL and IP derived from 293T cells transfected with Flag–tagged wild-type or R474A mutant β-TRCP1 constructs, or EV as indicated. (C) 293T cell extracts were immunoprecipitated with antibody against MTSS1, or control IgG and analyzed by IB analysis. (D) IB analysis of WCL and IP derived from 293T cells transfected with Myc-MTSS1 and Flag–β-TRCP1 constructs as indicated. Where indicated, cell lysates were pre-treated with λ-phosphatase before the IP procedure. (E) IB analysis of WCL and IP derived from 293T cells transfected with Myc-MTSS1 and Flag-Rbx1 constructs, as indicated. (F) IB analysis of WCL and IP derived from 293T cells transfected with GST-MTSS1 and Myc-Skp1 constructs, as indicated.
Consistent with the key role of the SCF complex in the regulation of MTSS1 stability, we also found interactions between MTSS1 and Rbx1 (Figure 1E and Supplementary Figure S1D), as well as between MTSS1 and Skp1 (Figure 1F). Together, these findings suggest that the SCF complex comprising Cullin 1, Rbx1, Skp1, and β-TRCP is involved in the regulation of MTSS1 stability. In further support of physiological roles for β-TRCP and Cullin 1 in the regulation of MTSS1, we found that depletion of endogenous β-TRCP or Cullin 1 significantly upregulated MTSS1 (Figure 2A, 2B and Supplementary Figure S2A, S2B). Importantly, depletion of β-TRCP caused a marked increase in MTSS1 half-life (Figure 2C and 2D), but not in MTSS1 mRNA levels (Figure 2E and 2F). Moreover, in support of the notion that SCFβ-TRCP regulates MTSS1 abundance through a post-translational mechanism, treatment with the proteasome inhibitor MG132 significantly upregulated MTSS1 protein levels,
**Figure 2: MTSS1 protein stability is controlled by the SCFβ-TRCP E3 ubiquitin ligase.**
(A) Immunoblot (IB) analysis of whole cell lysates (WCL) derived from 293T cells infected with shRNA constructs specific for GFP, β-TRCP1 (four independent lentiviral β-TRCP1-targeting shRNA constructs, namely -A, -B, -C, -D), or β-TRCP1+2, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (B) IB analysis of WCL from 293T cells infected with shRNA specific for GFP or one of several shRNA constructs against Cullin 1 (five independent lentiviral Cullin 1-targeting shRNA constructs, namely -A, -B, -C, -D, -E), followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (C) 293T cells were infected with the indicated shRNA constructs followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. The generated stable cell lines were then split into 60-mm dishes. 20 hours later, cells were treated with 20 μg/ml CHX. At the indicated time points, WCL were prepared, and immunoblots were probed with the indicated antibodies. (D) Quantification of the band intensities in C. MTSS1 band intensity was normalized to tubulin, and then normalized to the t = 0 controls. The error bars represent mean ± SD (n = 3). (E-F) Relative mRNA levels of MTSS1 (E) or β-TRCP1 (F) in 293T cells infected with shRNA constructs specific for GFP, β-TRCP1 (-A and -B), or β-TRCP1+2, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. MTSS1 and β-TRCP1 mRNA levels were normalized to GAPDH, and then normalized to the control cells (shGFP). (G) IB analysis of WCL derived from 293T cells treated with vehicle or MG132 as indicated.
indicating the potential involvement of the 26S proteasome in MTSS1 degradation (Figure 2G). Together, these findings suggest that a post-translational regulatory mechanism, namely the ubiquitin proteasome system, is involved in the regulation of MTSS1 stability.
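The relative mRNA quantification described in the Figure 2E-F legend (target normalized to GAPDH, then to the shGFP control) is the standard 2^-ΔΔCt calculation. A minimal sketch with invented Ct values (the function name and sample values are ours, not from the paper):

```python
# Sketch of relative mRNA quantification: normalize the target gene's Ct
# to GAPDH (delta-Ct), then to the shGFP control (delta-delta-Ct), and
# report fold change as 2^-ddCt. Ct values are hypothetical.

def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    delta_ct = ct_target - ct_gapdh                # normalize to GAPDH
    delta_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl
    delta_delta_ct = delta_ct - delta_ct_ctrl      # normalize to shGFP
    return 2.0 ** (-delta_delta_ct)

# shβ-TRCP1 sample vs shGFP control (hypothetical Ct values)
print(fold_change(24.0, 18.0, 24.0, 18.0))  # identical delta-Ct -> 1.0
```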
**Figure 3: CKIδ is involved in the regulation of MTSS1 protein stability mediated by SCFβ-TRCP.**
(A) Immunoblot (IB) analysis of whole cell lysates (WCL) derived from HeLa cells transfected with Myc-MTSS1, Flag-β-TRCP1, and indicated kinases. Where indicated, cells were treated with the proteasome inhibitor MG132. (B) IB analysis of WCL derived from 293T cells transfected with Myc-MTSS1 and/or Myc-CKIδ together with Flag-WT–β-TRCP1 or Flag-R474A–β-TRCP1. (C) IB analysis of WCL and immunoprecipitates (IP) derived from 293T cells transfected with GST-MTSS1 and Myc-tagged versions of the indicated CKI isoforms. (D) IB analysis of HeLa cells that were infected with shRNA specific for GFP or the indicated CKI isoforms, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (E) IB analysis of HeLa cells treated with the CKI inhibitor D4476 at the indicated concentrations for 12 hours. (F) IB analysis of WCL and IP derived from HeLa cells that were infected with shGFP, shCKIδ-A or shCKIδ-B, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. The various generated HeLa cell lines were then transfected with Flag–β-TRCP1 and/or Myc-MTSS1 as indicated. (G) HeLa cells were infected with the indicated shRNA constructs followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. The various generated HeLa cell lines were then transfected with Myc-MTSS1, Flag–β-TRCP1. 20 hours post-transfection, the cells were split into 60-mm dishes before being treated with 20 μg/ml CHX. At the indicated time points, WCL were prepared, and immunoblots were probed with the indicated antibodies. (H) Quantification of the band intensities in G. Myc-MTSS1 band intensity was normalized to tubulin, and then normalized to the t = 0 controls. The error bars represent mean ± SD (n = 3).
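The CHX-chase quantification described for Figures 2D and 3H (band intensity normalized to tubulin, then to t = 0) also yields a protein half-life if one assumes first-order decay. A sketch under that assumption, with invented band intensities (the helper names and numbers are ours):

```python
# Sketch: normalize CHX-chase band intensities to tubulin and to t = 0,
# then estimate half-life from a log-linear least-squares fit, assuming
# first-order decay. All intensity values are hypothetical.
import math

def normalize(mtss1, tubulin):
    ratios = [m / t for m, t in zip(mtss1, tubulin)]
    return [r / ratios[0] for r in ratios]   # relative to t = 0

def half_life(times, rel_intensity):
    # least-squares slope of ln(intensity) vs time; t1/2 = ln(2)/k
    logs = [math.log(y) for y in rel_intensity]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs)) \
            / sum((t - t_mean) ** 2 for t in times)
    return math.log(2) / -slope

times = [0, 2, 4, 8]  # hours after CHX addition
rel = normalize([1.00, 0.50, 0.25, 0.0625], [1.0, 1.0, 1.0, 1.0])
print(round(half_life(times, rel), 2))  # -> 2.0 h for this idealized decay
```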
Casein Kinase Iδ (CKIδ) is involved in the regulation of MTSS1 protein stability:
As β-TRCP only recognizes its substrates when they are properly phosphorylated by one or a combination of kinase(s) [32, 33], we sought to identify the upstream kinase that phosphorylates MTSS1 to trigger its destruction by β-TRCP. In this regard, both CKIδ and GSK3β have been previously identified as critical players in phosphorylating the targeted proteins of β-TRCP for ubiquitin-mediated protein degradation [34, 35]. Hence, to determine the specific kinase involved in the degradation of MTSS1, we transfected HeLa and 293T cells with Myc-MTSS1 and Flag-β-TRCP1 along with CKIδ or GSK3β and further analyzed MTSS1 levels by western blot analysis. Interestingly, we found that CKIδ, but not GSK3β, efficiently promoted the degradation of MTSS1 (Figure 3A and Supplementary Figure S3A). Furthermore, MG132, a 26S proteasome inhibitor, completely prevented MTSS1 degradation mediated by CKIδ and β-TRCP1, suggesting the involvement of the 26S proteasome in this process (Figure 3A and Supplementary Figure S3A).
Importantly, the mutant β-TRCP1 (R474A), which is unable to interact with MTSS1 (Figure 1B), failed to promote MTSS1 degradation in the presence of CKIδ (Figure 3B and Supplementary Figure S3B), indicating a critical role for the C-terminal substrate-binding WD40 repeat motif in β-TRCP1-mediated destruction of MTSS1.
To further confirm the potential role of CKIδ in MTSS1 regulation, we performed co-immunoprecipitation (Co-IP) experiments using various Myc-tagged CKI isoforms and GST-MTSS1 to determine whether CKIδ interacts with MTSS1 *in vivo*. Notably, we found that CKIδ, but not other CKI isoforms such as CKIα or CKIε, specifically interacts with MTSS1 (Figure 3C). In further support of a physiological role for CKIδ in governing MTSS1 stability, we demonstrated that MTSS1 abundance was significantly elevated upon inactivation of CKIδ, either by depletion of endogenous CKIδ or by treatment with the CKI inhibitor D4476 (Figure 3D, 3E and Supplementary Figure S3C). More importantly, either depletion of CKIδ (Figure 3F) or inactivation of CKIδ by D4476 (Supplementary Figure S3D) significantly disrupted the interaction between β-TRCP and MTSS1. In keeping with the critical role of CKIδ in MTSS1 stability control,
**Figure 4: CKIδ-mediated phosphorylation of MTSS1 at Ser322 triggers its interaction with β-TRCP1 for subsequent ubiquitination and degradation.**
(A) Alignment of the candidate phospho-degron sequence in MTSS1 from different species. (B) Immunoblot (IB) analysis of HeLa cells transfected with Flag-β-TRCP1 and Myc-tagged wild-type or S322A mutant MTSS1 constructs, as indicated. (C) IB analysis of whole cell lysates (WCL) and immunoprecipitates (IP) derived from HeLa cells transfected with Flag-β-TRCP1 together with Myc-WT-MTSS1 or Myc-S322A-MTSS1. (D) IB analysis of WCL and IP derived from 293T cells transfected with Flag-β-TRCP1, His-Ubiquitin, and Myc-tagged wild-type or S322A mutant MTSS1 constructs, or EV, as indicated.
depletion of CKIδ significantly extended the MTSS1 protein half-life (Figure 3G and 3H). Together, these findings indicate a role for CKIδ in the negative regulation of MTSS1 stability.
**CKIδ phosphorylates Ser322 to promote MTSS1’s interaction with β-TRCP1 for subsequent ubiquitination and degradation:**
In keeping with previously identified β-TRCP substrates, there is a canonical DSGXXS phospho-degron present in MTSS1 that could be recognized by β-TRCP upon proper phosphorylation by kinases [28]. Importantly, this degron is conserved among various species (Figure 4A). To test the significance of this putative DSGXXS phosphodegron in MTSS1 protein stability, we created a point mutation in the DSG motif by replacing the Ser322 residue with alanine (S322A). In support of the critical role for Ser322 in β-TRCP1-mediated destruction of MTSS1, we found that wild-type, but not the S322A mutant form of MTSS1, could be efficiently degraded in the presence of β-TRCP1 and CKIδ (Figure 4B). Moreover, the proteasome inhibitor MG132 completely prevented the degradation of MTSS1 suggesting the involvement of a proteasome-mediated degradation mechanism in this process (Figure 4B). In keeping with this finding, unlike WT-MTSS1, S322A-MTSS1 was deficient in interacting with β-TRCP1 (Figure 4C), providing a possible explanation for its resistance to β-TRCP1/CKIδ-mediated destruction. Consistently, *in vivo* ubiquitination assays revealed that wild-type, but not the S322A mutant form of MTSS1, could be ubiquitinated *in vivo* (Figure 4D). These findings indicated that phosphorylation of Ser322 within the canonical phospho-DSG degron motif in MTSS1 is potentially involved in governing MTSS1 destruction mediated by β-TRCP and CKIδ.
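The DSGXXS degron described above is simple enough to locate computationally. A sketch of such a scan, using a made-up toy sequence rather than the real MTSS1 sequence (positions reported are 1-based):

```python
# Sketch: scan a protein sequence for the canonical β-TRCP phospho-degron
# DSGXXS described in the text. The toy sequence below is invented; it is
# not the MTSS1 sequence.
import re

DEGRON = re.compile(r"DSG..S")  # X = any residue

def find_degrons(seq: str):
    """Return (1-based start, matched motif) for each DSGXXS occurrence."""
    return [(m.start() + 1, m.group()) for m in DEGRON.finditer(seq)]

toy_seq = "MKTAADSGIHSGSLLDSGQES"
print(find_degrons(toy_seq))  # -> [(6, 'DSGIHS'), (16, 'DSGQES')]
```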
**β-TRCP-mediated destruction of MTSS1 affects cancer cell proliferation and migration:**
Given that a significant decrease in MTSS1 abundance is frequently observed in both prostate and breast cancers [7, 10], we sought to investigate whether MTSS1 expression in these cancer cells inversely correlates with cellular proliferation and migration. To begin this investigation, we first analyzed MTSS1 protein levels in various prostate and breast cancer cell lines. Notably, we found that the PC3 prostate cancer cells and the MDA-MB-231 breast cancer cells displayed a significantly reduced expression of MTSS1, whereas
---
**Figure 5:** β-TRCP levels inversely correlate with MTSS1 abundance in several cancer cell lines. (A) Whole cell lysates (WCL) prepared from the indicated cancer cell lines were analyzed by immunoblot (IB) analysis. (B) IB analysis of WCL prepared from PC3 cells that were infected with shRNA constructs specific for GFP or Cullin 1, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (C) IB analysis of WCL prepared from PC3 cells that were infected with shRNA constructs specific for GFP, β-TRCP1 (A, B), or β-TRCP1+2, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (D) IB analysis of WCL prepared from MDA-MB-231 cells that were infected with shRNA constructs specific for GFP or Cullin 1, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells. (E) IB analysis of WCL prepared from MDA-MB-231 cells that were infected with shRNA constructs specific for GFP, β-TRCP1 (A, B), or β-TRCP1+2, followed by selection with 1 μg/ml puromycin for three days to eliminate the non-infected cells.
DU145 and MCF-7 cells expressed relatively high MTSS1 levels (Figure 5A). Furthermore, we noticed that MTSS1 levels inversely correlate with endogenous β-TRCP1 levels, suggesting that β-TRCP1 expression levels might dictate the abundance of MTSS1 in this experimental setting. To test this hypothesis, we depleted endogenous Cullin 1 or β-TRCP via lentiviral shRNA infection and examined the effects on MTSS1 abundance. In keeping with a critical role for SCFβ-TRCP in governing MTSS1 stability, we found that depletion of either Cullin 1 or both β-TRCP isoforms led to a significant upregulation of MTSS1 in both PC3 and MDA-MB-231 cells (Figure 5B-E).
These results indicate that the SCF complex consisting of Cullin 1 and β-TRCP might play a key role in the regulation of MTSS1 in both breast and prostate cancer cells. As β-TRCP is the first E3 ligase identified for MTSS1, to explore the biological significance of
**Figure 6: Mutant MTSS1 inhibits PC3 and MDA-MB-231 cancer cell proliferation.** (A-D) PC3 (A, B) or MDA-MB-231 (C, D) cells were infected with pBabe-EV, pBabe-HA-wild-type-MTSS1 or pBabe-HA-S322A-MTSS1 retroviral vectors, and photographs were taken after growing the resulting stable cell lines for 3 days in puromycin (1 μg/ml) selection medium. The numbers of PC3 (B) or MDA-MB-231 (D) cells were counted at the indicated time points and normalized against the number of cells in the corresponding pBabe-EV cultures. The error bars represent mean ± SD (n = 3). (E-H) PC3 (E, F) or MDA-MB-231 (G, H) cells were infected with the same retroviral vectors and grown for 3 days in puromycin (1 μg/ml) selection medium to eliminate the non-infected cells. The resulting cell lines were then pulsed with BrdU and immunostained with an anti-BrdU antibody as described in the Methods section. Quantitative measurements of BrdU-stained PC3 (F) or MDA-MB-231 (H) cells are presented, normalized against the corresponding pBabe-EV cells. The error bars represent mean ± SD (n = 3). *p < 0.05.
SCF\textsuperscript{β-TRCP}-mediated destruction of MTSS1, we next examined how ectopic expression of a non-degradable mutant form of MTSS1 (S322A-MTSS1) or wild-type MTSS1 (as a control) in PC3 and MDA-MB-231 cancer cells affects cellular migration and proliferation (Supplementary Figure S4A-B). Empty vector (EV)-expressing cells served as a negative control for this experimental system. Importantly, S322A-MTSS1-expressing PC3 and MDA-MB-231 cells exhibited significantly reduced growth potential compared to wild-type MTSS1- or EV-infected cells (Figure 6A-6D). Consistent with this finding, ectopic expression of S322A-MTSS1 was more effective than WT-MTSS1 or EV controls in decreasing cell entry into S phase,
**Figure 7:** **Mutant MTSS1 inhibits PC3 and MDA-MB-231 cancer cell migration.** (A-D) PC3 (A) or MDA-MB-231 (C) cells were infected with pBabe-EV, pBabe-HA-wild-type-MTSS1 or pBabe-HA-S322A-MTSS1 retroviral vectors and, after 3 days of puromycin (1 μg/ml) selection to eliminate the non-infected cells, the resulting cell lines were subjected to trans-well cell migration assays and photographed. Quantitative measurement of migrated PC3 (B) or MDA-MB-231 (D) cells was assessed after 12 hours; the number of cells was normalized against the corresponding pBabe-EV cells. The error bars represent mean ± SD (n = 3). *p < 0.05. (E-F) Scratch assays were performed with PC3 cells infected with the same retroviral vectors, followed by 3 days of puromycin (1 μg/ml) selection to eliminate the non-infected cells. The resulting PC3 cell lines were seeded in 6-well plates and the monolayer surface was scratched with a 200 μl pipette tip. The gap width at the time of the scratch was set to a relative value of 1. Representative photographs at 0 and 18 hours after the scratch are shown (E). Measurements were done in duplicate in 3 separate experiments, and data are depicted as average gap width (F). The error bars represent mean ± SD (n = 3). ***p < 0.001; *p < 0.05. (G-H) Scratch assays were performed with MDA-MB-231 cells infected with the same retroviral vectors, followed by 3 days of puromycin (1 μg/ml) selection to eliminate the non-infected cells. The resulting MDA-MB-231 cell lines were seeded in 6-well plates and the monolayer surface was scratched with a 200 μl pipette tip. The gap width at the time of the scratch was set to a relative value of 1. Representative photographs at 0 and 18 hours after the scratch are shown (G). 
Measurements were done in duplicate in 3 separate experiments, and data are depicted as average gap width (H). The error bars represent mean ± SD (n = 3). ***p < 0.001; *p < 0.05.
as illustrated by reduced BrdU staining in both PC3 and MDA-MB-231 cells (Figure 6E-6H). This suggests that elevated MTSS1 expression, in part due to deficient destruction by the SCF\(\beta\)-TRCP E3 ligase, might suppress tumorigenesis by reducing S phase entry and cellular proliferation.
Furthermore, given the well-characterized role of MTSS1 in cytoskeletal remodeling and cellular migration, we conducted cell migration assays to investigate how SCF\(\beta\)-TRCP-mediated destruction of MTSS1 might affect cellular migration, an important feature of human cancer invasion and metastasis. Notably, compared to wild-type MTSS1- and empty vector-expressing cells, S322A-MTSS1-expressing prostate (PC3) and breast (MDA-MB-231) cancer cells exhibited a significant reduction in cell migration (Figure 7A-7D) and a reduced ability to close scratch wounds (Figure 7E-7H). Importantly, these results coherently suggest that cancer cells expressing non-degradable MTSS1 show stronger inhibition of growth and migration than wild-type MTSS1- or EV-expressing cells, supporting a critical role of SCF\(\beta\)-TRCP-mediated destruction of MTSS1 in promoting tumor growth and migration (Figure 8).
**DISCUSSION**
Downregulation of MTSS1 has been observed in many types of human cancers, and complete loss of MTSS1 is associated with poorly differentiated metastatic tumors and with poor survival rates [6-9]. However, the underlying molecular mechanisms responsible for reduced expression of MTSS1 in human cancers remain largely unknown. To this end, our study, for the first time, identified \(\beta\)-TRCP/CKI\(\delta\) signaling as a major regulatory mechanism that governs MTSS1 degradation. \(\beta\)-TRCP is an F-box protein that regulates many cellular processes by the timely targeting of various substrates for proteasome-mediated degradation [36]. However, the exact role of \(\beta\)-TRCP in tumorigenesis could be tissue-specific or cellular context-dependent. \(\beta\)-TRCP dysfunction, including both loss and elevated expression of \(\beta\)-TRCP, has been reported in distinct types of human cancers [22]. Notably, \(\beta\)-TRCP overexpression has been reported in certain tumor types, including human breast and prostate cancers, where it could lead to increased degradation of its substrates [24, 37]. Our study suggests that, at least in breast or prostate cancer settings, \(\beta\)-TRCP overexpression could lead to increased degradation of the tumor suppressor MTSS1, thereby contributing to tumorigenesis. In this direction, we found a possible inverse relationship between MTSS1 and \(\beta\)-TRCP expression in both prostate and breast cancer cell lines (Figure 5A). Our study also demonstrated that \(\beta\)-TRCP specifically interacts with MTSS1 in a CKI\(\delta\)-dependent manner and promotes the ubiquitination and subsequent proteasome-mediated degradation of MTSS1 (Figure 4B).
Although previous reports indicate that dysregulation of MTSS1 gene expression in part contributes to loss of MTSS1 [8], our study indicates that accelerated MTSS1 degradation, possibly due to elevated expression of \(\beta\)-TRCP, could be an alternative mechanism accounting for the reduced abundance of the MTSS1 tumor suppressor in various human cancers.
In this study, we also identified CKI\(\delta\) as the upstream kinase that is potentially involved in \(\beta\)-TRCP-mediated degradation of MTSS1. We found that CKI\(\delta\), but not other CKI isoforms, phosphorylates MTSS1 at

**Figure 8.** Proposed model for how aberrant elevation of SCF\(\beta\)-TRCP-mediated ubiquitination and degradation of the MTSS1 tumor suppressor may contribute to tumorigenesis and metastasis in part by promoting tumor cell growth and migration.
Ser322 within the DSGXXS phospho-degron to trigger its interaction with β-TRCP for subsequent ubiquitination and degradation of MTSS1. Consistently, the wild-type, but not an S322A mutant form of MTSS1, interacted with β-TRCP. As a result, S322A-MTSS1 was resistant to β-TRCP/CKIδ-mediated degradation, thereby displaying an extended half-life. These findings support a critical role for phosphorylation of Ser322 in regulating MTSS1 stability. However, it remains to be determined whether, in addition to elevated β-TRCP expression [24, 37, 38], cancer cells also exhibit increased CKI activity, resulting in increased degradation of MTSS1. Notably, it has been demonstrated that aberrant CKI activity is linked to carcinogenesis [39]. More specifically, the CKIδ and CKIε isoforms are known to possess growth-promoting and anti-apoptotic characteristics, and elevated CKIδ and CKIε activities are associated with the development of ductal carcinoma of the pancreas [40]. Moreover, increased CKI activity promotes SV40-induced cellular transformation both *in vitro* and *in vivo*, suggesting a potential oncogenic role of CKI in tumorigenesis [41]. Taken together, these observations indicate that aberrancies in the MTSS1 degradation pathway, either through elevated expression of β-TRCP or hyper-activation of CKIδ, may downregulate the MTSS1 tumor suppressor to facilitate tumor cell growth and cancer progression (Figure 8).
Consistent with this notion, we found that, compared to cells expressing either WT-MTSS1 or the EV control, ectopic expression of non-degradable MTSS1 (S322A) significantly reduced cell growth and S phase entry, as evidenced by decreased BrdU staining, and markedly impaired cellular migration. These findings indicate that MTSS1 expression could prevent cancer cell growth, migration and possibly invasion. Although the precise molecular mechanisms by which MTSS1 inhibits tumor growth and metastasis remain largely unknown, it was reported that MTSS1 levels inversely correlate with the growth, invasion, adhesion and migration of kidney cancer cells, and that MTSS1 suppresses kidney cancer cell migration via the Sonic hedgehog (SHH) pathway [9]. Other studies have indicated that MTSS1 promotes cell-cell junction assembly by recruiting the small GTPase Rac1 and actin, which drive junction maintenance. Therefore, loss of MTSS1 in cancers may lead to the loss of junction stability, which ultimately promotes EMT and metastasis [42]. Furthermore, MTSS1 is known to negatively regulate epidermal growth factor signaling to suppress metastasis [43]. Further studies are required to reveal the exact molecular mechanisms and signaling pathways through which MTSS1 modulates cancer cell migration and invasion.
In summary, our study provides a possible novel molecular mechanism for the frequent reduction in expression of the MTSS1 tumor suppressor in various types of human cancers. Our work further suggests that β-TRCP inhibitors or CKI inhibitors, in part by restoring MTSS1 expression to suppress cancer cell growth, proliferation and metastasis, may be beneficial in treating various types of human cancers, particularly the metastatic cancers that are associated with poor survival rates.
**MATERIALS AND METHODS**
**Cell Culture:**
HeLa, 293T, 293FT cells and the breast cancer cell lines MCF-7 and MDA-MB-231 were cultured in DMEM medium (Life Technologies, CA) supplemented with 10% FBS, penicillin and streptomycin. The prostate cancer cell lines DU145 and PC3 were cultured in RPMI 1640 medium with 10% FBS and antibiotics.
**Plasmids:**
MTSS1 cDNAs were subcloned using Pfu polymerase (Agilent Technologies) into the pCMV-GST [34] or the pcDNA3-Myc vector to create GST-MTSS1 or Myc-MTSS1 in-frame fusion proteins, respectively. Short hairpin RNA (shRNA) lentiviral vectors, i.e., shRNA–β-TRCP1, shRNA–β-TRCP1+2, shRNA-GFP, and CKI constructs were described previously [32, 44]. Flag–β-TRCP1 and Flag–β-TRCP1-R474A constructs were described previously [30, 34]. Myc-Cullin 1, Myc-Cullin 2, Myc-Cullin 3, Myc-Cullin 4A, and Myc-Cullin 5 constructs were gifts from J. DeCaprio (Dana-Farber Cancer Institute, Boston, MA). Lentiviral shRNA constructs against GFP and CKIδ were obtained from W. Hahn (Dana-Farber Cancer Institute, Boston, MA). shRNA lentiviral vectors against Cullin 1 and Cullin 4A were gifts from J. Wade Harper (Harvard Medical School, Boston, MA).
**Cell transfection and viral transduction procedures:**
For cell transfection, $5 \times 10^5$ HeLa or 293T cells were seeded in 60-mm plates and transfected using Lipofectamine (Invitrogen) in OptiMEM medium (Invitrogen) for 48 hours according to the manufacturer’s instructions. For viral transduction experiments, $6 \times 10^5$ HEK 293T cells were seeded in 60-mm dishes and cotransfected the next day with each lentivirus or retrovirus vector, along with helper plasmids (i.e., gag-pol and VSV-G were used for lentiviral infections). Media with progeny virus from transfected cells was collected every 24 h for 2 d, and then filtered with 0.45-μm filters (Millipore) and freshly used to infect 293T, HeLa, prostate or breast cancer cells overnight in the presence of 8 μg/ml Polybrene (Sigma-Aldrich). After infection, the cells were
selected with 1 μg/ml puromycin (Sigma-Aldrich) for 72 hours to eliminate the uninfected cells before collecting the whole cell lysates (WCLs) for the subsequent biochemical assays. Knockdown or overexpression in the transduced cells was confirmed by real-time RT-PCR or western blot analysis.
**Antibodies and reagents:**
Anti-MTSS1 antibody (4386) was purchased from Cell Signaling Technology. α-c-Myc (9E10) and polyclonal α-HA antibodies (Y-11) were purchased from Santa Cruz Biotechnology. α-Tubulin antibody (T-5168), α-Vinculin antibody (V-4505), polyclonal α-Flag antibody (F-2425), monoclonal α-Flag antibody (F-3165), α-HA agarose beads (A-2095), peroxidase-conjugated α-mouse secondary antibody (A-4416) and peroxidase-conjugated α-rabbit secondary antibody (A-4914) were purchased from Sigma. Monoclonal α-HA antibody (MMS-101P) was purchased from Covance and α-GFP antibody (632380) was from Invitrogen. Anti-Cullin 1 (4995) and anti-β-TRCP1 (4394) antibodies were purchased from Cell Signaling Technology.
**Immunoblots and immunoprecipitation:**
Cells were lysed in EBC-lysis buffer (50 mM Tris, pH 8.0, 120 mM NaCl, and 0.5% NP-40) supplemented with protease inhibitors (Complete Mini; Roche) and phosphatase inhibitors (phosphatase inhibitor cocktail set I and II; EMD Millipore). The protein concentrations of the lysates were measured using a protein assay reagent (Bio-Rad Laboratories, CA) on a DU-800 spectrophotometer (Beckman Coulter). The lysate samples were then resolved by SDS-PAGE and immunoblotted with the indicated antibodies. For immunoprecipitation assays, 20 hours post-transfection, cells were treated with 10 μM MG132 overnight before harvesting for immunoprecipitation. 800 μg of protein lysates were incubated with the appropriate antibodies (1–2 μg) overnight at 4°C, followed by addition of carrier beads. Immunocomplexes were washed five times with NETN buffer (20 mM Tris, pH 8.0, 100 mM NaCl, 1 mM EDTA, and 0.5% NP-40) before being resolved by SDS-PAGE and immunoblotted with the indicated antibodies.
**Protein degradation analysis and protein half-life studies:**
Cells were seeded in 6-cm culture dishes 20 hours before transfection. Cells were transfected with 2.0 μg Myc-MTSS1, along with 1.0 μg Flag-β-TRCP1 and 0.1 μg of a plasmid encoding GFP as an internal control, in the presence or absence of 0.4 μg Myc-CKIδ. For half-life studies, 20 μg/ml cycloheximide (CHX; Sigma-Aldrich) was added to the medium 40 hours post-transfection. At various time points thereafter, cells were lysed and protein concentrations were measured. 30 μg of each whole cell lysate (WCL) was separated by SDS-PAGE and protein levels were measured by immunoblot analysis.
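In CHX-chase experiments of this kind, the half-life is typically estimated by fitting a first-order exponential decay to the quantified band intensities. The following is only a minimal sketch of that calculation; the time points and intensities are hypothetical placeholders, not data from this study:

```python
import numpy as np

# Hypothetical CHX-chase data: chase time (hours) and MTSS1 band
# intensities normalized to the t = 0 signal (illustrative values only).
t = np.array([0.0, 2.0, 4.0, 8.0])
intensity = np.array([1.00, 0.62, 0.40, 0.16])

# First-order decay I(t) = I0 * exp(-k*t); fit ln(I) = -k*t + c by
# least squares, then half-life = ln(2) / k.
slope, _ = np.polyfit(t, np.log(intensity), 1)
k = -slope
half_life = np.log(2) / k

print(f"decay constant k = {k:.3f} /h, half-life = {half_life:.2f} h")
```

With these placeholder intensities the fit gives a half-life of roughly 3 hours; in practice the intensities would come from densitometry of the immunoblot bands, normalized to a loading control.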
**In vivo ubiquitination assay:**
Cells were transfected with His-Ubiquitin along with Myc-MTSS1 (wild-type or S322A) and Flag-β-TRCP1. Thirty-six hours after transfection, cells were harvested, and the lysates were incubated with Ni-NTA matrices (Qiagen) at 4 °C for 12 h in the presence of 8 M urea (pH 7.5). Immobilized proteins were washed five times with 8 M urea (pH 6.3) before being resolved by SDS-PAGE and immunoblotted with the anti-Myc antibody [45].
**Scratch assay:**
Cancer cells were grown to confluency in 6-well plates. The cell monolayer was scraped in a straight line with a pipette tip. Photographs of the scratch were taken at 0 h and 18 h. The gap width at 0 h was set to 1. Gap width analysis was performed with Photoshop CS4 using the analytical ruler tool. Measurements were taken at multiple defined sites (>5) along the scratch, and all measurements for each scratch were averaged. Data are expressed as the average of three independent experiments.
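The gap-width normalization described above amounts to averaging the per-site measurements and expressing the later time point relative to the 0 h average. A small sketch of that arithmetic, with hypothetical pixel measurements standing in for the ruler readings:

```python
# Hypothetical gap-width measurements (pixels) at six sites along one
# scratch, at 0 h and 18 h (illustrative values, not data from this study).
width_0h = [410, 398, 405, 412, 400, 395]
width_18h = [120, 131, 115, 126, 118, 122]

def mean(xs):
    """Average of a list of measurements."""
    return sum(xs) / len(xs)

# The gap width at the time of the scratch is set to a relative value of 1,
# so the 18 h width is divided by the 0 h average.
baseline = mean(width_0h)
relative_gap_18h = mean(width_18h) / baseline

print(f"relative gap width at 18 h: {relative_gap_18h:.2f}")
```

A relative value near 0.3 in this sketch would mean roughly 70% of the gap has closed after 18 hours; the per-experiment relative widths are then averaged across the three independent experiments.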
**Cell migration assay:**
For the cell migration assay, $1 \times 10^5$ cells in serum-free media containing 0.1% BSA were added in triplicate to the upper chamber of a Transwell filter (8 μm pore size; Corning). Cell-conditioned media was added to the lower chamber. After a 16-h incubation at 37 °C, non-migrated cells at the top of the filter were removed using cotton swabs. Cells that had migrated to the bottom of the filter were fixed and stained using the Hema-3 stain set. Cells were then counted using a 20× objective, with four fields chosen per well and three wells per condition [29].
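The per-field counts described above are typically averaged per condition and normalized to the empty-vector control, as in Figure 7B and 7D. A short illustration of that normalization, using invented counts rather than the study's data:

```python
# Hypothetical migrated-cell counts (4 fields x 3 wells = 12 fields per
# condition); condition names follow the retroviral vectors used here,
# but the counts themselves are invented for illustration.
counts = {
    "EV":    [52, 48, 55, 50, 49, 53, 51, 47, 54, 50, 52, 49],
    "WT":    [30, 28, 33, 29, 31, 27, 32, 30, 28, 31, 29, 30],
    "S322A": [14, 12, 15, 13, 16, 11, 14, 13, 12, 15, 13, 14],
}

def mean(xs):
    """Average of a list of counts."""
    return sum(xs) / len(xs)

# Normalize each condition's mean count against the pBabe-EV control.
ev_mean = mean(counts["EV"])
relative = {name: mean(vals) / ev_mean for name, vals in counts.items()}

for name, rel in relative.items():
    print(f"{name}: migration relative to EV = {rel:.2f}")
```

By construction the EV control is 1.0, and in this sketch the non-degradable S322A condition shows the lowest relative migration, mirroring the trend reported in Figure 7.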
**Bromodeoxyuridine (BrdU) labeling:**
BrdU labeling was performed as described previously [46]. Briefly, cultures were incubated with 1 μg/ml (3 μM) BrdU and 1 mg/ml uridine for 48 h. Cells were washed with PBS, fixed with ice-cold absolute methanol for 10 min, treated with 1.5 M HCl for 1 h at room temperature, and neutralized with 0.1 M borate buffer (pH 8.5) for 30 min. After blocking with 0.1% bovine serum albumin (BSA) in PBS for 30 min at 37°C,
cells were incubated with 5 μg/ml anti-BrdU monoclonal antibody (PharMingen) in 0.1% BSA/PBS for 1 h, washed with 0.1% BSA/PBS, and incubated with 1 μg/ml HRP-conjugated rabbit anti-mouse secondary antibody (Jackson ImmunoResearch) for 1 h. Cells were then washed extensively with ammonia-buffered phosphate (ABP; 0.1 M NaH₂PO₄ brought to pH 7.0 with ammonium hydroxide) and stained for 12–16 h at room temperature with 1.3 mM 3,3′-diaminobenzidine in ABP containing 0.004% H₂O₂. Experiments were performed three times to generate error bars.
**FUNDING:**
This work was supported by grants from the National Institute of General Medical Sciences, NIH (GM089763, GM094777) to W.W. W.W. is an American Cancer Society Scholar and a Leukemia and Lymphoma Society Research Scholar. A.T. is supported by an NIH NRSA fellowship. H.I. is supported by an NIH K01 award (AG041218).
**ACKNOWLEDGEMENTS:**
We thank Alan Lau, Pengda Liu, Wenjian Gan and other members of the Wei laboratory for critical reading and discussion of this manuscript.
**Competing Financial Interests:**
The authors have no conflicting financial interests.
**REFERENCES**
1. Spano D, Heck C, De Antonellis P, Christofori G and Zollo M. Molecular networks that regulate cancer metastasis. Semin Cancer Biol. 2012; 22(3):234-249.
2. Chaffer CL and Weinberg RA. A perspective on cancer cell metastasis. Science. 2011; 331(6024):1559-1564.
3. Yang JL, Ow KT, Russell PJ, Ham JM and Crowe PJ. Higher expression of oncoproteins c-myc, c-erb B-2/neu, PCNA, and p53 in metastasizing colorectal cancer than in nonmetastasizing tumors. Ann Surg Oncol. 1996; 3(6):574-579.
4. Goncharuk VN, del-Rosario A, Kren L, Anwar S, Sheehan CE, Carlson JA and Ross JS. Co-downregulation of PTEN, KAI-1, and nm23-H1 tumor/metastasis suppressor proteins in non-small cell lung cancer. Ann Diagn Pathol. 2004; 8(1):6-16.
5. Lee YG, Macoska JA, Korenchuk S and Pienta KJ. MIM, a potential metastasis suppressor gene in bladder cancer. Neoplasia. 2002; 4(4):291-294.
6. Nixdorf S, Grimm MO, Loberg R, Marreiros A, Russell PJ, Pienta KJ and Jackson P. Expression and regulation of MIM (Missing In Metastasis), a novel putative metastasis suppressor gene, and MIM-B, in bladder cancer cell lines. Cancer Lett. 2004; 215(2):209-220.
7. Loberg RD, Neeley CK, Adam-Day LL, Fridman Y, St John LN, Nixdorf S, Jackson P, Kalikin LM and Pienta KJ. Differential expression analysis of MIM (MTSS1) splice variants and a functional role of MIM in prostate cancer cell biology. Int J Oncol. 2005; 26(6):1699-1705.
8. Liu K, Wang G, Ding H, Chen Y, Yu G and Wang J. Downregulation of metastasis suppressor 1(MTSS1) is associated with nodal metastasis and poor outcome in Chinese patients with gastric cancer. BMC Cancer. 2010; 10:428.
9. Du P, Ye L, Li H, Yang Y and Jiang WG. The tumour suppressive role of metastasis suppressor-1, MTSS1, in human kidney cancer, a possible connection with the SHH pathway. J Exp Ther Oncol. 2012; 10(2):91-99.
10. Parr C and Jiang WG. Metastasis suppressor 1 (MTSS1) demonstrates prognostic value and anti-metastatic properties in breast cancer. Eur J Cancer. 2009; 45(9):1673-1683.
11. Ma S, Guan XY, Lee TK and Chan KW. Clinicopathological significance of missing in metastasis B expression in hepatocellular carcinoma. Hum Pathol. 2007; 38(8):1201-1206.
12. Yamagishi A, Masuda M, Ohki T, Onishi H and Mochizuki N. A novel actin bundling/filopodium-forming domain conserved in insulin receptor tyrosine kinase substrate p53 and missing in metastasis protein. J Biol Chem. 2004; 279(15):14929-14936.
13. Mattila PK, Salminen M, Yamashiro T and Lappalainen P. Mouse MIM, a tissue-specific regulator of cytoskeletal dynamics, interacts with ATP-actin monomers through its C-terminal WH2 domain. J Biol Chem. 2003; 278(10):8452-8459.
14. Woodings JA, Sharp SJ and Machesky LM. MIM-B, a putative metastasis suppressor protein, binds to actin and to protein tyrosine phosphatase delta. Biochem J. 2003; 371(Pt 2):463-471.
15. Lin J, Liu J, Wang Y, Zhu J, Zhou K, Smith N and Zhan X. Differential regulation of cortactin and N-WASP-mediated actin polymerization by missing in metastasis (MIM) protein. Oncogene. 2005; 24(12):2059-2066.
16. Suetsugu S, Murayama K, Sakamoto A, Hanawa-Suetsugu K, Seto A, Oikawa T, Mishima C, Shirouzu M, Takenawa T and Yokoyama S. The RAC binding domain/IRSp53-MIM homology domain of IRSp53 induces RAC-dependent membrane deformation. J Biol Chem. 2006; 281(46):35347-35358.
17. Utikal J, Gratchev A, Muller-Molinet I, Oerther S, Khzyshkowska J, Arens N, Grobholz R, Kannookadan S and Goerdt S. The expression of metastasis suppressor MIM/MTSS1 is regulated by DNA methylation. Int J Cancer. 2006; 119(10):2287-2293.
18. Wang Y, Zhou K, Zeng X, Lin J and Zhan X. Tyrosine
phosphorylation of missing in metastasis protein is implicated in platelet-derived growth factor-mediated cell shape changes. J Biol Chem. 2007; 282(10):7624-7631.
19. Shaik S, Liu P, Fukushima H, Wang Z, Wei W. Protein Degradation in Cell Cycle. Encyclopedia of Life Sciences. 2012; 1:1-8.
20. Devoy A, Soane T, Welchman R and Mayer RJ. The ubiquitin-proteasome system and cancer. Essays Biochem. 2005; 41:187-203.
21. Yang Y, Kitagaki J, Wang H, Hou DX and Perantoni AO. Targeting the ubiquitin-proteasome system for cancer therapy. Cancer Sci. 2009; 100(1):24-28.
22. Lau AW, Fukushima H and Wei W. The Fbw7 and betaTRCP E3 ubiquitin ligases and their roles in tumorigenesis. Front Biosci (Landmark Ed). 2012; 17:2197-2212.
23. Muerkoster S, Arlt A, Sipos B, Witt M, Grossmann M, Kloppel G, Kalthoff H, Folsch UR and Schafer H. Increased expression of the E3-ubiquitin ligase receptor subunit betaTRCP1 relates to constitutive nuclear factor-kappaB activation and chemoresistance in pancreatic carcinoma cells. Cancer Res. 2005; 65(4):1316-1324.
24. Saitoh T and Katoh M. Expression profiles of betaTRCP1 and betaTRCP2, and mutation analysis of betaTRCP2 in gastric cancer. Int J Oncol. 2001; 18(5):959-964.
25. Kudo Y, Guardavaccaro D, Santamaria PG, Koyama-Nasu R, Latres E, Bronson R, Yamasaki L and Pagano M. Role of F-box protein betaTrcp1 in mammary gland development and tumorigenesis. Mol Cell Biol. 2004; 24(18):8184-8194.
26. Gluschnaider U, Hidas G, Cojocaru G, Yutkin V, Ben-Neriah Y and Pikarsky E. beta-TrCP inhibition reduces prostate cancer cell growth via upregulation of the aryl hydrocarbon receptor. PLoS One. 2010; 5(2):e9060.
27. Petroski MD and Deshaies RJ. Function and regulation of cullin-RING ubiquitin ligases. Nat Rev Mol Cell Biol. 2005; 6(1):9-20.
28. Frescas D and Pagano M. Deregulated proteolysis by the F-box proteins SKP2 and beta-TrCP: tipping the scales of cancer. Nat Rev Cancer. 2008; 8(6):438-449.
29. Shaik S, Nucera C, Inuzuka H, Gao D, Garnaas M, Frechette G, Harris L, Wan L, Fukushima H, Husain A, Nose V, Fadda G, Sadow PM, Goessling W, North T, Lawler J, et al. SCF(beta-TRCP) suppresses angiogenesis and thyroid cancer cell migration by promoting ubiquitination and destruction of VEGF receptor 2. J Exp Med. 2012; 209(7):1289-1307.
30. Gao D, Inuzuka H, Tan MK, Fukushima H, Locasale JW, Liu P, Wan L, Zhai B, Chin YR, Shaik S, Lyssiotis CA, Gygi SP, Toker A, Cantley LC, Asara JM, Harper JW, et al. mTOR drives its own activation via SCF(betaTrCP)-dependent degradation of the mTOR inhibitor DEPTOR. Mol Cell. 2011; 44(2):290-303.
31. Wu G, Xu G, Schulman BA, Jeffrey PD, Harper JW and Pavletich NP. Structure of a beta-TrCP1-Skp1-beta-catenin complex: destruction motif binding and lysine specificity of the SCF(beta-TrCP1) ubiquitin ligase. Mol Cell. 2003; 11(6):1445-1456.
32. Jin J, Shirogane T, Xu L, Nalepa G, Qin J, Elledge SJ and Harper JW. SCFbeta-TRCP links Chk1 signaling to degradation of the Cdc25A protein phosphatase. Genes Dev. 2003; 17(24):3062-3074.
33. Cardozo T and Pagano M. The SCF ubiquitin ligase: insights into a molecular machine. Nat Rev Mol Cell Biol. 2004; 5(9):739-751.
34. Inuzuka H, Tseng A, Gao D, Zhai B, Zhang Q, Shaik S, Wan L, Ang XL, Mock C, Yin H, Stommel JM, Gygi S, Lahav G, Asara J, Xiao ZX, Kaelin WG, Jr., et al. Phosphorylation by casein kinase I promotes the turnover of the Mdm2 oncoprotein via the SCF(beta-TRCP) ubiquitin ligase. Cancer Cell. 2010; 18(2):147-159.
35. Liu C, Li Y, Semenov M, Han C, Baeg GH, Tan Y, Zhang Z, Lin X and He X. Control of beta-catenin phosphorylation/degradation by a dual-kinase mechanism. Cell. 2002; 108(6):837-847.
36. Fuchs SY, Spiegelman VS and Kumar KG. The many faces of beta-TrCP E3 ubiquitin ligases: reflections in the magic mirror of cancer. Oncogene. 2004; 23(11):2028-2036.
37. Spiegelman VS, Slaga TJ, Pagano M, Minamoto T, Ronai Z and Fuchs SY. Wnt/beta-catenin signaling induces the expression and activity of betaTrCP ubiquitin ligase receptor. Mol Cell. 2000; 5(5):877-882.
38. Spiegelman VS, Tang W, Chan AM, Igarashi M, Aaronson SA, Sassoon DA, Katoh M, Slaga TJ and Fuchs SY. Induction of homologue of Slimb ubiquitin ligase receptor by mitogen signaling. J Biol Chem. 2002; 277(39):36624-36630.
39. Knippschild U, Wolff S, Giamas G, Brockschmidt C, Wittau M, Wurl PU, Eismann T and Stoter M. The role of the casein kinase 1 (CK1) family in different signaling pathways linked to cancer development. Onkologie. 2005; 28(10):508-514.
40. Brockschmidt C, Hirner H, Huber N, Eismann T, Hillenbrand A, Giamas G, Radunsky B, Ammerpohl O, Bohm B, Henne-Bruns D, Kalthoff H, Leithauser F, Trauzold A and Knippschild U. Anti-apoptotic and growth-stimulatory functions of CK1 delta and epsilon in ductal adenocarcinoma of the pancreas are inhibited by IC261 in vitro and in vivo. Gut. 2008; 57(6):799-806.
41. Hirner H, Gunes C, Bischof J, Wolff S, Grothey A, Kuhl M, Oswald F, Wegwitz F, Bosl MR, Trauzold A, Henne-Bruns D, Peifer C, Leithauser F, Deppert W and Knippschild U. Impaired CK1 delta activity attenuates SV40-induced cellular transformation in vitro and mouse mammary carcinogenesis in vivo. PLoS One. 2012; 7(1):e29709.
42. Dawson JC, Bruche S, Spence HJ, Braga VM and Machesky LM. Mts1 promotes cell-cell junction assembly and stability through the small GTPase Rac1. PLoS One. 2012; 7(3):e31141.
43. Dawson JC, Timpson P, Kalna G and Machesky LM. Mtss1 regulates epidermal growth factor signaling in head and neck squamous carcinoma cells. Oncogene. 2012; 31(14):1781-1793.
44. Shirogane T, Jin J, Ang XL and Harper JW. SCFbeta-TRCP controls clock-dependent transcription via casein kinase 1-dependent degradation of the mammalian period-1 (Per1) protein. J Biol Chem. 2005; 280(29):26863-26872.
45. Inuzuka H, Shaik S, Onoyama I, Gao D, Tseng A, Maser RS, Zhai B, Wan L, Gutierrez A, Lau AW, Xiao Y, Christie AL, Aster J, Settleman J, Gygi SP, Kung AL, et al. SCF(FBW7) regulates cellular apoptosis by targeting MCL1 for ubiquitylation and destruction. Nature. 2011; 471(7336):104-109.
46. Wei W and Sedivy JM. Differentiation between senescence (M1) and crisis (M2) in human fibroblast cultures. Exp Cell Res. 1999; 253(2):519-522. |
The 2020 school year has been challenging from the first school day in August; however, with the cooperation of students, parents and SGS staff members, we have made it through the first trimester. Overall, we have all adjusted our “normal” day-to-day activities and made the best of a less-than-perfect situation. As we approach this Thanksgiving holiday, it is easy to reflect on the many things that I am thankful for during this unique 2020 school year.
I am thankful:
- that our students can attend school and learn through in-person and remote settings.
- for hardworking students who have done a great job following all of the new guidelines and restrictions in order to safely attend school.
- for supportive and hardworking parents and community members.
- for dedicated and creative teachers and staff who strive to provide the SGS students with a safe and nurturing learning environment.
- for a supportive and proactive school board and administrative team.
- for good health, good friends and a supportive family.
Over the last two years we have been working with Sunvest Solar to develop a strategy to provide electricity for the District. This summer we were able to start construction, in cooperation and partnership with Lamb Sunny Farm LLC, on a 3-acre site behind South Campus. This 600-kilowatt ground-mounted solar array went on-line September 23rd. It is estimated to generate about 750,000 kilowatt-hours of electricity per year, enough to provide 100% of the electrical needs for South Campus. Attached, please find pictures of the array. Thank you to Sunvest Solar, the Lamb Family, and Whitt Law for helping make this project possible.
Once again, I would like to thank you for making the first third of the 2020-2021 school year successful. We will continue to consult with the LaSalle County Health Department and as long as our numbers reflect positive school health, we will continue to provide in-person learning. Please check out the SGS webpage www.sgs170.org under Live Feed for our weekly COVID Numbers and Communications.
It’s great to be part of Raider Nation!!
Eric Misener
As I have reflected back on this past trimester, I have a wide range of feelings and reactions. There are many things that I miss from the traditional start of the school year, with the largest being having all our students in person with us. And while I am sad we are not all “together” under the same building roof, I am so unbelievably proud of how we have come together as our Raider Nation regardless of where a student receives his/her instruction. I can’t express my gratitude enough to the amazing staff who have worked so hard to meet every challenge 2020 has thrown at them. Our families have put their trust in us to provide a safe environment for all students, and we appreciate your trust and support. All our support staff have been outstanding, providing additional help to meet our building and student needs. Of course, I am most proud of the students and their resilience and positivity throughout this situation. Students are ready to learn, whether that is in-person or remotely, and are working so hard to do their best. When I doubt a decision or become frustrated with another roadblock or issue, I just go spend time with the students and things become remarkably clear: we are lucky to have this amazing community of learners, and we will come through this.
Keep up with the various happenings at the school on our school website and app. All classrooms are connected to ClassDojo – so be sure to check it often. Teachers are frequently sharing all the great things students are doing in their classrooms. They also post important reminders about upcoming events.
We are so excited to welcome Mrs. Elaine Virgo to our staff as our new STEM teacher. STEM stands for Science, Technology, Engineering, and Math. Mrs. Virgo visits every classroom each week to share an activity aligned to the recently adopted state science standards. She has been a wonderful technology resource for both staff and students, but the real fun and learning are in the weekly STEM projects, which offer all students interactive science instruction that promotes analysis and interpretation of data, critical thinking, problem solving, and connections across science disciplines. Students in all grade levels have had the opportunity to design and build a variety of projects. This has been a welcome addition to our curriculum! Be sure to ask your child about STEM class.
As we enter the winter season, I would like to remind families that we will go outside for recess when the temperature is 15 degrees or above. Please check the weather each day and make sure your child has all necessary items to stay warm. We have many generous supporters of the school, so please contact us if you need any help with coats, hats, mittens, etc. We want everyone to be warm this winter!
Mr. Severson’s Report
The start of the 2020-21 school year has been a very unique experience filled with uncertainty and unprecedented circumstances. That being said, I am very excited that we have been able to offer an in-person option for our students throughout the 1st Trimester and are continuing to offer that option as we start our 2nd Trimester. Our ability to continue to extend educational options is a credit to our staff, our students and to all of you. On behalf of everyone here at the South Campus, I would like to sincerely thank all of our parents, guardians and community members for your continued patience, understanding and cooperation. Our staff continues to work hard providing instruction to our in-person and remote learners under circumstances that often prove challenging, and our students continue to achieve academically while overcoming many obstacles they have never experienced. I feel very fortunate to be part of our school community and look forward to working with all of you as we continue to find new ways to overcome challenges and provide our students with the best opportunities to succeed.
Parent/Teacher Conferences
Parent Teacher Conferences will take place Monday, November 23rd and Tuesday, November 24th. Conferences provide a great opportunity to meet with your student’s teachers, whether in-person or remotely, to discuss current performance levels as well as expectations moving forward. The following are the scheduled times for conferences:
**Monday, November 23rd**
1:00 – 4:00 pm
5:00 – 8:00 pm
(4:00 – 5:00 pm will serve as a dinner break for the teachers)
**Tuesday, November 24th**
8:30 am – 12:00 pm
Please contact the South Campus Main Office at (815) 357-8744 ext. 3100 with any questions or concerns about parent/teacher conferences.
The 2020 cross country season began at the beginning of August. The team was fortunate enough to have a season with 8 regular meets and a sectional. The Sectional was hosted at Seneca Middle School Campus for the first time. On Friday, October 16th, Evelyn O’Connor placed first and was crowned the sectional champion with a time of 12:30. Nate Sprinkel earned a 5th place medal with a personal best time of 13:29. Connor Pabian and Aiden Burton also competed at sectionals and ran very well. The cross country team consisted of all 8th grade runners this season. Congratulations to our runners on your hard work and dedication.
The 2020 Raider baseball season has come to an end and the program has a lot to be proud of. Although the A team didn’t finish with the record they had hoped for, the team still showed a lot of improvement as the season progressed, and has shown a lot of potential moving forward to the high school level. The B team finished with a record of 4-2 on the shortened season. The team was led by a group of six experienced 7th graders, as well as four 6th graders.
Even though it was a shortened, conference only season, we were all very happy to get an opportunity to get out and have a season! The boys worked hard and improved throughout the short season. We would like to wish all of the 8th graders good luck in high school, and can’t wait for young players to step up and grow for next season.
The Lady Raiders A team had a very successful season. Their hard work during practices earned them a 6-3 record. They found themselves with commanding leads during many of their games thanks to solid pitching and defense, as well as some clutch hitting. The girls played their best game of the season during Regionals losing by 1 run in the bottom of the 7th inning. Good luck to this year’s 8th graders who will be moving on to the next level!
The Lady Raiders B team ended the season with an impressive 5-0 record. Layken Callahan dominated on the mound with strong support from the defense. Offensively the bats were solid and the girls worked their way around the bases with smart base running. They worked hard throughout the season and gained confidence while up to bat and out in the field as well. It was nice to see the many improvements throughout the season, and I look forward to working with the girls next year.
From the Nurse’s Office:
The following is a list of Covid guidelines from the Illinois Department of Public Health:
If ONE person in your household has any of the following symptoms, please keep all of your children home and contact their pediatrician:
- Temperature of 100.0 or higher, chills, or body aches
- Headache, sore throat, nasal congestion/runny nose
- Develops a cough &/or shortness of breath
- Nausea, vomiting, diarrhea
- Fatigue, muscle or body aches, &/or abdominal pain (of unknown cause)
- Loss of smell &/or taste
Please do not hesitate to call the nurse’s office with any questions: 357-8744 ext. 2110 for the North Campus nurse or ext. 3110 for the South Campus nurse.
Thank you,
Mrs. Widman, Mrs. Berg and Mrs. Larson
Mark Your Calendar
Dates to Remember
NOVEMBER
NOV 23 EARLY DISMISSAL 11:25/11:35
PARENT TEACHER CONFERENCE
1PM-8PM
NOV 24 NO SCHOOL
PARENT TEACHER CONFERENCE
8AM-12PM
NOV 25 – 27 NO SCHOOL – THANKSGIVING BREAK
DECEMBER
DEC 18 CHRISTMAS BREAK
EARLY DISMISSAL 1:15 NC/1:25 SC
DEC 21 - JAN 1 NO SCHOOL CHRISTMAS BREAK
JANUARY
JAN 4 SCHOOL RESUMES
JAN 18 NO SCHOOL – M.L. KING’S BIRTHDAY
FEBRUARY
FEB 12 EARLY DISMISSAL 11:25/11:35
FEB 15 NO SCHOOL PRESIDENT’S DAY
FEB 26 END OF 2ND TRIMESTER
Seneca Community Consolidated School District #170 Inclement Weather Notification
In the event of severely inclement weather or mechanical breakdown, school may be closed. The same conditions may also necessitate an early dismissal. School closings or early dismissals will be announced on the school web page live feed, the school Facebook page, and the school text, email, and phone notification system, as well as over WCSJ-AM (1550) and WJDK (95.7) in Morris, WCMY-AM (1430) and WRKX-FM (95.3) in Ottawa, and major Chicago radio stations. Reports will be made between 6:00 a.m. and 7:00 a.m. We will attempt to make the determination to cancel school as early as possible.
Elaine Marieb Center for Nursing and Engineering Innovation
UNTAP
The unique insights of nurses and engineers to address healthcare challenges at the forefront of patient care.
FORGE
New pathways in interdisciplinary research and education to create an open forum for sharing and learning.
EMPOWER
Nurses and engineers to lead healthcare innovation for a healthier and more equitable future.
Elaine Marieb Center for Nursing and Engineering Innovation
651 Pleasant Street
Amherst, MA 01003
Website: umass.edu/marieb-nurse-engineer
Support the Center: minutefund.uma-foundation.org/project/41101
The Elaine Marieb Center for Nursing and Engineering Innovation aims to create and support innovative solutions to healthcare challenges that are fueled by the Nurse-Engineer approach and other interdisciplinary teaming. Three UMass alumni were critical in the 2021 founding of the Center: Michael and Terry Hluchyj (of the College of Engineering and the College of Nursing, respectively) generously donated initial seed funding, and the Elaine Nicpon Marieb Charitable Foundation gave additional funds. Like Terry Hluchyj, RN, Dr. Marieb, RN, graduated from the College of Nursing. As part of the Center mission to disseminate the Nurse-Engineer approach, we provide seed funding to UMass faculty to help interdisciplinary teams jump-start translational research, engage in joint projects, and obtain external funding. We look forward to continuing growth.
Elaine Marieb Center for Nursing & Engineering Innovation
651 Pleasant Street
Amherst, MA 01003
Website: umass.edu/marieb-nurse-engineer
Far left: Seated at table, clockwise: Asmita Deb ‘25; David Pinero-Jacome; Marco Vital; Seonhun ‘Hoon’ Lee; Dr. Frank Sup; Dr. Karen Giuliano, RN; Kourosh Alimohammadbeik, RN; Ben Shih ‘25; and Ruchi Gupta ‘24. Seated at bedside: Jenny Le ‘25; Alex Xu ‘24; and Gina Georgadarellis.
Above: 2023 Core Interns Ruchi Gupta ‘24 and Jenny Le ‘25 demonstrating how to take blood pressure.
Front cover: Gina Georgadarellis, MS and Jenny Le ‘25 demonstrating potential healthcare uses for Stretch the Center robot to Eureka! Scholars and Center Summer Interns.
A Message from our Co-Directors
By Dr. Karen K. Giuliano, RN and Dr. Frank C. Sup, IV
As we reflect upon the achievements of the Elaine Marieb Center for Nursing and Engineering Innovation over the last year, we are excited to highlight some of the most notable activities and milestones that we have achieved to advance the mission of the Center.
Noteworthy accomplishments include numerous interdisciplinary conference presentations and publications at the local, regional, and national levels, collaborative initiatives with influential organizations, such as the Institute for Safe Medication Practices (ISMP), the independent healthcare technology authority ECRI, and the American Nurses Association, our Second Annual Symposium, our first Faculty-Student Roundtable Research Discussion Day, submission of our first patent and our ongoing and strong collaboration with Baystate Health. We co-sponsored a pilot grant with the UMass Institute for Diversity Sciences to support issues in healthcare equity in our local community, received the 2023 State of Massachusetts legislature award as manufacturer of the year for Western MA, and submitted a large National Science Foundation Graduate Training Grant.
Hands-on experience in research and healthcare product development for undergraduate students, graduate students, and our postdoctoral fellows remains central to our approach. In this report, you will see several examples of our product development projects, which are always informed by real-world clinical issues identified by frontline caregivers. Additionally, we are engaged in both our IV smart pump laboratory in IALS (Institute for Advanced Life Sciences) and our robotics laboratories, advancing the science through nurse-engineer teaming.
The summer of 2023 kept our entire team very busy. We had a total of 13 Elaine Marieb Center staff working all summer, including Frank and Karen, three graduate students, five undergraduate summer interns, and our first three high school students. In addition to being involved in Center research and product development projects, our summer nurse-engineer teams participated in the Girls Inc. Summer Eureka! Program by creating and delivering four hands-on workshops: two in the nursing simulation lab at the Elaine Marieb College of Nursing and two in the College of Engineering Robotics lab.
In our roles as co-directors, we continue to draw upon our collective experiences in critical care nursing, human performance optimization, engineering, academic research, and industry healthcare product development. The activities supported by the Center represent our shared passion for promoting interdisciplinary partnerships and nurturing innovative mindsets. In the coming year, one of our goals is to create even stronger relationships with our industry partners to create additional real-world opportunities for our students.
Thank you for your ongoing interest, support, and collaboration. It is an honor for us as Elaine Marieb Center co-directors to continue to help the Center grow and thrive, and we remain committed to supporting the ever-increasing number of diverse projects initiated by our students, faculty, and partners.
Sincerely,
[Signatures]
Dr. Karen K. Giuliano, RN
Dr. Frank C. Sup, IV
Center Achievements
Center Infrastructure
- The Center laid the groundwork for key operational, budgetary, and marketing infrastructure.
- Developed short-term budgets and completed five-year financial plan.
- Completed the design and launch of a website with an outside vendor.
- Initiated our founding group of affiliated interdisciplinary faculty from across UMass.
- Developed a clinician database to assist us with product development and usability projects.
- Established our initial group of industry partners.
- Established an LSL lab for the Center Product Usability Lab with a three-year agreement with IALS.
- Established additional lab space in the fifth-floor IALS open laboratory for product testing and other Center research activities.
Research and Innovation
- Engaged both graduate and undergraduate nursing and engineering students in Center activities and projects.
- Held the Second Annual Symposium which was attended by students, alumni, and industry partners.
- Supported student and faculty members in initiating their own projects.
- Supported three undergraduate honors thesis projects and the work of three doctoral students.
- Developed a research relationship with the Baystate Medical Center Nursing Science team to enable joint projects and studies.
- Presented the IV smart pump research program to the internationally renowned ISMP/ECRI, and were invited to advise ECRI on their current IV smart pump testing standards.
- In the first Center Pilot Award Updates, a select group of UMass Nursing and Engineering faculty gathered to network, introduce their collaborative nurse-engineer research, and hear presentations from the Pilot Awardees on the progress of their projects.
- Awarded an IALS-Manning Grant to a nurse-engineer team to support the development of a product which we expect to patent.
Awards
‘Manufacturer of the Year’
The Marieb Center was honored as Manufacturer of the Year in the Third Hampshire District—which includes Amherst and two precincts in Granby—by the Massachusetts Legislature’s Manufacturing Caucus.
The Manufacturing Caucus is a collection of more than 70 state legislators who focus on training for manufacturing employees, encouraging innovation by helping start-ups access resources and expanding apprenticeship opportunities in key manufacturing sectors.
The Marieb Center was selected by Third Hampshire District Representative Mindy Domb, who said it “represents not only a new and compelling approach to research and development, but also transforms the scope and impact of healthcare device manufacturing by engaging the voices of nurses and patients directly into this process.”
The award was presented to Marieb Center co-directors Karen Giuliano and Frank Sup on Sept. 19, 2023 at the 8th annual “Making it in Massachusetts” Manufacturing Awards Ceremony, held at Polar Park, home of the Worcester Red Sox.
Eastern Nurses Research Society
Karen K. Giuliano was awarded the 2023 Eastern Nurses Research Society (ENRS) Distinguished Contributions to Nursing Research Award for the depth, breadth, and influence of her work. The synergies between Karen’s expertise in critical care and industry medical device development experiences provide a unique perspective that continues to drive her clinical research and scholarship interests.
On July 27, 2023, the Center welcomed the American Association of Colleges of Nursing and New Media News TV to UMass Amherst. New Media News partners with the AACN “to produce network-quality video programming to get messages across for reaching out to the public.” They filmed Center faculty, staff, and collaborators as they discussed Center research projects and methodologies in the Elaine Marieb College of Nursing Simulation lab and at other locations across campus.
Graduate researchers Gina Georgadarellis, MS, and Seonhun (Hoon) Lee, MS, were present, as were summer interns Marco Vital, David Pinero-Jacome, Ben Shih ’25, and Asmita Deb ’25. Center post-doctoral researcher Dr. Jeannine Blake, RN, explained her program of research, which focuses on the flow rates of and potential safety improvements to IV Smart Pumps; the project team includes her mentor, engineering researcher Dr. Juan Jiménez. Dr. Cidália Vital, RN, of Baystate Health discussed the Center’s collaboration with the Baystate Health System, which enables laboratory research, such as the potential use of robotics in healthcare, to be safely tested in the real-world hospital setting, stating, “Karen Giuliano and I have talked about making one of our units an innovation hub so that we can change the way one unit is doing the work, and then hopefully do that across the organization.”
The film created by AACN-TV and New Media News showcases the Center’s collaborations on campus and off and includes a conversation with co-directors Dr. Karen Giuliano, RN and Dr. Frank Sup about the importance of sharing the interdisciplinary nurse-engineer approach with undergraduate and graduate students. The result was a concise film that provided highlights from the Center’s achievements thus far and a view of the Center’s future. Watch the video on our website.
Chancellor Visit
In his first few weeks as Chancellor, Javier Reyes visited the university’s colleges and departments to immerse himself in the varied and broad research on the UMass Amherst campus, and the Elaine Marieb Center for Nursing and Engineering Innovation was honored to have him stop by.
Chancellor Reyes’s visit underscored the Center’s dedication to advancing knowledge and making a tangible impact on society within the UMass Amherst community. He heard from our student researchers about the work they have been doing with us: our Core Summer Interns discussed their internships and the UMass Eureka! program, which promotes the involvement of girls in STEM. The interns designed and implemented a workshop for the eighth-grade Eureka! scholars that introduced them to the nurse-engineer concept, potential uses of robotics in healthcare, and showed them how to measure vital signs.
Clockwise from left: Dr. Karen Giuliano, RN; Chancellor Javier Reyes; Dr. Frank Sup; Dean Allison Vorderstrasse, RN; Gina Georgadarellis, MS; Jenny Le ’25; Kourosh Alimohammadbeik, PhD candidate; Ben Shih ’25; Ruchi Gupta ’24.
Baystate Collaboration
The Center’s ongoing interdisciplinary collaboration with Baystate Health (the largest health system in Western Mass, based in Springfield, MA) has resulted in the creation of an environment in which frontline nurses can develop and thrive as clinicians, researchers, experts in implementation science, and leaders in healthcare innovation. The program is led by a full-time, doctorally prepared nurse scientist (Dr. Cidália Vital, RN), in collaboration with an academic nurse scientist (Dr. Karen Giuliano, RN) and a professor of engineering (Dr. Frank Sup). Through this formal interdisciplinary collaboration with Baystate Health, the Center bridges the gap between the lab bench and bedside patient care. Intravenous Smart pump research takes place at the bedside during actual clinical infusions; the study of flow rate accuracy during clinical use is a key example of how learnings from the lab bench can be further understood through clinical observation. Dr. Cidália Vital and Dr. Karen Giuliano were invited as co-presenters at the 2023 American Nurses Association Research Symposium. Their presentation, entitled “Academic-Clinical Practice Partnerships: Engaging Frontline Nurses in Research and Clinical Innovation,” highlighted the numerous benefits of the collaboration between Baystate Medical Center and the Elaine Marieb Center for Nursing and Engineering Innovation.
Other projects include working with biomedical engineering students on a new approach to fall prevention and a chest tube holder developed by undergraduate students. The collaboration’s focus on the professional development of nurses and engineers will continue to result in the advancement of nursing and engineering research and (over time) is expected to lead to measurable improvements in patient care; the development of innovative new medical products, and improved job satisfaction for frontline nurses.
Roundtable Symposium
At the 2023 Round Table Symposium, representatives from each of the Center’s pilot projects gathered to discuss their research in a round table discussion format. Researchers funded by Center Pilot Grants respond to real-world healthcare challenges using the Nurse-Engineer approach, an interdisciplinary research method in which nurses, engineers, and others work together to investigate, design, and adapt healthcare products and processes to make them more effective and accessible.
Since its 2021 inception, the Center has funded seven Pilot Project teams that have successfully addressed a wide range of topics, such as osteoarthritis in Chinese older adults, hypertension in the predominantly Black and Hispanic Springfield, MA community, and home healthcare in which vital signs are remotely monitored via a cloud-based and scalable platform.
“The Roundtable Symposium provided a valuable in-person experience to learn about innovative, inclusive, and collaborative projects taking place between nurses and engineers.”
Ben Monat, Associate Director of Donor Relations at the Elaine Marieb College of Nursing
The partnership between nurses and engineers provides several perspectives and the roundtable format was enjoyed by all; researchers and attendees alike found the back-and-forth discussion exciting.
Innovation Dinner
In early November 2023, we welcomed the American Nurse Association (ANA) Vice President of Innovation, Dr. Oriana Beaudet, RN, as the keynote speaker at a dinner celebrating nurse innovation. Nurses often find themselves improvising solutions to healthcare challenges at the bedside and beyond. To promote the health and well-being of patients, nurses create or adapt products and processes to make them safer and more efficient. In her role as VP of Innovation, Oriana works to “Ensure the ability for nurses to lead at all levels of society—thereby transforming practice and the health of people across the continuum of care.”
As part of the informative evening built around her talk, there was a poster session with Center-funded researchers, Dean Sanjay Raman of the College of Engineering and Dean Allison Vorderstrasse of the Elaine Marieb College of Nursing, a talk by Dr. Cidália Vital, RN, one of the nurse scientists leading Baystate Health’s collaboration with the Center, and remarks from Lily Stowe-Alekman, aide for State Representative Mindy Domb, who discussed the Manufacturer of the Year award given to the Center by Domb for “leadership of and dedication to manufacturing medical devices.”
In her keynote presentation titled “Uniting Nurse-Led Healthcare Innovation with Nurse-Engineer Teaming,” Oriana discussed innovative products created by nurses such as Safe Seizure pads (inflatable, easy to stock, and cost-conscious pads to protect hospital patients from injuries during seizures), and community pop-up clinics located in farmers markets and other unique locations that provide fresh produce alongside preventative healthcare.
Impactful Internships
Five undergraduates and three high schoolers hosted by the Center
IN THE summer of 2023, five Center interns joined us as part of the Institute for Advanced Life Sciences (IALS) Core Summer Internships (CSI) program, based in the IALS Core facilities, which seeks to advance human health and well-being through translational research. Through the program, interns participated in workshops on career-related skills such as presenting, public speaking, and networking while working in research. Of the well over a dozen CSI interns on campus, five were in our Center doing nurse-engineer research.
The Core Summer Interns we hosted for 2023 were Asmita Deb ’25, a biomedical engineering major, Ruchi Gupta ’24, a finance and computer science major, Jenny Le ’25, a nursing major, Ben Shih ’25, a nursing major, and Alex Xu ’24, a biomedical engineering major.
Interns had assigned research projects for the summer. They worked in the research lab with Dr. Jeannine Blake, RN, running trials on the efficacy of IV Smart Pumps under different conditions, using either Y-tubing or a manifold. Their work involved administering mock medication through the IV pump in each configuration to determine whether the medication output matched what was needed (or whether the pump fell behind or ran ahead on medication administration). They also used Tobii Pro eye-tracking technology to measure nurses’ error rates and the difficulty of programming various IV Smart Pumps. Another project, developed with Dr. Frank Sup, aimed to draft and create a design for a cage that could hold a chest tube drainage system.
The students’ impact was evident in the progress and work accomplished, and the interns themselves shared with us what it meant to be part of this research. As intern Shih noted, “To be there for the foundations of these new endeavors was an exciting and unique opportunity. It allowed me to consider the various aspects that go into new ideas and experiments, but also how to approach a complex idea from its most fundamental parts and put it back again without filler and only its necessities.”
‘Eureka!’ Moments
Eureka! summer program partners with EMCNEI
IN THE summer of 2023, five UMass Amherst undergraduate Core Interns and graduate researchers hosted workshops for rising 8th and 9th grade Eureka! scholars. The Eureka! Program at UMass is a partnership between UMass Amherst and Girls Inc. of the Valley. The program works to address the gender gap in the fields of science, technology, engineering, and math (STEM).
The interns, assisted by graduate researchers Gina Georgadarellis, MS and Seonhun [Hoon] Lee, MS, worked with Eureka! scholars over two days conducting workshops that introduced nursing and engineering concepts to spark their interest through hands-on learning.
For nursing, the Core interns taught scholars about vital signs and the cardiovascular system. The scholars learned about respiratory rates, active and resting heart rates, manual and automatic blood pressure measurement, and taking temperatures. Core Interns Ben Shih ’25 and Asmita Deb ’25 demonstrated the functions of mannequins in the simulation lab and how student nurses use the mannequins in preparation for in-person interactions with patients. Interacting with and teaching younger students about nurse education and the roles nurses have in healthcare was an impactful experience for the interns. Core Intern Jenny Le ’25 said, “Working with the Eureka! students honed my ability to explain nursing-specific knowledge in simple terminology, a crucial skill for effective communication with patients.”
The group also focused on engineering areas of robotics and programming, helping the scholars broaden their experience with science through hands-on fun.
Working in groups, Center interns instructed the Eureka! scholars on how to create Bristle Bot robots using toothbrush heads, circuits, and pipe cleaners. The Eureka! scholars participated in a robot simulation and learned about engineering and how by modifying the pipe cleaners and eyes, the robots moved in different ways. Graduate student researcher Hoon Lee remarked, “Their simple curiosity seemed to ignite more curiosity. What would happen if there were more motors? More batteries? What configurations would best predict the path the bristle bot would take? As I observed their actions, reaching and searching for spare parts, hot-gluing one to the other and testing out different scenarios, I felt that this workshop could be creating an appreciation for robotics and a possible career path for them.” §
Dr. Karen Giuliano, RN advising Eureka! scholars on Bristle Bot construction.
Two Eureka! students take turns and practice taking blood pressure.
Bristle Bots built by Eureka! students from toothbrush heads and pipe cleaners.
Pilot Project Updates
Pilot Projects 2022
Dr. Jeungok Choi (Nursing) and Dr. Yeonsik Noh’s (Nursing and Engineering) project, Development and Usability Test of a Tablet-Based Lifestyle-Modification Intervention for Chinese Older Adults with Osteoarthritis, is a cognitive behavioral therapy-based tablet application (CBT-OA) targeting Chinese older adults with osteoarthritis. The team used a “culturally competent” approach; that is, they grounded the development of the application in an awareness of and respect for Chinese culture. The team stated, “Addressing five areas in lifestyle modifications recommended by the 2017 Chronic Osteoarthritis Management Initiative, the CBT learning modules [included] healthy diet, healthy weight, exercise including simple walking and mind-body connections, as well as ‘think about your thinking’.” Specific tools within the modules were culturally relevant recipes, an activity log, a guide to exercise, and a video conferencing tool.
Consultants for this project were Christopher Martell, PhD, Director of the Psychological Services Center; Uhm Sueyeon, a mobile software developer; and Megan Chung, RN, MSW, Associate Clinical Director at the Greater Boston Chinese Golden Age Center. Graduate research assistants were Kien To, Master’s student in computer science; mechanical engineering graduate Mihir Patki; and Miaomiao Shen, College of Nursing doctoral candidate.
The study is now complete, and it demonstrated that the CBT-OA tablet program is effective for managing arthritis symptoms.
Dr. Yeonsik Noh (Nursing and Engineering) and Dr. Cynthia Jacelon’s (Nursing) project, Home Healthcare Monitoring Based on Cloud Native Architecture, developed and evaluated a cloud native-based Healthcare Monitoring Platform (CN-HMP) well suited to remote home healthcare monitoring, a need that grows along with the global older population. CN-HMP is a distributed, elastic, and horizontally scalable system in which state is isolated to a minimal number of components. Wearable devices collect health data in real time from one or more patient users, and the data are then forwarded to healthcare providers through the CN-HMP. The architecture’s flexible scalability across users and medical parameters, together with rapid and stable maintenance, enables an efficient response to crises.
Student research assistants were Shiyang Wang and Beizong (Max) Chen, graduates from the Electrical and Computer Engineering College, Abu Bony Amin and Ebenezer Asabre, doctoral students from the Electrical and Computer Engineering College, and Kourosh Alimohammadbeik, RN, College of Nursing doctoral student.
After evaluating the usability performance of the platform from the perspective of 14 nursing students, the completed study found that the platform was easy to use and effective. Stated the team, “By incorporating user-centered design principles, cloud-native architecture, and wearable sensors, this pilot study has laid a foundation for developing [large scale] home-based healthcare platforms.”
Dr. Joohyun Chung (Nursing) and Dr. Xian Du’s (Engineering) project, With An Unobtrusive Wearable EDA Sensor and Video-Recorded Body Movements to Assess Chronic Pain Among People with Alzheimer’s Disease and Related Dementias, addressed the need for innovative methods for chronic pain management in the Alzheimer’s Disease and Related Dementias (ADRD) population. Chronic pain is a prevalent symptom of people with ADRD that often goes undetected and untreated due to cognitive impairments and a reduced ability to communicate with caregivers.
The Chung-Du team, aided by engineering and robotics master’s student Meysam Safarzadeh, developed a deep learning-based automatic pain detection algorithm that assesses pain from EDA and video-recorded body movement. The algorithm was used to test the extent to which physiological signs (such as heart rate, interbeat intervals, and EDA activity) and patterns of body movement correlate with pain scores from nurses’ direct observation of healthy adults. The team discovered that running the algorithm in near real time during pain episodes allowed engagement to be adapted and was effective in measuring how changes in coping affected outcomes over the long term, demonstrating the potential for real-time interventions that deliver the right advice at the right time to help manage chronic pain among people with ADRD.
Dr. Jeannine Blake (Nursing) and Dr. Juan Jiménez’s (Engineering) project, Research into IV Smart Pumps, Safety Standards, and Flow Rate Inaccuracy, addresses the need for more effective IV smart pumps. Adverse events associated with the use of IV smart pumps are among the most frequent sources of error reported to the US Food and Drug Administration. Flow rate inaccuracies in IV smart pumps occur when the actual medication flow rate does not match the rate programmed and displayed on the pump’s user interface.
This team has developed novel methods for measuring flow and pressure through IV smart pump systems under clinically relevant setup conditions to better collect and understand flow rate accuracy data. The resulting research determined that variables in IV smart pump setups (such as the length of tubing or the height of IV bags) can alter pressure within the system and affect flow rates, yielding incorrect medication dosing. Their work includes a prototype product designed to mitigate those variables.
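The physical intuition behind these setup effects can be sketched with the classic Hagen-Poiseuille relation for laminar flow, which shows why bag height and tubing length shift the pressure and flow in an infusion line. This is an illustrative sketch only, not the team's method; the tubing radius, fluid viscosity, and heights below are assumed example values.

```python
import math

def poiseuille_flow_ml_per_s(height_m, tube_len_m, inner_radius_m=0.0015,
                             viscosity_pa_s=1.0e-3, density_kg_m3=1000.0):
    """Gravity-driven laminar flow through tubing (Hagen-Poiseuille).

    Illustrative only: a real IV smart pump actively controls the rate,
    but the same pressure terms (bag height, tubing length and diameter)
    perturb the delivered dose. Default parameters are assumed values.
    """
    delta_p = density_kg_m3 * 9.81 * height_m  # hydrostatic pressure (Pa)
    q_m3_s = (math.pi * delta_p * inner_radius_m**4) / (8 * viscosity_pa_s * tube_len_m)
    return q_m3_s * 1e6  # m^3/s -> mL/s

# Doubling the tubing length halves gravity-driven flow;
# raising the bag raises the driving pressure (and flow) linearly.
base = poiseuille_flow_ml_per_s(height_m=1.0, tube_len_m=2.0)
longer = poiseuille_flow_ml_per_s(height_m=1.0, tube_len_m=4.0)
higher = poiseuille_flow_ml_per_s(height_m=1.5, tube_len_m=2.0)
```

Because flow scales inversely with tubing length and linearly with hydrostatic pressure, even modest setup changes at the bedside translate directly into dosing error if uncorrected.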
Research is ongoing and has attracted the interest of regulatory bodies involved in policymaking, as well as many companies that manufacture the IV smart pumps currently in use.
Pilot Projects 2023
Dr. Yossi Chait (Engineering) and Dr. Jeungok Choi’s (Nursing) project, *Individualized Blood Pressure Management in the Community*, aims to introduce a novel blood pressure management device and examine the prevalence of hypertension in Springfield, MA and the surrounding area. Working at the Baystate High Street Health Center with local clinicians Michael Germain, MD, and Paul Pirraglia, MD, of Baystate Health, the team will collect data from the predominantly underserved population that the Baystate Health Center serves.
Student researchers were Sophia Tsekov ’24, Leia Payano ’24, Jesus Tejeda ’24, and Alexandria Galicki ’25 from the College of Engineering.
The team points out that “Black Americans and Hispanic Americans have significantly lower rates of BP [blood pressure] control than White Americans; these disparities are likely due to systemic racial discrimination, socioeconomic inequity, and unequal access to healthcare services.” Introducing an innovative blood pressure management approach, the team’s work addresses transportation as a factor in hypertension: if people cannot get to the places where treatment is offered, that treatment becomes ineffective, regardless of how leading-edge it is. This ongoing project is helping to strengthen and formalize the Baystate-UMass relationship and streamline future research.
Dr. Muge Capan (Engineering), Dr. Joohyun Chung (Nursing), and Dr. Amanda Paluch’s (SPHHS Kinesiology) project, *An Exploratory Study to Identify Multi-Level Factors of Physical Health and Well-Being in the Nursing*, aims to identify the individual and social barriers to healthy habits, including physical activity (PA), in order to develop a sustainable intervention program.
Several students are assisting, including doctoral candidate Yukti Kathuria and undergraduate Lily Bigelow ’25 from the College of Engineering; Lingsong Kong, doctoral candidate in the SPHHS Kinesiology department; and Emefa Aduwudu, College of Nursing doctoral candidate.
The team notes that “nurses work in an environment that involves physically demanding tasks, irregular sleep patterns, and emotionally taxing situations. Although they spend much of their day promoting the health of their patients, they have a high prevalence of poor health behaviors, poor cardiovascular health, obesity, and diabetes.”
The team collected data between September and October 2023 from 163 participants; the study population comprised students currently enrolled in a nursing degree program at UMass Amherst. They determined that social support, education, and access to healthcare technology are effective means of reducing barriers to PA, and the study introduces the novel concept of nursing students as a potential early-intervention population for the promotion of PA.
Research is ongoing and the team continues to examine the development of sustainable health interventions for nurses.
---
Black Maternal Mobility in Western Massachusetts
Dr. Lucinda Canty and Dr. Favorite Iradukunda (Nursing), Dr. Lindiwe Sibeko, SPHHS (Nutrition), and Dr. Shannon Roberts (Engineering), are collaborating on a project entitled *Black Maternal Mobility in Western Massachusetts: The experience of transportation among Black Pregnant Women*. Black women are disproportionately affected by maternal mortality and severe maternal morbidity compared to other women in the United States, and there is a gap in knowledge about how transportation influences the experience of care during pregnancy that this project aims to fill.
William Bazile ’24 and Madison Perry ’24 from the College of Engineering and doctoral candidate Ruth Appiah-Kubi from SPHHS will be using mixed methods (interviews, ride-alongs, and secondary data analysis) to better understand the transportation needs of Black women in Western Massachusetts and to lay the foundation for future studies. The study is ongoing, and the end goal is to use this information in future grant proposals focused on developing interventions and suggesting solutions to transportation issues as they relate to accessing maternal healthcare facilities for Black pregnant women. The project is jointly funded by the Elaine Marieb Center and the Institute of Diversity Sciences.
Original painting by Dr. Lucinda Canty, RN.
Meet Our Team
**Dr. Karen Giuliano, PhD, RN, MBA, FAAN**
- Professor, Elaine Marieb College of Nursing/Institute for Applied Life Sciences (Joint Appointment)
- Co-Director, Elaine Marieb Center for Nursing and Engineering Innovation
Dr. Karen Giuliano’s research is focused on innovation in healthcare practices and products that result in the improvement of patient outcomes. She is a Professor (joint) in the Elaine Marieb College of Nursing and the Institute for Applied Life Sciences. Karen has directed the creation of IV smart pumps that improve the safety of medication administration and research that reduces (non-ventilator) hospital-acquired pneumonia.
**Dr. Frank Sup, PhD**
- Professor, Department of Mechanical and Industrial Engineering
- Co-Director, Elaine Marieb Center for Nursing and Engineering Innovation
Dr. Frank Sup draws on his background in mechanical engineering to develop patient-focused solutions for medical challenges in physical movement and rehabilitation. Sup is a strong advocate for the interdisciplinary nurse-engineer approach to healthcare innovation. He is a Professor in the Department of Mechanical and Industrial Engineering. He also directs the Mechatronics and Robotics Research Lab, which is focused on human-centered robotics and advanced design and control structures and methodologies.
**Dr. Cynthia Jacelon, PhD, RN-BC, CRRN, FGSA, FAAN**
- Acting Co-Director, Elaine Marieb Center for Nursing and Engineering Innovation
- Professor Emerita, Elaine Marieb College of Nursing
Dr. Cynthia Jacelon has been a rehabilitation clinical nurse specialist focused on promoting function in older individuals. A Fellow of both the American Academy of Nursing and the Gerontological Society of America, she has held numerous leadership roles within the profession, including President of the Association of Rehabilitation Nurses and, at the UMass College of Nursing, Director of the PhD program, Executive Dean, and Associate Dean of Research. Cynthia is currently the Acting Elaine Marieb Center Co-Director.
**Karen Shultz Battistoni, MA**
- Associate Director, Elaine Marieb Center for Nursing and Engineering Innovation
Karen Shultz Battistoni joined the Center in July 2023 after working 6 years in UMass Amherst Alumni Relations. Continuing her work at UMass in this new role, she enjoys using her project management, team development, and event planning experience to help support center growth. She earned her BA in Communication from Marist College and her MA in College Student Personnel from Bowling Green State University.
**Genevieve Sawyer, BA, AOS**
- Center Coordinator, Elaine Marieb Center for Nursing and Engineering Innovation
Genevieve Sawyer is a freelance food writer with a BA in Sociology who appreciates innovative approaches to nursing and engineering. Previously the Academic Department Coordinator for Amherst College’s Film and Media Studies (FAMS) and Grants programs, Genevieve now works behind the scenes as Center Coordinator to keep the Center on track in all of its processes. She is glad to see nurses take on a more critical and collaborative role in research and product development with their engineering colleagues.
**Dr. Carrie-Ellen Briere, PhD, RN, CLC**
- Assistant Professor, Elaine Marieb College of Nursing
Dr. Carrie-Ellen Briere is a neonatal nurse who focuses on the biology of human milk and its involvement with infant health, growth, and development. The Briere Human Milk Research Laboratory analyzes bioactive components of human milk and seeks to understand how they interact within a biological systems perspective in order to improve infant health. Dr. Briere's research includes the investigation of milk delivery, especially with regard to preterm and ill neonates.
**Affiliated Faculty**
**Dr. Hari Balasubramanian, PhD**
- Associate Professor, Department of Mechanical & Industrial Engineering
Dr. Hari Balasubramanian is a recipient of the National Science Foundation's CAREER award. He has collaborated with Massachusetts General Hospital, Baystate Medical Center of Springfield, MA, and the University of Massachusetts Medical School of Worcester, MA. His research includes mathematical modeling applied to healthcare in order to improve patient flow and reduce patient delays in outpatient, inpatient, and emergency room settings.
**Dr. Lucinda Canty, PhD, CNM, FACNM, FAAN**
- Associate Professor, Elaine Marieb College of Nursing
- Director of Seedworks Health Equity Program
Dr. Lucinda Canty is a certified nurse-midwife, Associate Professor of Nursing, and Director of Seedworks Health Equity in Nursing Program at the University of Massachusetts Amherst. She received her BSN from Columbia University, her MSN from Yale University (specializing in nurse-midwifery), and her Ph.D. from the University of Connecticut. Her research interests include the prevention of maternal mortality and severe maternal morbidity, reducing racial and ethnic health disparities in reproductive health, promoting diversity in nursing, and eliminating racism in nursing and midwifery.
**Dr. Ellen Benjamin, PhD, RN**
- Assistant Professor, University of Massachusetts Boston - Nursing
Dr. Ellen Benjamin is an assistant professor at the University of Massachusetts Boston - Nursing where she studies patient flow management and the organizing work of nurses. Dr. Benjamin has a clinical background in emergency nursing and is interested in the ways that emergency nurses prioritize patients, ration and allocate resources, and manage care trajectories. As a doctoral student, Dr. Benjamin worked as a research assistant for the EMCNEI, and she continues to explore collaborations between nursing, operations research, and human factors approaches.
**Dr. Muge Capan, PhD**
- Assistant Professor, Department of Mechanical and Industrial Engineering
Dr. Capan's research focuses on data science, statistical analysis, and decision modeling in healthcare to develop smart and connected clinical decision support systems. Examples include the development and evaluation of stochastic models to identify optimal treatment policies, utilizing clinicians’ perceptions in clinical risk display decisions, and evaluating rapid response interventions using quantitative risk scoring systems enhanced by nursing insights.
**Dr. Yossi Chait, PhD**
- Professor, Mechanical and Industrial Engineering
- Adjunct, Department of Biomedical Engineering
Dr. Chait’s research focuses on chronic kidney disease, taking an integrated approach that combines biomedical engineering, clinical nephrology, mathematical modeling, feedback systems, system identification, artificial intelligence, and cloud-based implementation to develop more effective treatments. His current clinical applications aim to individualize hypertension management using bioimpedance cardiography, accurately estimate blood volume and design an ultrafiltration profile for improved fluid management in hemodialysis, and develop optimal dosing algorithms for anemia management.
**Dr. Jeungok Choi, PhD, RN, MPH**
- Associate Professor, Elaine Marieb College of Nursing
Dr. Jeungok Choi seeks ways to improve healthcare communication for people with low literacy skills. Her research has determined that appropriate pictographs (simple line drawings) alongside simplified text can improve cognitive learning processes and enhance engagement in deeper understanding. In particular, tablet and web-based pictograph images and text have proven effective for low literacy older adults with hip replacement surgery.
**Dr. Joohyun Chung, PhD, RN, MStat**
- Assistant Professor, Elaine Marieb College of Nursing
- Interim Honors Director (Fall 2023)
Dr. Joohyun Chung’s research includes nursing informatics, machine learning, biostatistics, extensive experience with big data, and the design of nursing research itself (including research instruments). Recently, she has been working on unstructured data in nursing documentation from the electronic health record system using natural language processing.
**Instructor Tracey Cobb, MS, RN**
- Clinical Instructor, Elaine Marieb College of Nursing
Instructor Tracey Cobb has practiced in a variety of settings and institutions and has extensive experience in pediatric nursing. Her background includes adult medical/surgical, inpatient psychiatric, and pediatric care. Her areas of interest include the interplay of nursing and engineering, robotics in patient care, family and child development, and palliative care.
**Dr. Lisa Duffy, PhD, RN**
- Associate Professor, Joint Position IALS/Elaine Marieb College of Nursing
With more than 20 years of clinical expertise as a pediatric nurse practitioner at Boston Children’s Hospital, Dr. Duffy specializes in the care of chronically ill children, adolescents, and young adults. Committed to advancing healthcare through digital innovations, her research concentrates on maximizing the potential of digital health to benefit patients and their families. Dr. Duffy excels in developing online, video-based interventions that leverage AI to train healthcare providers in building therapeutic connections, ultimately aiming to improve patient outcomes in chronic condition management.
**Dr. Chaitra Gopalappa, PhD**
- Associate Professor, Department of Mechanical and Industrial Engineering
Dr. Chaitra Gopalappa’s research focuses on developing mathematical and computational models necessary to capture the interactions between multiple interrelated diseases and social determinants of health, with the goal being disease prediction, prevention, and effective control analyses. Outside collaborators include the US Centers for Disease Control and Prevention, the World Health Organization, and the International Agency for Research on Cancer.
**Dr. Xian Du, PhD**
- Assistant Professor, Department of Mechanical and Industrial Engineering
Dr. Xian Du's research focuses on the scale-up of flexible electronics printing processes from lab to industry using high-precision in-line inspection and pattern recognition technologies for large-surface quality control. He also works on automatic, high-resolution, accurate, and robust imaging tools for medical devices for noninvasive detection and description of biomarkers.
**Dr. Favorite Iradukunda, PhD, RN, MSN, FAAN**
- Assistant Professor, Elaine Marieb College of Nursing
Dr. Favorite Iradukunda is a nurse scholar who focuses on the intersection of multiculturalism, immigration, and health outcomes for African diasporic women/birthing people. Her research interests include addressing maternal health disparities through community-centered and culturally congruent interventions.
**Dr. Juan Jiménez, PhD**
- Associate Professor, Dept. of Mechanical and Industrial Engineering
- Adjunct, Department of Biomedical Engineering
Dr. Juan Jiménez is a recipient of a Graduate Education for Minorities Fellowship, a Ruth L. Kirschstein National Research Service Award, the National Institutes of Health K25 Mentored Quantitative Research Career Development award, the National Science Foundation CAREER award, and the Biomedical Engineering Society Innovation and Career Development Award. His current research focuses on experimental cardiovascular biomedicine as well as biomedical implantable devices; his past research includes turbulence studies, among them the highest-Reynolds-number wake measurements ever recorded.
**Dr. Ravi Karkar, PhD**
- Assistant Professor, Manning College of Information & Computer Sciences
Dr. Ravi Karkar’s research foci include designing, developing, and evaluating tools that enable people to gather data and interpret personal aspects of their medical condition in the context of their day-to-day lives, taking the research from lab studies into the hands of individuals in need. His work creates opportunities for individualized interventions that can be more effective and appropriate than one-size-fits-all population-based interventions. He collaborates closely with clinical researchers to build targeted tools to support patients in better understanding and managing chronic conditions.
**Dr. Raenn LeBlanc, PhD, DNP, AGPCNP-BC, CHPN**
- Seedworks Endowed Clinical Associate Professor, Elaine Marieb College of Nursing
Dr. LeBlanc is a clinical practitioner in gerontological nursing and palliative care, a researcher, and a personal caregiver. Their research includes the impact of social processes on health outcomes and equity as well as the design of equitable and accessible technologies, and they have collaborated with engineering teams that address personal health monitoring technologies. They hold the Seedworks Endowed Associate Clinical Professorship of Social Justice in Nursing at the Elaine Marieb College of Nursing.
**Dr. Yeonsik Noh, PhD**
- Assistant Professor (joint), Department of Electrical and Computer Engineering & Elaine Marieb College of Nursing
- Adjunct, Department of Biomedical Engineering
Dr. Yeonsik Noh’s research utilizes the Nurse-Engineer approach to proper disease and symptom management and therapy, including the development of wearable health monitoring devices and personalized healthcare in daily life. His latest research focuses on the development of underwater biometric devices using polymer electrodes as the basis for a body sensor network; this research will contribute to monitoring and analyzing bio-related parameters during aquatic activity.
**Dr. Amanda Paluch, PhD**
- Assistant Professor, Kinesiology, School of Public Health and Health Sciences
Dr. Paluch is a physical activity epidemiologist and kinesiologist with a focus on the translational application of technology to monitor and promote health in clinical care and public health. She applies technology to identify and understand the benefits of physical activity in the setting of observational epidemiologic studies and as a tool for interventions. Her research targets adult populations and the prevention of chronic disease.
**Dr. Shannon Roberts, PhD**
- Associate Professor, Department of Mechanical & Industrial Engineering
Dr. Shannon Roberts is a trained Human Factors engineer and Co-director of the Human Performance Laboratory with expertise in studying and evaluating the interaction between humans and systems within the domain of transportation safety. Her research is focused on three areas: studying and improving young drivers’ behavior, developing feedback and warning systems to improve driving behavior, and examining how advanced technology (e.g., driving automation systems) alters driver behavior.
**Dr. Lindiwe Sibeko, PhD**
- Associate Professor, Department Chair of Nutrition, School of Public Health and Health Sciences
Dr. Lindiwe Sibeko is an Associate Professor of Nutrition in the Department of Nutrition, School of Public Health and Health Sciences. Her maternal and child health research is focused on improving survival and well-being of Black mother-infant dyads by tackling breastfeeding inequities in local communities. Through community engagement, Dr. Sibeko seeks to tackle structural racism, social and systemic dynamics identified as significant barriers to successful breastfeeding among Black mothers. She uses participatory methodologies to test culturally specific interventions supportive of Black breastfeeding; a health behavior that can mitigate early mother/infant risk factors, and improve health outcomes of Black infants, mothers, and young children.
**Dr. Govind Srimathveeravalli, PhD**
- Assistant Professor, Department of Mechanical and Industrial Engineering
Dr. Govind Srimathveeravalli’s research group studies multi-scale biological response to electromagnetic fields, leveraging knowledge gained for applications in tumor ablation, immunotherapy, tissue engineering, and drug delivery. His group’s work has yielded two patents and 50 peer-reviewed publications and is supported by grants from the National Institutes of Health, the Department of Defense Congressionally Directed Medical Research Programs, and others. Dr. Srimathveeravalli has served as a consultant for multiple medical device companies.
**Postdoctoral Researchers**
**Dr. Jeannine Blake, PhD, RN**
- EMCNEI Postdoctoral Fellow
Dr. Jeannine Blake is a nurse scientist with a background in biochemistry and critical care nursing. She received her PhD in 2022 from the Elaine Marieb College of Nursing at the University of Massachusetts Amherst and is now a Post-Doctoral Research fellow in the Elaine Marieb Center for Nursing and Engineering Innovation and the Department of Mechanical and Industrial Engineering. The mission of Jeannine’s work is to improve patient outcomes and nursing workflow by disrupting the status quo using innovative research methods and device development strategies. Jeannine’s program of research focuses on the safety of intravenous smart pumps through study of fluid flow accuracy and usability.
**Graduate Students**
**Kourosh Alimohammadbeik, RN, MS**
- Graduate Researcher
Kourosh Alimohammadbeik, RN, is a doctoral student at the Elaine Marieb College of Nursing, and he joined the Elaine Marieb Center for Nursing and Engineering Innovation as a research assistant in 2022. His projects include a remote home healthcare monitoring platform based on cloud native architecture and assessing sleep quality in dyads in which one member has dementia or mild cognitive impairment. He enjoys bridging the nursing and engineering disciplines to provide better care to patients.
**Gina Georgadarellis, MS**
- Graduate Researcher
Gina Georgadarellis is a doctoral candidate at the Department of Mechanical and Industrial Engineering at the University of Massachusetts Amherst. She joined the Mechatronics and Robotics Research Lab in 2021 and is working with the Elaine Marieb Center for Nursing and Engineering Innovation. Her project focuses on the usability and perception of robotic technology within the clinical setting.
**Seonhun (Hoon) Lee, MS**
- Graduate Researcher
Seonhun is a doctoral candidate in Mechanical and Industrial Engineering. Specializing in robotics research at the Center, Hoon explores what robots in hospitals will be able to accomplish as nurse assistants. He has found that collaborating with nurses gives him a fresh perspective on how to design robot functionality.
**Brenda Nyarko, MS**
- Graduate Researcher
Brenda Abena Nyarko is a PhD student at the Elaine Marieb College of Nursing. She joined the Center for Nursing and Engineering Innovation in 2023 and is currently working as a research assistant. Her work revolves around the fascinating intersection of nursing and engineering.
**Vidya Sharma, MS**
- Social Media Program Assistant
Vidya graduated from UMass Amherst with an MS in Business Analytics and Data Science. As an EMCNEI Program Assistant, she was a part of the social media team and assisted with the creation of posts for multiple platforms and the production and assessment of analytics.
**Nicole Anderson, RN, MBA**
- EMCNEI Business Innovation Fellow
With an MBA from the UMass Isenberg School of Business, an extensive clinical background in critical care nursing, and experience in medical product development, Nicole is passionate about innovation to improve patient outcomes, workflow efficiency, and nursing satisfaction. She has a deep interest in nurse-engineer collaboration and was a Business Innovation Fellow at the Center throughout 2022.
**Anushree Patil '24**
- Core Summer Intern '22
An undergraduate in the Department of Electrical Engineering at the College of Engineering, Anushree worked on two Center initiatives: creating a better bedpan for patients and working with robotics in the hospital setting, including wound image capture and robotic oral care for patients. As an engineering student, she found that working with nurses allowed her to achieve her research and design goals.
**Ruchi Gupta '24**
- Core Intern '23/ Program Assistant
As a core intern, Ruchi found patterns and trends within large databases of medical and hospital data and wrote machine learning code to design algorithms to make predictive analysis. She worked on an IV smart pump project, in which she analyzed participant feedback data, and a project dealing with setting thresholds on ICU bedside alarms, helped create workshops for the Girls Inc. Eureka summer STEM program, and has assisted the Center with events and operations.
**Undergraduate Students**
**Valerie Casimir '25**
- Core Summer Intern '22/Program Assistant
An undergraduate student at the College of Nursing, Valerie worked on the IV Smart Pump project to develop new safety standards and make IV Pumps more effective for end-user nurses. She found that interdisciplinary work with engineers demonstrated the creative aspect of product development while broadening her collaboration skills, allowing her to see things from a new vantage point. She continues to work for the Center in her capacity as a program assistant, helping with communications and research.
**Joseph Berthiaume '23**
- Program Assistant
Joe graduated with a BA in Communications. He assisted with the organization of the Annual Center Symposium of 2022 and created the first annual report. Joe was instrumental in building the official website, kickstarted the development of a database of external stakeholders (including area university nursing and engineering contacts) and then continued to provide communications support to the Center until graduation.
**Braedon Feddersen '23**
- Core Summer Intern '22
An undergraduate student at the College of Engineering, Braedon worked on the IV Smart Pump to improve safety standards and make IV Pumps more usable and effective for end-user nurses. He found that nurses’ perspectives have allowed him to think outside the typical engineering framework.
**Danielle Orlando '23**
- Social Media Program Assistant
Danni graduated with a BA in Business Administration from the Isenberg School of Management with a concentration in Sustainable Business Practices and a minor in Environmental Science. As an EMCNEI Program Assistant, she was a part of the social media team and assisted with the creation of posts for multiple platforms and the production and assessment of analytics.
**Mackenzie Kelly '23**
- Social Media Program Assistant
Mackenzie graduated with a BA in Legal Studies from the Department of Political Science. As an EMCNEI Program Assistant, she was a part of the social media team and assisted with the creation of posts for multiple platforms and the production and assessment of analytics.
**Ben Shih '25**
- Core Intern '23/ Program Assistant
Ben is a Nursing '25 major with a Theater minor. Some of the projects Ben worked on this summer as a Core Intern included collecting data from testing an IV smart pump for accuracy, designing a chest tube drain holder using CAD software, and creating workshops for the Girls Inc. Eureka! summer STEM program. Ben enjoyed getting to collaborate with his fellow interns and combine his background in nursing with their backgrounds in engineering. He continues to assist with Center operations and research.
Alex is a Biomedical Engineering ‘24 major. As a Center Core intern this summer, Alex worked on an IV smart pump project where she analyzed eye-tracking data; she later moved on to the robotics portion of the project, which included 3D printing and design. Alex shared a weekly presentation on the status of the project with her colleagues, and she also worked on creating workshops for the Girls Inc. Eureka! summer STEM program.
Asmita is a Biomedical Engineering ‘25 major with an Information Technology minor. Every day looked different for Asmita this summer; her projects included creating workshops for the Girls Inc. Eureka! summer STEM program, collecting data on how different medicines go through IV tubing for an IV smart pump project, and creating a cage for a chest tube drainage unit. She got to work with her fellow interns and approach projects from both a nursing and biomedical engineering perspective.
Jenny is a Nursing ‘25 major. Jenny’s day typically started with a short meeting amongst her colleagues to discuss their current projects. From there, she would typically work on some literature review and data analysis, or maybe meet with nurses to hear their concerns about certain technology and get started with research and design to try to solve the problem. Jenny found it very valuable to combine her nursing major with the engineering process.
Bevin attends the University of Hartford as she works toward a BS in Health Sciences, and she assisted with Center Research projects throughout the summer.
David attends Amherst-Pelham Regional High School in Amherst, MA and assisted with Center Research projects throughout the summer.
Marco Vital attends Pope Francis Prep School in Springfield, MA and assisted with Center Research projects throughout the summer.
Blake, J., Butterfield, R., & Giuliano, K. (2021). Clinical implications of IV extension tubing with titratable medications. AACN Advanced Critical Care, 32(2), 153-155.
Blake, J. W., Giuliano, K. K., Butterfield, R. D., Vanderveen, T., & Sims, N. M. (2021). Extending tubing to place intravenous smart pumps outside of patient rooms during COVID-19: An innovation that increases medication dead volume and risk to patients. BMJ Innovations, 7(2).
Choi, J., Cody, J. L., & Fiske, S. (2021). Usability testing of tablet-based cognitive behavioral intervention application to improve a simple walking activity for older adults with arthritis fatigue. Geriatric Nursing, 42(2), 473-478.
Fitzgerald, L. F., Bartlett, M. F., Nagarajan, R., Francisco, E. J., Sup IV, F. C., & Kent, J. A. (2021). Effects of old age and contraction mode on knee extensor muscle ATP flux and metabolic economy in vivo. Journal of Physiology, 599(12), 3063-3080.
Giuliano, K. K., & Blake, J.* (2021) Medication safety at the frontlines: Nurse and pharmacist knowledge of secondary medication administration. Biomedical Instrumentation & Technology, 55(1): 51-58. *Winner of BI&T 2021 Research Paper Publication Award…
Giuliano, K. K., & Blake, J. W. (2021). Nurse and pharmacist knowledge of intravenous smart pump system setup requirements. Biomedical Instrumentation & Technology, 55(1), 51-58.
Giuliano, K. K., Blake, J. W., & Butterfield, R. (2021). Secondary medication administration and IV smart pump setup. AJN The American Journal of Nursing, 121(8), 46-50.
Giuliano, K. K., Penoyer, D., Mahuren, R. S., & Bennett, M. (2021). Intravenous smart pumps during actual clinical use: a descriptive comparison of primary and secondary infusion practices. Journal of Infusion Nursing, 44(3), 128.
Giuliano, K. K., Penoyer, D., Middleton, A., & Baker, D. (2021). Original research: Oral care as prevention for nonventilator hospital-acquired pneumonia: A four-unit cluster randomized study. AJN, American Journal of Nursing 121(6), 24-33, June 2021.
Gregory, D. L., Sup IV, F. C., & Choi, J. T. (2021). Contributions of spatial and temporal control of step length symmetry in the transfer of locomotor adaptation from a motorized to a non-motorized split-belt treadmill. Royal Society Open Science, 8(2),….
Labropoulos, N., Giuliano, K. K., Tafur, A. J., & Caprini, J. A. (2021). Comparison of a nonpneumatic device to four currently available intermittent pneumatic compression devices on common femoral blood flow dynamics. Journal of Vascular Surgery: Venous...
Munro, S. C., Baker, D., Giuliano, K. K., Sullivan, S. C., Haber, J., Jones, B. E., Crist, M., Nelson R., Carey, E., Lounsbury, O., Lucatorto, M., Miller, R., Pauley, B., & Klompas, M. (2021). Nonventilator hospital-acquired pneumonia: A call to action: Recommendations...
Pryor, L., Giuliano, K. K., & Gallagher, S. (2021). Creating a culture of worker safety: Evidence-based safe mobility in the ICU. Association of Occupational Health Professionals in Healthcare (AOHP) Journal.
Francisco, E. J., Boyer, K. A., & Sup IV, F. C. (2021) Clutch-based quasi-passive knee brace to reduce tibio-femoral contact forces. American Society of Biomechanics Annual Conference.
Giuliano, K. K., & Baker, D. (2022). Best practices for cardiac monitoring during neonatal resuscitation. Journal of Neonatal Nursing.
Giuliano, K. K., Baker, D., Thakkar-Samtani, M., Glick, M., Restrepo, M. I., Scannapieco, F. A., ... Frantsve-Hawley, J. (2022). Incidence, mortality, and cost trends in nonventilator hospital-acquired pneumonia in medicaid beneficiaries, 2015-2019. American Journal of...
Giuliano, K. K., Blake, J. W., Bittner, N. P., Gamez, V., & Butterfield, R. (2022). Intravenous smart pumps at the point of care: A descriptive, observational study. Journal of Patient Safety, 18(6), 553-558.
Giuliano, K. K., & Landsman, K. (2022). Collaborative nurse–engineer product innovation. AJN The American Journal of Nursing, 122(7), 59-61...
Giuliano, K. K., & Landsman, K. (2022). Health care innovation: Embracing the nurse–engineer partnership. AJN The American Journal of Nursing, 122(3), 55–56.
Giuliano, K. K., Sup, F. C., IV, Benjamin, E., & Krishnamurty, S. (2022). INNOVATE: Preparing nurses to be healthcare innovation leaders. Nursing Administration Quarterly, 46(3), 255-265.
Modarres-Sadeghi, Y., Carleton, A., Patel, U., Bose, R., & Sup, F. (2022). Emulating a human walking gait in a double pendulum interacting with the incoming vortices. Bulletin of the American Physical Society.
Penoyer, D., Giuliano, K., & Middleton, A. (2022). Comparison of safety and usability between peristaltic and pneumatic large-volume intravenous smart pumps during actual clinical use. BMJ Innovations, 8(2).
Scannapieco, F. A., Giuliano, K. K., & Baker, D. (2022). Oral health status and the etiology and prevention of nonventilator hospital-associated pneumonia. Periodontology 2000, 89(1), 51-58.
Wedge, R. D., Sup IV, F. C., & Umberger, B. R. (2022). Metabolic cost of transport and stance time asymmetry in individuals with unilateral transfemoral amputation using a passive prostheses while walking. Clinical Biomechanics, 94, 105632...
2023
Baker, D., Giuliano, K., Desmarais, M., Worzala, C., Cloke, A., & Zawistowich, L. (2023). Impact of hospital-acquired pneumonia on the Medicare program. Infection Control & Hospital Epidemiology, 1-6.
Benjamin, E., & Giuliano, K. K. (2023). Improving bruise detection in patients with dark skin tone. AJN, American Journal of Nursing, 123(7), 46-47.
Benjamin, E. (2023). A grounded theory of patient flow management within the emergency department. Doctoral dissertation. Elaine Marieb College of Nursing, University of Massachusetts Amherst.
Giuliano, K. K., Baker, D., & Benjamin, E. (2023). Preventing non-ventilator hospital-acquired pneumonia: A survey of medical-surgical nurses. Medsurg Nursing, 32(2), 118-133.
Giuliano, K. K., Bilkovski, R. N., Beard, J., & Lamminmäki, S. (2023). Comparative analysis of signal accuracy of three SpO2 monitors during motion and low perfusion conditions. Journal of Clinical Monitoring and Computing, 1-11.
Giuliano, K. K., Mahuren, R. S., & Balyeat, J. (2023). Data-based program management of system-wide IV smart pump integration. American Journal of Health-System Pharmacy, 81(1), e30-e36.
Kainec, K. A., Spencer, R. M., Benjamin, E., Searles, M. E., & Giuliano, K. K. (2023). Maintaining sleep while improving overnight mobility and comfort with a novel lower limb external mechanical compression system. Human Factors in Healthcare, 3, 100043.
Koker, E., Balasubramanian, H., Castonguay, R., Bottali, A., & Truchil, A. (2023). Estimating the workload of a multi-disciplinary care team using patient-level encounter histories. Health Systems, 1-21. https://www.tandfonline.com/doi/abs/10.1080/20476965.2023.2215848
Landsman, K., & Giuliano, K. K. (2023). Nurse–engineer partnerships in academia. AJN The American Journal of Nursing, 123(3), 44-46.
Vollman, K., Black, J., & Giuliano, K. (2023). Early and progressive mobility in the ICU: A balanced approach to improve outcomes for both patients and staff. Journal of Nursing Care Quality, 39(1), 7-9.
Blake, J., & Jimenez, J. Intravenous Smart Pumps: The Technology Behind the Medication Flow. AAMI Exchange 2023, Long Beach CA.
Blake, J. W. C., & Giuliano, K. (2023, April). Peripherally Inserted Venous Access Device Use and Intravenous Smart Pump Accuracy. Clinical Nursing Research Conference, Cleveland, OH.
Blake, J. W. C., & Giuliano, K. (2023, April). IV Smart Pump Accuracy: A COVID-19 Practice Assessment. Infusion Nursing Society Convention, Boston, MA.
Canty, L. Using Research to develop community initiatives to address maternal health. Health Equity Seminar. University of Massachusetts Amherst Institute of Diversity Sciences. Sept. 21, 2023.
Canty, L. Addressing the Maternal Health Crisis Through Nursing Research and Education. Keynote address. 2nd Annual Gunter-Gooding DEI Lecture. Pennsylvania State University. September 25, 2023.
Canty, L. Why EVERY Nurse Should Engage in Nursing Research. Keynote address. 14th Annual Nursing Research Evidence-Based Practice Symposium. So. Burlington, VT. October 27, 2023.
Canty, L. Humanities Fine Arts to Host Interdisciplinary Lightning Talks. Presented with Charmaine A. Nelson (Department of History of Art and Architecture, HFA); Favorite Iradukunda (Nursing, EMCN); Lindiwe Sibeko (Department of Nutrition, SPHHS). “Maternity in Crisis: Redressing the Histories and Present of Black Pregnancy and Childbirth.” December 7, 2023.
Giuliano, K., & Baker, D. (2023). Incidence, Mortality and Cost of Non-Ventilator Hospital-Acquired Pneumonia in Medicaid Beneficiaries, 2015-2019. Eastern Nursing Research Society. March 23, Philadelphia, PA.
Giuliano, K., & Baker, D. (2023). Preventing Non-Ventilator Hospital Acquired Pneumonia: What’s New from SHEA/IDSA/APIC Practice Recommendations. AACN National Teaching Institute, May 24, 2023.
Giuliano, K., Laine, T., Pekarske, M., Lucchetti, M., & Beard, J. W. Byndr™ Communication Protocol Provides Reliable Wireless Physiologic Data Flow. 2023 MIT IEEE Undergraduate Research Technology Conference, Boston MA, October 6, 2023.
Giuliano, K., & Vital, C. (2023). Academic-Clinical Practice Partnerships: Engaging Clinical Nurses in Research and Healthcare Innovation. ANCC Magnet Research Conference, October 11, Chicago, IL.
Alimohammadbeik, K., Noh, Y., Chung, J., & Jacelon, C. (2024, April 4-5). Feasibility of a cloud-based home healthcare monitoring platform [Poster session]. Eastern Nursing Research Society, Boston, MA, United States.
Aparicio, J. (2024). Soft Magnetic Sensing on a Compliant Support Surface and Contact Mechanics Approximations at the Interface. Doctoral dissertation. Dept. of Mechanical and Industrial Engineering, University of Massachusetts Amherst.
Shen, M., Choi, W. J., & Choi, J. (2024). Usability, Acceptability, and Feasibility of Tablet-Based Lifestyle Modification Intervention for Chinese Older Adults with Osteoarthritis: A Mixed-Methods Study. 36th Annual ENRS Scientific Sessions. Trailblazing Innovative Models of Care in Population Health through Nursing Science Boston, MA. April 4-5, 2024.
Core Summer Intern Ben Shih '25 demonstrates patient care to Eureka! Scholars in the Elaine Marieb College of Nursing Simulation Lab.
Support the Center:
minutefund.uma-foundation.org/project/41101
SAP Speaks PDDL: Exploiting a Software-Engineering Model for Planning in Business Process Management
Joerg Hoffmann, Ingo Weber, Frank Kraft
To cite this version:
Joerg Hoffmann, Ingo Weber, Frank Kraft. SAP Speaks PDDL: Exploiting a Software-Engineering Model for Planning in Business Process Management. Journal of Artificial Intelligence Research, 2012, 44, pp. 587-632. http://www.jair.org/papers/paper3636.html. DOI: 10.1613/jair.3636. HAL ID: hal-00765034.
SAP Speaks PDDL: Exploiting a Software-Engineering Model for Planning in Business Process Management
Jörg Hoffmann
Saarland University, Saarbrücken, Germany
Ingo Weber
NICTA, Sydney, Australia
Frank Michael Kraft
bpmnforum.net, Germany
Abstract
Planning is concerned with the automated solution of action sequencing problems described in declarative languages giving the action preconditions and effects. One important application area for such technology is the creation of new processes in Business Process Management (BPM), which is essential in an ever more dynamic business environment. A major obstacle for the application of Planning in this area lies in the modeling. Obtaining a suitable model to plan with – ideally a description in PDDL, the most commonly used planning language – is often prohibitively complicated and/or costly. Our core observation in this work is that this problem can be ameliorated by leveraging synergies with model-based software development. Our application at SAP, one of the leading vendors of enterprise software, demonstrates that even one-to-one model re-use is possible.
The model in question is called Status and Action Management (SAM). It describes the behavior of Business Objects (BO), i.e., large-scale data structures, at a level of abstraction corresponding to the language of business experts. SAM covers more than 400 kinds of BOs, each of which is described in terms of a set of status variables and how their values are required for, and affected by, processing steps (actions) that are atomic from a business perspective. SAM was developed by SAP as part of a major model-based software engineering effort. We show herein that one can use this same model for planning, thus obtaining a BPM planning application that incurs no modeling overhead at all.
We compile SAM into a variant of PDDL, and adapt an off-the-shelf planner to solve this kind of problem. Thanks to the resulting technology, business experts may create new processes simply by specifying the desired behavior in terms of status variable value changes: effectively, by describing the process in their own language.
1. Introduction
Business processes are workflows controlling the flow of activities within and between enterprises (Aalst, 1997). Business process management (BPM) is concerned, amongst other things, with the maintenance of these processes. To minimize time-to-market in an ever more dynamic business environment, it is essential to be able to quickly create new processes. Doing so involves selecting and arranging suitable IT transactions from huge IT infrastructures. That is a very difficult and costly task. Our application supports this task within the software framework of SAP\textsuperscript{1}, one of the leading vendors of enterprise software.
A well-known idea in this context, discussed for example by Jonathan, Moore, Stader, Macintosh, and Chung (1999), Biundo, Aylett, Beetz, Borrajo, Cesta, Grant, McCluskey, Milani, and Verfaillie (2003), and Rodriguez-Moreno, Borrajo, Cesta, and Oddi (2007), is to use technology from the field of \textit{planning}. This is a long-standing sub-area of AI that allows the user to describe the problem to be solved in a declarative language. In a nutshell, planning problems come in the form of an initial state, a goal, and a set of actions, all formulated relative to a set of (typically Boolean or at least finite-domain) state variables. A solution (or “plan”) is a schedule of actions transforming the initial state into a state that satisfies the goal. The planning technology solves (in principle) any problem described in that language. By far the most wide-spread planning language is the \textit{planning domain definition language (PDDL)} (McDermott, Ghallab, Howe, Knoblock, Ram, Veloso, Weld, & Wilkins, 1998).\textsuperscript{2}
The idea in the BPM context is to annotate each IT transaction with a planning-like description formalizing it as an action. This enables planning systems to compose (parts or approximations of) the desired processes fully automatically, i.e., based on minimal user input specifying from where the process will start (initial state), and what it should achieve (goal). Very closely related ideas have been explored under the name \textit{semantic web service composition} in the context of the Semantic Web community (e.g., Narayanan & McIlraith, 2002; Agarwal, Chafle, Dasgupta, Karnik, Kumar, Mittal, & Srivastava, 2005; Sirin, Parsia, & Hendler, 2006; Meyer & Weske, 2006).
Runtime performance is important in such an application. Typically, the user – a business expert wishing to create a new process – will be waiting online for the planning outcome. However the most mission-critical question, discussed for example by Kambhampati (2007) and Rodriguez-Moreno et al. (2007), is: \textit{How to get the planning model?} To be useful, the model needs to capture the relevant properties of a huge IT infrastructure, at a level of abstraction that is high-level enough to be usable for business experts, and at the same time precise enough to be relevant at IT level. Designing such a model is so costly that one will need good arguments indeed to persuade a manager to embark on that endeavor.
In the present work, we demonstrate that this problem can be ameliorated by leveraging synergies with model-based software development, thus reducing the additional modeling overhead caused by planning. In fact, we show that one can – at least in our particular application – \textit{re-use exactly, one-to-one, models that were built for the purpose of software engineering, and thus reduce the modeling overhead to zero}.
It has previously been noted, for example by Turner and McCluskey (1994) and Kitchin, McCluskey, and West (2005), that planning languages have commonalities with software specification languages such as B (Schneider, 2001) and OCL (Object Management Group, 2006). Now, typically such specification languages are mathematically oriented to describe
\textsuperscript{1} \url{http://www.sap.com}
\textsuperscript{2} There are many variants of planning, and of PDDL. All share concepts similar to the short description we just stated. However, that description corresponds best to “classical planning”, where (amongst other things) there is no uncertainty about the action effects. We will discuss some details in Section 2.1. Throughout the paper, unless we refer to one of the particular planning formalisms defined herein, we use the term “planning” in a general sense not targeting any particular variant.
low-level properties of programs. This stands in contrast with the more abstract models needed to work with business experts. But that is not always so.
As part of a major effort developing a flexible service-oriented (Krafzig, Banke, & Slama, 2005; Bell, 2008) IT infrastructure, called SAP Business ByDesign, SAP has developed a model called Status and Action Management (SAM). SAM describes how “status variables” of Business Objects (BO) change their values when “actions” – IT transactions affecting the BOs – are executed. BOs in full detail are vastly complex, containing 1000s of data fields and numerous technical-level transactions. SAM captures the more abstract business perspective, in terms of a smaller number of user-level actions (like “submit” or “reject”), whose behavior is described using preconditions and effects on high-level status properties (like “submitted” or “rejected”). In this way, SAM corresponds to the language of business users, and is in very close correspondence with common planning languages. SAM is extensive, covering 404 kinds of BOs with 2418 transactions. The model creation in itself constitutes a work effort spanning several years, involving, amongst other things, dedicated modeling environments and educational training for modelers.
SAM was originally designed for the purpose of model-driven software development, to facilitate the design of the Business ByDesign infrastructure, and changes thereunto during its initial development and afterwards. Business ByDesign covers the needs of a great breadth of different SAP customer businesses, and is flexibly configurable for these customers. That configuration involves, amongst other things, the design of customer-specific processes, appropriately combining the functionalities provided. Describing the properties of individual processing steps, rather than supplying each BO with a standard lifecycle workflow, SAM is well-suited to support this flexibility. However, the business users designing the processes are typically not familiar with the details of the infrastructure. Using SAM for planning, we obtain technology that alleviates this problem. As its output, our technology delivers a first version of the desired process, with the relevant IT transactions and a suitable control-flow. As its input, the technology requires business users only to specify the desired status changes – in their own language.
The intended meaning of SAM is, to a large extent, the same as in common planning frameworks. There are some subtleties in the treatment of non-deterministic actions. One problem is that many of the non-deterministic actions modeled in SAM have “bad” outcomes that preclude successful processing of the respective business object (example: “BO data inconsistent”). That problem is aggravated by the fact that, in SAM’s “non-determinism”, repeated executions of the same action are not independent (example: “check BO consistency”). We discuss this in detail, and derive a suitable planning formalism. We compile SAM into PDDL, thus creating as a side-effect of our work a new planning benchmark. An anonymized PDDL version of SAM is publicly available.
On the algorithmic side, we show that minimal changes to an off-the-shelf planner suffice to obtain good empirical performance. We adapt the well-known deterministic planning system FF (Hoffmann & Nebel, 2001) to perform a suitable variant of AO* (Nilsson, 1969, 1971). We run its heuristic function on non-deterministic actions simply by acting as if we could choose the outcome, i.e., by applying the “all-outcomes determinization” (Yoon, Fern, & Givan, 2007). We run large-scale experiments with this modified FF, on the full SAM model as used in SAP. We show that runtime performance is satisfactory in the vast majority of cases; we point out the remaining challenges.
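The "all-outcomes determinization" used for the heuristic can be sketched in a few lines: each non-deterministic action is split into one deterministic action per outcome, as if the planner could choose which outcome occurs. The tuple representation below is hypothetical, invented for illustration, and not Contingent-FF's actual internal format.

```python
# Illustrative all-outcomes determinization. A non-deterministic action
# is (name, precondition, [outcome_1, ..., outcome_n]); each outcome
# becomes its own deterministic action with the same precondition.

def all_outcomes_determinize(actions):
    determinized = []
    for name, pre, outcomes in actions:
        for i, eff in enumerate(outcomes):
            determinized.append((f"{name}__o{i}", pre, eff))
    return determinized

# Invented example: a consistency check with two possible outcomes.
nondet = [("check-consistency", {"status": "created"},
           [{"consistent": "yes"}, {"consistent": "no"}])]
for action in all_outcomes_determinize(nondet):
    print(action)
```

A classical heuristic, such as FF's relaxed-plan heuristic, can then be run unchanged on the determinized action set.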
We have also integrated our planning technology into two BPM process modeling environments, making the planning functionality conveniently accessible for non-IT users. Processes (and plans) in these environments are displayed in a human-readable format. Users can specify status variable values, for example the planning goal, in simple intuitive drop-down menus. One of the environments is integrated as a research extension into the commercial SAP NetWeaver platform. Having said that, our technology is not yet part of an actual SAP product; we will discuss this in Section 7.
The treatment of non-deterministic actions, in our formalism and algorithms, is specific to our application context. This notwithstanding, it is plausible that these techniques could be useful also in other applications dealing with such actions. From a more general perspective, the contribution of our work is (A) pointing out that it is possible to leverage software-engineering models for planning, and (B) demonstrating that such an application can be realized at one of the major players in the BPM industry, thus providing a large-scale case study. The principle underlying SAM – modeling software artifacts at a level of abstraction corresponding to business users – is not limited to SAP. Thus our work may inspire similar approaches in related contexts.
We next give a brief background on planning and BPM. We then discuss the SAM model in Section 3, explaining its structure, its context at SAP, and the added value of using it for planning. We design our planning formalization in Section 4, explain our planning algorithms in Section 5, and evaluate these experimentally in Section 6. Section 7 describes our prototypes at SAP. Section 8 discusses related work, and Section 9 concludes.
2. Background
We introduce the basic concepts relevant to our work. We start with planning, then overview business process management (BPM) and its connection to planning.
2.1 Planning
There are many variants of planning (for an overview, see Traverso, Ghallab, & Nau, 2005). To handle SAM, we build on a wide-spread classical planning framework, planning with finite-domain variables (e.g., Bäckström & Nebel, 1995; Helmert, 2006, 2009). We will extend that framework with a particular kind of “non-deterministic” actions, whose semantics relates to notions from planning under uncertainty that we outline below.
**Definition 1 (Planning Task)** A finite-domain planning task is a tuple \((X, A, I, G)\). \(X\) is a set of variables; each \(x \in X\) is associated with a finite domain \(\text{dom}(x)\). \(A\) is a set of actions, where each \(a \in A\) takes the form \((\text{pre}_a, \text{eff}_a)\) with \(\text{pre}_a\) (the precondition) and \(\text{eff}_a\) (the effect) each being a partial variable assignment. \(I\) is a variable assignment representing the initial state, and \(G\) is a partial variable assignment representing the goal.
A fact is a statement \(x = c\) where \(x \in X\) and \(c \in \text{dom}(x)\). We identify partial variable assignments with conjunctions (sets, sometimes) of facts in the obvious way. A state \(s\) is a complete variable assignment. An action \(a\) is applicable in \(s\) iff \(s \models \text{pre}_a\). If \(f\) is a partial variable assignment, then \(s \oplus f\) is the variable assignment that coincides with \(f\) on each variable where \(f\) is defined, and that coincides with \(s\) on the variables where \(f\) is undefined.
**Definition 2 (Plan)** Let \((X, A, I, G)\) be a finite-domain planning task. Let \(s\) be a state, and let \(T\) be a sequence of actions from \(A\). We say that \(T\) is a solution for \(s\) iff either:
(i) \(T\) is empty and \(s \models G\); or
(ii) \(T = \langle a \rangle \circ T'\), \(s \models \text{pre}_a\), and \(T'\) is a solution for \(s \oplus \text{eff}_a\).
If \(T\) is a solution for \(I\), then \(T\) is called a plan.
One can, of course, define plans for finite-domain planning tasks in a simpler way; the present formulation makes it easier to extend the definition later on. We remark that, despite the simplicity of this formalism, it is PSPACE-complete to decide whether or not a plan exists (this follows directly from the results in Bylander, 1994).
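To make Definitions 1 and 2 concrete, here is a minimal, illustrative Python sketch (our own, not code from SAM or from any planner discussed here): partial variable assignments are dicts, and an action sequence is checked against a state by unrolling the recursion of Definition 2 into a loop.

```python
# Illustrative sketch of Definitions 1 and 2. All names are our own;
# (partial) variable assignments are plain dicts mapping variables to values.

def satisfies(state, partial):
    """s |= f: the state agrees with the partial assignment wherever it is defined."""
    return all(state.get(x) == c for x, c in partial.items())

def apply_effect(state, effect):
    """s (+) eff: override the state by the effect where the effect is defined."""
    new_state = dict(state)
    new_state.update(effect)
    return new_state

def is_solution(state, action_sequence, goal):
    """Definition 2, unrolled: every precondition must hold when its
    action is applied, and the final state must satisfy the goal."""
    for pre, eff in action_sequence:
        if not satisfies(state, pre):
            return False
        state = apply_effect(state, eff)
    return satisfies(state, goal)

# Toy task: one status variable, one "submit" action.
submit = ({"status": "created"}, {"status": "submitted"})
print(is_solution({"status": "created"}, [submit], {"status": "submitted"}))  # True
```

A plan in the sense of Definition 2 is then simply an action sequence T for which `is_solution(I, T, G)` returns `True`.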
Unlike in classical planning, there exist *disjunctive effects* in SAM, i.e., actions that have more than one possible outcome. This type of situation is dealt with in *planning under uncertainty*. To model SAM’s disjunctive effects appropriately, we will need a mixture of what is known as *non-deterministic actions* (e.g., Smith & Weld, 1999) and what is known as *observation actions* (e.g., Weld, Anderson, & Smith, 1998).
Non-deterministic actions \(a\) are like usual actions except that, in place of a single effect \(\text{eff}_a\), they have a set \(E_a\) of such effects, referred to as their possible *outcomes*. Whenever we apply \(a\) at plan execution time, any one of the outcomes in \(E_a\) will occur; separate applications of \(a\) are independent. For example, \(a\) might throw a dice. At plan generation time, we do not know which outcome will occur, so we must “cater for all cases”. The most straightforward framework for doing so is *conformant planning* (e.g., Smith & Weld, 1999), where the plan is still a sequence of actions, and is required to achieve the goal no matter what outcomes occur during execution. Note that this does not exploit observability, i.e., the plan does not make case distinctions based on what outcomes actually do occur. To handle SAM, we will include such case distinctions, along the lines of what is known as *contingent planning* (e.g., Weld et al., 1998). In that framework, case distinctions are made by explicit observation actions in the plan. Typically, an observation action \(a\) observes the – previously unknown – value of a particular state variable \(x\) at plan execution time (for example, the value of a dice after throwing it). The plan branches on all possible values of \(x\), i.e., \(a\) has one successor for each value in \(\text{dom}(x)\). Thus the plan is now a tree of actions, and the requirement is that the goal is fulfilled in every leaf of that tree.
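The tree-shaped plans of contingent planning can be sketched in the same style. The node encoding below ('act' and 'obs' tuples) and the consistency-check example are invented for illustration; the requirement checked is exactly the one stated above, namely that the goal is fulfilled in every leaf, for every possible observed value.

```python
# Hedged sketch of contingent-plan checking. A plan node is either
# None (a leaf), ('act', pre, eff, child), or ('obs', var, branches),
# where branches maps each possible observed value of var to a subtree.

def satisfies(state, partial):
    return all(state.get(x) == c for x, c in partial.items())

def check_tree(state, node, goal):
    if node is None:                      # leaf: the goal must hold here
        return satisfies(state, goal)
    if node[0] == 'act':                  # ordinary action node
        _, pre, eff, child = node
        if not satisfies(state, pre):
            return False
        return check_tree({**state, **eff}, child, goal)
    _, var, branches = node               # observation node: branch on all values
    return all(check_tree({**state, var: value}, child, goal)
               for value, child in branches.items())

# Invented example: observe whether a BO is consistent; repair it if not.
plan = ('obs', 'consistent',
        {'yes': ('act', {'consistent': 'yes'}, {'status': 'submitted'}, None),
         'no':  ('act', {'consistent': 'no'}, {'consistent': 'yes'},
                 ('act', {'consistent': 'yes'}, {'status': 'submitted'}, None))})
print(check_tree({'status': 'created'}, plan, {'status': 'submitted'}))  # True
```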
The most wide-spread input language for planning systems today is the *Planning Domain Definition Language (PDDL)*, as used in the international planning competitions (IPC) (McDermott et al., 1998; Bacchus, 2000; Fox & Long, 2003; Hoffmann & Edelkamp, 2005; Younes, Littman, Weissman, & Asmuth, 2005; Gerevini, Haslum, Long, Saetti, & Dimopoulos, 2009). We do not get into the details of this language, since for our purposes here PDDL is merely a particular syntax for implementing our formalisms. More important for us, regarding the usability of our PDDL encoding of SAM, is the fact that PDDL has a lot of variants, with varying degrees of support by existing planning systems. Our PDDL syntax for SAM is in the PDDL variant used in the non-deterministic tracks of the IPC, i.e., the tracks dealing with non-probabilistic planning under uncertainty (Bonet & Givan, 2006; Bryce & Buffet, 2008). Specifically, we use only the most basic PDDL constructs (often referred to as “STRIPS”), except in action preconditions where we use quantifier-free formulas (Pednault, 1989; Bacchus, 2000). This PDDL subset is supported by most existing
planners, in particular all those based on FF (Hoffmann & Nebel, 2001) or Fast Downward (Helmert, 2006). The limiting factor for planner support are the non-deterministic actions, for which we use the most common syntax, namely the “(oneof eff\textsubscript{1} … eff\textsubscript{n})” construct from the non-deterministic IPC. Non-deterministic actions are supported by only few planners. Further, the semantics we will give to plans using these actions, as fits our application based on SAM, is non-standard and not supported by existing planners. This notwithstanding, several existing approaches are closely related (cf. Section 8), and, as we show herein, at least one planner – Contingent-FF (Hoffmann & Brafman, 2005) – can be adapted quite easily and successfully to deal with the new semantics.
### 2.2 Business Process Management
According to the commonly used definition (e.g., Weske, 2007), a *business process* consists of a set of activities that are performed in coordination in an organizational and technical environment. These activities jointly realize a business goal. In other words, business processes are how enterprises do business. *Business process models* serve as an abstraction of the way enterprises do business. For example, a business process model may specify which steps are taken, by various entities across an enterprise, to send out a customer quote answering a request for quotation. The atomic steps in such a process model may be either manual steps performed by employees or automatic steps executed on the IT infrastructure. We will refer to process models simply as “processes”.
An explicit model of processes allows all sorts of support and automation, addressed in the area of *business process management* (*BPM*). Herein, we are mostly concerned with process creation and adaptation. That is done in *BPM modeling environments*. Importantly, the users of these environments will typically not be IT experts, but *business experts* – the people familiar with, and taking decisions for, the business. The dominant paradigm for representing business processes is that of *workflows*, also called *control-flows*, often formalized as Petri nets (e.g., Aalst, 1997). Such a control-flow defines an order of execution for the process steps, within certain degrees of flexibility implied, for example, by parallelism. For business experts, the control-flow is displayed in a human-readable format, typically a flow diagram. Our application at SAP uses Business Process Modeling Notation (Object Management Group, 2008), short *BPMN*, which we will illustrate in Section 7.
An alternative paradigm for representing business processes, which relates to the SAM model we consider herein, is that of *constraint-based representations* (e.g., Wainer & de Lima Bezerra, 2003; van der Aalst & Pesic, 2006; Pesic, Schonenberg, Sidorova, & van der Aalst, 2007). These model processes implicitly through their desired properties, rather than explicitly through concrete workflows. This kind of representation is more flexible, in that, by modifying the model, we can modify the entire process space. For example, we might add a new constraint “archive customer quotes only if all follow-ups have been created”. Such a representation is also more explicit about the reasons for process design, supporting human understanding. The downside is that, for actual automated process execution, a concrete control-flow design is required. One way of viewing our planning technology is that it provides the service of generating such control-flow designs for SAM.
Processes are executed on IT infrastructures, like the one provided by SAP. Such execution coordinates the individual processing steps, prompting human users as appropriate, and performing all the necessary data updating on IT level. This is realized in dedicated
process execution engines (Dumas, ter Hofstede, & van der Aalst, 2005). Clearly, the execution poses high demands on the structure of the workflow. The most basic requirement is that the atomic process steps correspond to actual steps known to the IT infrastructure.\footnote{Another important requirement is an appropriate data-flow (van der Aalst, 2003; Dumas et al., 2005), e.g., sending a manager the documents required to decide whether or not to accept a customer quote. Since, in our case, the data is encapsulated into business objects, this is not a major issue for us.}
The requirements on business processes, such as legal and financial regulations, are subject to frequent updates. The people responsible for adapting the processes – business experts – are not familiar with the IT infrastructure, and may come up with processes whose “atomic steps” are nowhere near what can be implemented easily, partially overlap with whole sets of existing functions, and/or require the implementation of new functions although existing functions could have been arranged to do the job. Thus there is a need for intensive communication between business experts and IT experts, incurring significant costs for human labor and increased time-to-market.
How can planning come to the rescue? As indicated, the basic (and well-known) idea is to use a planning tool for composing (an approximation of) the process automatically, helping the business expert to come up with a process close to the IT infrastructure. The main novelty in our work is that we leverage a pre-existing model, SAM, getting us around one of the most critical issues in the area: the overhead for creating the planner input.
3. SAM
We explain the structure of the SAM language, and give a running example. We outline the background of SAM at SAP, and explain the added value of using SAM for planning.
3.1 SAM Structure and Example
Status and Action Management (SAM) models belong to business objects (BOs). Each BO is associated with a set of finite-domain “status” variables, and with a set of actions. Each status variable designates one value that it takes when a new instance of the BO is created. Each action is described with a textual label (its name), a precondition, and an effect. The precondition and effect are propositional formulas over the variable values.
\textbf{Definition 3 (SAM BO)} \textit{A SAM business object $o$ is a triple $(X(o), A(o), I(o))$. $X(o)$ is a set of status variables; each $x \in X(o)$ is associated with a finite domain $\text{dom}(x)$. $A(o)$ is a set of actions, where each $a(o) \in A(o)$ takes the form $(\text{pre}_{a(o)}, \text{eff}_{a(o)})$; $\text{pre}_{a(o)}$ (the precondition) is a propositional formula over the atoms $\{x = c \mid x \in X(o), c \in \text{dom}(x)\}$; $\text{eff}_{a(o)}$ (the effect) is a negation-free propositional formula over these same atoms, in disjunctive normal form (DNF). $I(o)$ is a variable assignment representing $o$’s initial state.}
This structure is in obvious correspondence with that of Definition 1. The only differences are that there is no “goal”, and that the preconditions and effects are more complex. In our planning application, the goal is set by the user creating a new process. We discuss in Section 4 how to extend Definitions 1 and 2 to handle SAM preconditions and effects.
Note that there are no cross-BO constraints in SAM – each BO $o$ refers only to values of its own variables. This is a shortcoming of the current version of SAM: in reality, BOs do interact. We will get back to this further below.
| Action name | Precondition | Effect |
|-----------------------------|---------------------------------------------------|---------------------------------------------|
| Check CQ Completeness | CQ.archiving:notArchived | CQ.completeness:complete OR CQ.completeness:notComplete |
| Check CQ Consistency | CQ.archiving:notArchived | CQ.consistency:consistent OR CQ.consistency:notConsistent |
| Check CQ Approval Status | CQ.archiving:notArchived AND CQ.approval:notChecked AND CQ.completeness:complete AND CQ.consistency:consistent | CQ.approval:necessary OR CQ.approval:notNecessary |
| Decide CQ Approval | CQ.archiving:notArchived AND CQ.approval:necessary | CQ.approval:granted OR CQ.approval:notGranted |
| Submit CQ | CQ.archiving:notArchived AND (CQ.approval:notNecessary OR CQ.approval:granted) | CQ.submission:submitted |
| Mark CQ as Accepted | CQ.archiving:notArchived AND CQ.submission:submitted | CQ.acceptance:accepted |
| Create Follow-Up for CQ | CQ.archiving:notArchived AND CQ.acceptance:accepted | CQ.followUp:documentCreated |
| Archive CQ | CQ.archiving:notArchived | CQ.archiving:archived |
Figure 1: Our SAM-like running example, modeling the behavior of “customer quotes” CQ.
For illustration, Figure 1 gives a SAM-like model for a BO called “customer quote (CQ)”, that will be our running example. For confidentiality reasons, the shown object and model are artificial, i.e., they are not contained in SAM as used at SAP. By “CQ.x:c” we denote the proposition \( x = c \), in the object CQ. The initial state \( I(CQ) \) is:
- “CQ.archiving:notArchived”,
- “CQ.completeness:notComplete”,
- “CQ.consistency:notConsistent”,
- “CQ.approval:notChecked”,
- “CQ.submission:notSubmitted”,
- “CQ.acceptance:notAccepted”,
- “CQ.followUp:documentNotCreated”.
When using this example below, where relevant we will assume that the goal entered by the user is “CQ.followUp:documentCreated AND CQ.archiving:archived”.
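For concreteness, the model fragments above can be encoded directly as data, with preconditions as nested formula trees; the following Python sketch (our own encoding, not SAP’s SAM format) evaluates two Figure 1 preconditions against the initial state:

```python
# Illustrative encoding of (part of) the Figure 1 model, mirroring
# Definition 3. Formulas are nested tuples over atoms x = c.

def holds(formula, state):
    """Evaluate a propositional formula over atoms x = c in a state."""
    op = formula[0]
    if op == "atom":
        _, var, val = formula
        return state[var] == val
    if op == "and":
        return all(holds(f, state) for f in formula[1:])
    if op == "or":
        return any(holds(f, state) for f in formula[1:])
    raise ValueError(op)

A = lambda v, c: ("atom", v, c)  # shorthand for an atom

initial = {  # I(CQ) from the running example
    "archiving": "notArchived", "completeness": "notComplete",
    "consistency": "notConsistent", "approval": "notChecked",
    "submission": "notSubmitted", "acceptance": "notAccepted",
    "followUp": "documentNotCreated",
}

pre_check_completeness = A("archiving", "notArchived")
pre_submit = ("and", A("archiving", "notArchived"),
              ("or", A("approval", "notNecessary"), A("approval", "granted")))

print(holds(pre_check_completeness, initial))  # True: check is applicable
print(holds(pre_submit, initial))              # False: approval not settled yet
```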
The reader should keep in mind that this is merely an illustrative example, which necessarily is simple. In particular, the intended life-cycle workflow is rather obvious, given the action descriptions in Figure 1. This is very much not the case in general. The Business Objects modeled in SAM have up to 15 status variables, yielding up to 12 million possible states (combinations of variable values) even for a single BO. In other words, SAM is a flexible model – after all, that was its main design purpose – and describes a large number of combination possibilities in a compact way. Furthermore, in two of the application scenarios for planning ((A) and (C) in Section 3.3 below), we are actually looking not for entire life-cycles but for process fragments that may begin or end at any BO status values.
3.2 SAM@SAP
SAM was created by SAP as part of the development of the IT infrastructure supporting SAP Business ByDesign. That infrastructure constitutes a fully-fledged SAP application. Its key advantage over traditional SAP applications is a higher degree of flexibility, facilitating the use of SAP software as a service. Individual system functions are encapsulated as software services, using the service-oriented architectures paradigm (Krafzig et al., 2005; Bell, 2008). The software services may be accessed from standard architectures like BPM process execution engines, thus enabling their flexible combination with other services. To further support flexibility, the Business ByDesign IT infrastructure is model-driven. IT artifacts at various system levels are described declaratively using SAP-proprietary modeling formats. Business objects are one such IT artifact, and SAM is one such format.
The original purpose of SAM was to facilitate the design, and the management of changes, during the development of the Business ByDesign infrastructure (a formidably huge enterprise). Of course, SAM also serves the implementation of changes to the infrastructure later on, should changes be required. New developments are first implemented and tested on the model level. Then parts of the program code are automatically generated from the model. Straightforward code skeletons contain the status variables, as well as function headers for the available actions (similar to what Eclipse does for Java class definitions). In addition, the skeletons are filled with code fragments performing the precondition checks and updates on status variables. Changes pertaining to the status variable level can thus be implemented in SAM models, and automatically propagated into the code. In this sense, the original semantics of SAM is as follows:
(I) When a BO $o$ is newly created, the values of the status variables are set to $I(o)$.
(II) BO actions $a(o)$ whose precondition $pre_{a(o)}$ is not fulfilled are either disallowed, or raise an exception when executed; which one is true depends on the part of the architecture attempting to execute the action.
(III) Upon execution of an action $a(o)$, the status variables change their values as prescribed by one of the disjuncts in the effect DNF $eff_{a(o)}$. The only aspect controlled outside SAM is which disjunct is chosen: that choice is made based on BO data content not reflected in SAM.
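Points (I)–(III) can be mimicked in a few lines. The sketch below is illustrative only; the `choose` callback stands in for the SAP system’s data inspection, which is not modeled in SAM:

```python
# Minimal simulation of SAM semantics (I)-(III): the environment picks
# the effect disjunct based on BO data outside the SAM model.

class PreconditionError(Exception):
    pass

def execute(state, action, choose):
    """Apply one SAM action; `choose` models the SAP system's
    data-driven selection among the effect disjuncts (point III)."""
    pre, outcomes = action
    if not pre(state):
        raise PreconditionError        # point (II): disallowed / exception
    new_state = dict(state)
    new_state.update(choose(outcomes)) # apply exactly one disjunct
    return new_state

# Point (I): a fresh BO starts in its initial state (excerpt).
fresh = {"completeness": "notComplete", "archiving": "notArchived"}

check = (lambda s: s["archiving"] == "notArchived",
         [{"completeness": "complete"}, {"completeness": "notComplete"}])

# Stub for the data inspection: this quote happens to be complete.
after = execute(fresh, check, choose=lambda outs: outs[0])
print(after["completeness"])  # complete
```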
The intention behind SAM is to formulate complex business-level dependencies between individual processing steps, using simple modeling constructs that facilitate easy modification. The formulation in terms of preconditions and effects relative to high-level status variable values was adopted as a natural means to meet these requirements. Of course, this design also took some inspiration from traditional software modeling paradigms (Schneider, 2001; Object Management Group, 2006).
Leveraging SAM for planning is a great opportunity because of the effort it takes to build such a model. SAM was developed continuously along with Business ByDesign, across a time span of more than 5 years. Throughout this time, around 200 people were involved (as a part-time occupation) in the development. SAP implemented a dedicated graphical user interface for this development. There are design patterns for typical cases, there are naming conventions, there is a fully-fledged governance process, and there even is educational training for the developers. A council of senior SAP architects supervises the development.
3.3 Applications of SAM-Based Planning
The Business ByDesign infrastructure is designed to be very general and adaptable, covering the needs of a great breadth of different SAP customers’ business domains. To adapt the infrastructure to their practice, SAP customers may choose to create their own processes as compositions of the functionalities provided (as Web services), in a way tailored to their needs. Indeed, a second motivation behind SAM, besides its role for software development, was to facilitate such flexibility, by describing the possible process space in a declarative manner, rather than imposing standard workflows as is a common methodology in other contexts such as artefact-centric business process modeling (e.g., Cohn & Hull, 2009). SAM shares this motivation with constraint-based process representation languages. It also shares their downside, in that the actual workflows still need to be created. In this context, there are at least three application scenarios for planning based on SAM:
(A) **Development based on SAM.** During model-driven development based on SAM, planning enables developers to examine how their changes affect the process space. This greatly facilitates experimentation and testing. For example, planning can be used for debugging, testing whether or not the goal can still be reached, or whether the changes opened any unintended possibilities, like, reaching an undesired state of the BO (e.g., “CQ.consistency:notConsistent AND CQ.acceptance:accepted”). More than such reachability testing (essentially a model checking task), planning serves to generate entire processes, which as we shall see take the form of BPMN process models with parallelism and conditional splits. Developers can examine the space of processes generated in this way, determining for different combinations of start/end conditions how these can be connected. Note that the generality offered by the planning approach is an absolute requirement here – the process generation tool must be at least as general as SAM, handling propositional formula preconditions and effects.
(B) **Designing extended/customized processes.** Individual SAP customers have individual requirements on their processes, and thus may use the same BOs in different ways. For example, even if the end state of customer quotes (which in practice are much more complex than our illustrative example) always involves being archived, different businesses may differ on the side conditions: one organization only archives CQs if all follow-ups have been created; another archives only CQs that were successful; a third organization archives CQs immediately and automatically after getting a response; a fourth only based on an explicit user request. Part of the motivation behind SAM is to provide such flexibility. Planning based on SAM can be used to automatically generate a first version of the desired process.\(^4\)
(C) **Process redesign.** Sometimes the best option is to design a new process from scratch. If the business experts doing so are not aware of the underlying IT infrastructure, then this incurs huge costs at process implementation time. SAM opens the possibility for business experts to “explain” the individual steps in the new process in terms of status variable value changes, i.e., in terms of a start/end state corresponding to what the business user considers to be an atomic processing step. Planning then shows if and how these status changes can be implemented using existing transactions. In
\(^4\) The alternative – equipping each BO with a standard life-cycle or a set thereof – would come at the price of a flexibility loss for complex BOs, and is not the choice made by SAP.
particular, the planner can be called for some business object X (e.g., a sales order) from within a process being created for some other object Y (e.g., a customer quote). Hence, despite the mentioned absence of cross-BO constraints in the current version of SAM, planning can help to create non-trivial processes spanning several BOs.
All these use cases are supported by our prototype at SAP; we will illustrate its use for (C), in a cross-BO situation as mentioned, in Section 7.3.
An obvious requirement for the planner to be useful is instantaneous response. Typically a user will be sitting at the computer and waiting for the planner to answer. Further, all functionality must be accessible conveniently. In particular, each time a user wants to call the planner, she needs to provide the planning goal (and possibly the initial state). It is essential that this can be done in a simple and intuitive manner, without in-depth expertise in IT or about the BO in question. Thus we limit ourselves to conjunctive goals in the sense of “I want these status variables to have these values at the end of the process”, like the goal “CQ.followUp:documentCreated AND CQ.archiving:archived” in our illustrative example. In our prototype, such goals are specified using simple drop-down menus.
SAM was not originally intended to do planning, and is of course not perfect for that purpose. We will discuss the main limitations in Section 9, but we need to briefly touch on two points here already. The absence of cross-BO constraints in the current version of SAM has implications for planner setup and performance, and will play a role in our experiments.\footnote{As we shall discuss in Section 9, BO interactions do exist. An according extension of SAM is planned, which could in principle be tackled using the exact same planning technology as presented herein.} Another issue is plan quality. The duration/cost of the actions may differ vastly, but SAM does not contain any information about this: it is not relevant to SAM’s original purpose, software engineering. We will not address plan quality measures herein. Our planning algorithm of course attempts to find small plans. But it gives no quality guarantee in that regard, and the practical value of such a guarantee would be doubtful.
4. Planning Formalization
We design the syntax and semantics of a suitable planning formalism capturing SAM, and we illustrate that formalism using our running example.
4.1 SAM Planning Tasks: Syntax
Given the close correspondence of SAM business objects (Definition 3) with finite-domain planning tasks (Definition 1), it is straightforward to extend the latter to capture the former.
**Definition 4 (SAM Planning Task)** A SAM planning task is a tuple \((X, A, I, G)\) whose elements are the same as in finite-domain planning tasks, except for the action set \(A\). Each \(a \in A\) takes the form \((\text{pre}_a, E_a)\) with \(\text{pre}_a\) being a propositional formula over the atoms \(\{x = c \mid x \in X, c \in \text{dom}(x)\}\), and \(E_a\) being a set of partial variable assignments. The members \(\text{eff}_a \in E_a\) are the outcomes of \(a\).
As discussed above, we keep the goal as simple as possible. For the effects, in place of the negation-free propositional DNF formulas of Definition 3, we now have sets \(E_a\) of outcomes. The action preconditions are as in Definition 3. This generalizes the partial variable
assignments from Definition 1 – which are equivalent to negation-free conjunctions over the atoms \(\{x = c \mid x \in X, c \in dom(x)\}\) – to arbitrary propositional formulas over these atoms. That generalization poses no issue for defining the plan semantics; at implementation level, most current planning systems compile such preconditions into negation-free conjunctions, using the methods originally proposed by Gazen and Knoblock (1997).
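The compilation mentioned above can be sketched as follows: distribute the (negation-free) precondition formula into DNF, then split the action into one copy per disjunct, each with a simple conjunctive precondition. The helper names are our own illustration of this Gazen-and-Knoblock-style step, not any planner’s actual implementation:

```python
# Sketch: compile a quantifier-free, negation-free precondition formula
# into DNF, then split the action into one copy per disjunct.

def to_dnf(f):
    """f is ('atom', x, c) | ('and', ...) | ('or', ...).
    Returns a list of disjuncts, each a list of (x, c) atoms."""
    op = f[0]
    if op == "atom":
        return [[(f[1], f[2])]]
    if op == "or":
        return [d for sub in f[1:] for d in to_dnf(sub)]
    if op == "and":
        result = [[]]
        for sub in f[1:]:  # distribute AND over OR
            result = [d1 + d2 for d1 in result for d2 in to_dnf(sub)]
        return result
    raise ValueError(op)

def split_action(name, pre, outcomes):
    """One action copy per precondition disjunct."""
    return [(f"{name}#{i}", conj, outcomes)
            for i, conj in enumerate(to_dnf(pre))]

# "Submit CQ": archiving:notArchived AND (approval:notNecessary OR approval:granted)
pre = ("and", ("atom", "archiving", "notArchived"),
       ("or", ("atom", "approval", "notNecessary"),
              ("atom", "approval", "granted")))
for copy in split_action("Submit CQ", pre, [{"submission": "submitted"}]):
    print(copy[0], copy[1])
```

Each emitted copy has a purely conjunctive precondition, i.e., a partial variable assignment as in Definition 1.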
To obtain a SAM planning task \((X, A, I, G)\), when given as input a SAM business object \(o = (X(o), A(o), I(o))\) along with a goal conjunction \(G(o)\), we first set \(X := X(o)\), \(I := I(o)\), and \(G := G(o)\). For each \(a(o) \in A(o)\) we include one \(a\) into \(A\), where \(pre_a := pre_{a(o)}\). As for \(eff_{a(o)}\), we create one partial variable assignment \(eff_a\) for each disjunct in that DNF formula, and we define the possible outcomes \(E_a\) as the set of all these \(eff_a\).
By convention, we denote with \(A^d := \{a \in A \mid |E_a| = 1\}\) and \(A^{nd} := \{a \in A \mid |E_a| > 1\}\) the sets of deterministic and non-deterministic actions of a SAM planning task, respectively. If \(a \in A^d\), then by \(eff_a\) we denote the single outcome of \(a\).
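In code, the construction of the outcome sets $E_a$ and the $A^d$/$A^{nd}$ partition is immediate once effects are written as lists of partial assignments, one per DNF disjunct (a sketch on Figure 1 effects, in our own illustrative encoding):

```python
# The outcome sets E_a of Definition 4 arise directly from the effect
# DNF: one partial assignment per disjunct. The deterministic /
# non-deterministic split is then just a test on |E_a|.

effects = {  # excerpt of Figure 1, one dict per DNF disjunct
    "Check CQ Completeness": [{"completeness": "complete"},
                              {"completeness": "notComplete"}],
    "Submit CQ":             [{"submission": "submitted"}],
    "Archive CQ":            [{"archiving": "archived"}],
    "Decide CQ Approval":    [{"approval": "granted"},
                              {"approval": "notGranted"}],
}

A_det    = {a for a, E in effects.items() if len(E) == 1}  # A^d
A_nondet = {a for a, E in effects.items() if len(E) > 1}   # A^nd
print(sorted(A_det))     # ['Archive CQ', 'Submit CQ']
print(sorted(A_nondet))  # ['Check CQ Completeness', 'Decide CQ Approval']
```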
### 4.2 SAM Planning Tasks: Semantics
SAM action preconditions \(pre_{a(o)}\) are in direct correspondence with usual planning preconditions, cf. point (II) in Section 3.2. By contrast, SAM’s disjunctive effects \(eff_{a(o)}\) require creating a mix of two different kinds of planning actions – non-deterministic actions and observation actions – from the literature. To understand this, reconsider the role of SAM action effects \(eff_{a(o)}\) in their original environment, i.e., point (III) in Section 3.2. Any one of the disjuncts will occur, and at plan generation time we do not know which one. At plan execution time, the SAP system executing the action will observe the relevant data content, and will decide which branch to take. In the example from Figure 1, “Check CQ Completeness” will answer “CQ.completeness:complete” if the BO data is complete, and will answer “CQ.completeness:notComplete” otherwise. Of course, the SAP system keeps track of which outcomes occurred. In other words, (a) SAM’s disjunctive effects correspond to observation actions, which (b) internally observe environment data not modeled at the planning level. Due to (a), it makes perfect sense to handle such actions by introducing case distinctions at plan generation time, one for each outcome. Due to (b), there is no direct link of the “observation” to a reduction of uncertainty at planning level. During execution, the values of the “observed variables” are known prior to the “observation” already, and change as a result of applying that action. For example, “CQ.completeness:notComplete” is considered to be true prior to the first application of “Check CQ Completeness”, and may be changed to “CQ.completeness:complete” by that action. In that respect, and in that the outcome set (an arbitrary DNF) is more general than the domain of a particular variable, SAM’s disjunctive effects are more similar to the common notions of non-deterministic actions.
For simplicity, we will henceforth refer to SAM’s disjunctive-effects actions as non-deterministic actions. Another important point regarding these actions is that data content is not allowed to change while the process is running; the data is filled in directly upon creation of the BO. Thus the outcome of a non-deterministic action will be the same throughout the plan execution, and it makes no sense to execute such an action more than once in a plan. For example, there is no point in repeatedly applying “Check CQ Completeness”.
A final issue is to decide what a “plan” actually is. Cimatti, Pistore, Roveri, and Traverso (2003) describe the three most common concepts, in the presence of non-deterministic actions: strong plans, strong cyclic plans, and weak plans. We will discuss the latter two below; the most desirable property is the first one. A strong plan guarantees to reach the goal no matter which action outcomes occur. We now define this formally, for our setting.
An action tree over $A$ is a tree whose nodes are actions from $A$, and whose edges are labeled with partial variable assignments. Each action $a$ in the tree has exactly $|E_a|$ outgoing edges, one for (and labeled with) each $\text{eff}_a \in E_a$. In the following definitions, $A_{av}^{nd}$ refers to the subset of non-deterministic actions that have not yet been used, and are thus still available, at any given state during plan execution. Recall that $s \oplus \text{eff}_a$, defined in Section 2, overwrites $s$ with those variable values defined in $\text{eff}_a$, and leaves $s$ unchanged elsewhere.
**Definition 5 (Strong SAM Plan)** Let $(X, A, I, G)$ be a SAM planning task with $A = A^d \cup A^{nd}$. Let $s$ be a state, let $A_{av}^{nd} \subseteq A^{nd}$, and let $T$ be an action tree over $A \cup \{\text{STOP}\}$. We say that $T$ is a strong SAM solution for $(s, A_{av}^{nd})$ iff either:
(i) $T$ consists of the single node STOP, and $s \models G$; or
(ii) the root of $T$ is $a \in A^d$, $s \models \text{pre}_a$, and the sub-tree of $T$ rooted at $a$’s child is a strong SAM solution for $(s \oplus \text{eff}_a, A_{av}^{nd})$; or
(iii) the root of $T$ is $a \in A_{av}^{nd}$, $s \models \text{pre}_a$, and, for each of $a$’s children reached via an edge labeled with $\text{eff}_a \in E_a$, the sub-tree of $T$ rooted at that child is a strong SAM solution for $(s \oplus \text{eff}_a, A_{av}^{nd} \setminus \{a\})$.
If $T$ is a strong solution for $(I, A^{nd})$, then $T$ is called a strong SAM plan.
Compare this to Definition 2. Item (i) of the present definition is essentially the same, saying that there is nothing to do if the goal is already true. In difference to Definition 2, we then distinguish deterministic actions (ii) and non-deterministic ones (iii). In the former case, $a$ has a single child and we require the remainder of the tree to solve that child, similarly as in Definition 2. In the latter case, $a$ has several children all of which need to be solved by the respective sub-tree. This corresponds to the desired case distinction observing action outcomes at plan execution time.
Note that, throughout the plan, there is no uncertainty about the current variable values. Note also that we solve, not a state, but a pair consisting of a state and a subset of non-deterministic actions. This reflects the fact that whether or not an action tree solves a state depends not only on the state itself, but also on which non-deterministic actions are still available. The maintenance of the set $A_{av}^{nd}$ ensures that we allow each non-deterministic action only once, on each path through $T$ (but the action may occur several times on separate paths). Thus any one execution of the plan applies the action at most once.
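Definition 5 translates into a short recursive check. The sketch below uses our own illustrative conventions (trees as nested tuples, states as dicts) and verifies a strong solution for a two-action toy task in which a deterministic action repairs the bad outcome of a non-deterministic one:

```python
# A direct transcription of Definition 5: check whether an action tree
# is a strong SAM solution for a pair (state, available non-det actions).
# Trees are the leaf "STOP" or (action name, [(outcome, subtree), ...]).

def strong(tree, s, avail, actions, goal):
    """actions: name -> (pre, outcomes); pre and goal are state predicates."""
    if tree == "STOP":
        return goal(s)                                    # case (i)
    name, branches = tree
    pre, outcomes = actions[name]
    if not pre(s):
        return False
    if len(outcomes) == 1:                                # case (ii): a in A^d
        (eff, sub), = branches
        return strong(sub, {**s, **eff}, avail, actions, goal)
    if name not in avail or [e for e, _ in branches] != outcomes:
        return False                                      # a must still be available
    return all(strong(sub, {**s, **eff}, avail - {name}, actions, goal)
               for eff, sub in branches)                  # case (iii): all children

actions = {"check": (lambda s: True, [{"x": "B"}, {"x": "A"}]),
           "fix":   (lambda s: True, [{"x": "B"}])}
goal = lambda s: s["x"] == "B"
# Solve the bad outcome of "check" with the deterministic "fix".
tree = ("check", [({"x": "B"}, "STOP"),
                  ({"x": "A"}, ("fix", [({"x": "B"}, "STOP")]))])
print(strong(tree, {"x": "A"}, {"check"}, actions, goal))  # True
```

Note how the available set shrinks below the non-deterministic `check`, enforcing that each non-deterministic action is used at most once per execution path.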
The problem with Definition 5 is that strong plans typically do not exist. To illustrate this, consider Figure 2, showing a weak SAM plan, a notion we will now formally define, for our running example from Figure 1. Recall that the goal is assumed to be “CQ.followUp:documentCreated AND CQ.archiving:archived”. If either of “Check CQ Completeness” or “Check CQ Consistency”, as shown at the top of Figure 2, results in a negative outcome (“CQ.completeness:notComplete” or “CQ.consistency:notConsistent”), then the goal becomes unreachable. Thus a strong plan does not exist for this SAM planning task. That phenomenon is not limited to this illustrative example. In our experiments, almost 75% of a very large sample of SAM planning tasks did not have a strong plan.
To address this, one can define more complicated goals, or a weaker notion of plans. For the former option, one could use goals specifying alternatives, preferences, and/or temporal
plan properties (e.g., Pistore & Traverso, 2001; Dal Lago, Pistore, & Traverso, 2002; Shaparau, Pistore, & Traverso, 2006; Gerevini et al., 2009). However, goals will be specified online by business users and it is absolutely essential for this to be as simple as possible. We hence decided to go for the second option.\footnote{Some works on more complex goals can be employed to define alternative notions of “weak plans” (by using trivial fall-back goals). We will discuss this in some detail in Section 8.}
The weak plans of Cimatti et al. (2003) are too liberal for our purposes. They guarantee only that at least one possible execution of the plan reaches the goal, posing no requirements on all the other executions. For example, in Figure 2, this would mean to allow the plan to handle only the left-hand side outcome of “Check CQ Approval Status”, i.e., “CQ.approval:notNecessary”, and to do nothing at all about (attach the empty tree at) its other outcome, “CQ.approval:necessary”.
So what about strong cyclic plans? There, the plan may have cycles, provided every plan state \textit{can}, in principle, reach the goal. This allows the plan to “wait for a desired outcome”, like a cycle around a dice throw, waiting to obtain a “6”. Alas, repetitions of non-deterministic SAM actions will always produce the same outcome. It is futile to insert a cycle at the top of Figure 2, waiting for the desired outcome of “Check CQ Completeness”. While it is plausible to prompt a user to edit the BO content and then repeat the check (placeholders for such cycles could be inserted as a planning post-process), this is not a suitable exception handling in general. Exception handling depends on the business context, and typically depends on the actual customer using the SAP system. This is impossible to reflect in a model maintained centrally by SAP.
In conclusion, from the perspective of SAM-based planning there is not much one can do other than to highlight the bad outcomes to the user, so that the exception handling can
be inserted manually afterwards. Of course, a non-deterministic action should have at least one successful outcome, or else it would be completely out of place in a process. Further, it is essential to highlight outcomes as “bad” only if they really are “bad”, i.e., to not mark as failed any outcomes that could actually be solved. Our definition reflects all this:
**Definition 6 (Weak SAM Plan)** Let \((X, A, I, G)\) be a SAM planning task with \(A = A^d \cup A^{nd}\). Let \(s\) be a state, let \(A^{nd}_{av} \subseteq A^{nd}\), and let \(T\) be an action tree over \(A \cup \{STOP, FAIL\}\). We say that \(T\) is a weak SAM solution for \((s, A^{nd}_{av})\) iff either:
(i) \(T\) consists of the single node \(STOP\), and \(s \models G\); or
(ii) the root of \(T\) is \(a \in A^d\), \(s \models pre_a\), and the sub-tree of \(T\) rooted at \(a\)’s child is a weak SAM solution for \((s \oplus eff_a, A^{nd}_{av})\); or
(iii) the root of \(T\) is \(a \in A^{nd}_{av}\), \(s \models pre_a\), and, for each of \(a\)’s children reached via an edge labeled with \(eff_a \in E_a\), we have that either: (a) the sub-tree of \(T\) rooted at that child is a weak SAM solution for \((s \oplus eff_a, A^{nd}_{av} \setminus \{a\})\); or (b) the sub-tree of \(T\) rooted at that child consists of the single node \(FAIL\), and there exists no action tree \(T'\) that is a weak SAM solution for \((s \oplus eff_a, A^{nd}_{av} \setminus \{a\})\); where (a) is the case for at least one of \(a\)’s children.
If \(T\) is a weak solution for \((I, A^{nd})\), then \(T\) is called a weak SAM plan.
Compared to Definition 5, the only difference lies in item (iii), which no longer requires every child to be solved. Instead, the arrangement of options (a) and (b) means that failed nodes – leaves in the tree that stop the plan without success – are tolerated, as long as at least one child is solved, and every failed node is actually unsolvable. This is in obvious correspondence with our discussion above. In Figure 2, the failed nodes, i.e., the sub-trees consisting only of the special \(FAIL\) action, are crossed out (in red). Note the difference to Cimatti et al.’s (2003) definition of “weak plans” discussed above: we are not allowed to cross out the right-hand side outcome of “Check CQ Approval Status”, i.e., “CQ.approval:necessary”, because that outcome is solvable.
We remark that allowing non-deterministic actions only once (or, more generally, having an upper bound on repetition of non-deterministic actions) is required for Definition 6 to make sense. In item (iii), the definition recurses on itself when stating that some children may be unsolvable. While such recursion occurs also at other points in Definitions 5 and 6, at those points the action tree \(T\) considered is reduced by at least one node. For unsolvable children of non-deterministic actions in Definition 6 (iii) (b), such a reduction is not given – the quantification is over any action tree \(T'\) that may be suitable to solve the child. What makes the recursion sound, instead, is that the set of available non-deterministic actions is diminished by one. Without this, the notion of “weak SAM plan” would be ill-defined: the recursion step may result in the same planning task over again, allowing the construction of planning tasks that are considered “solvable” if they are “unsolvable”.
---
7. Concretely, say we obtain Definition 6’ from Definition 6 by considering states \(s\) only, removing the handling of \(A^{nd}_{av}\). Consider the example with one variable \(x\) whose possible values are \(A\) and \(B\), with initial state \(I : x = A\), with goal \(G : x = B\), and with a single action \(a\) with two possible outcomes, \(x = A\) or \(x = B\). Say \(T\) consists only of \(a\). Then the “bad” outcome of \(a\), i.e., the state \(x = A\), is identical to the original initial state \(I\). If \(I\) is “unsolvable” according to Definition 6’, then this outcome of \(a\) qualifies for Definition 6’ (iii) (b), and thus the overall task – the same state \(I\) – is considered to be “solvable”. By contrast, using Definition 6 as above, the plan must solve the state/available-non-deterministic-actions pair \((x = A, \{a\})\), and the bad outcome of \(a\) is the different pair \((x = A, \emptyset)\). That pair is unsolvable, and hence \(a\) is a weak plan.
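The recursion of Definition 6 can be spelled out as a direct (exponential-time) decision procedure for weak solvability. The following Python sketch is our own illustration, not the planner's implementation; states are frozensets of variable/value pairs, and direct duplicate pruning along the current path (anticipating Section 5.1) keeps the deterministic part of the recursion finite.

```python
def apply_eff(s, eff):
    """The override operator s ⊕ eff: affected variables take their new values."""
    changed = {var for var, _ in eff}
    return frozenset({(v, x) for v, x in s if v not in changed} | set(eff))

def weak_solvable(s, avail_nd, actions, goal, path=frozenset()):
    """Existence of a weak SAM solution for (s, avail_nd) per Definition 6.

    actions: dict name -> (pre, effects, is_nd); pre is a predicate on states,
    effects a list of fact sets (a single entry for deterministic actions).
    For plan *existence*, an action node qualifies iff at least one outcome is
    weakly solvable: unsolvable outcomes become FAIL leaves, as (iii)(b) allows.
    """
    if goal <= s:
        return True                      # STOP: the goal already holds
    key = (s, avail_nd)
    if key in path:
        return False                     # direct duplicate pruning on the path
    path = path | {key}
    for name, (pre, effects, is_nd) in actions.items():
        if is_nd and name not in avail_nd:
            continue                     # each nd action is used at most once
        if not pre(s):
            continue
        child_avail = avail_nd - {name} if is_nd else avail_nd
        if any(weak_solvable(apply_eff(s, e), child_avail, actions, goal, path)
               for e in effects):
            return True
    return False
```

On the single-action example of footnote 7 (variable \(x\), action \(a\) with outcomes \(x = A\) and \(x = B\)), the procedure reports the pair \((x = A, \{a\})\) weakly solvable but \((x = A, \emptyset)\) unsolvable, matching the discussion above.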
The following observation holds simply because Definition 5 captures a special case of Definition 6:
**Proposition 1 (Weak SAM Plans Generalize Strong SAM Plans)** Let \((X, A, I, G)\) be a SAM planning task with \(A = A^d \cup A^{nd}\). Let \(s\) be a state, let \(A^{nd}_{av} \subseteq A^{nd}\), and let \(T\) be an action tree over \(A \cup \{\text{STOP}\}\). If \(T\) is a strong SAM solution for \((s, A^{nd}_{av})\), then \(T\) is a weak SAM solution for \((s, A^{nd}_{av})\).
In other words, any strong SAM plan is also a weak SAM plan, and hence in particular any SAM planning task that is solvable under the strong semantics is also solvable under the weak semantics. The converse is obviously not true. A counter-example is our running example in Figure 2.
We remark that, trivially, deciding whether a plan exists is hard for both Definition 5 and Definition 6: the special case where all actions are deterministic already subsumes Definition 2, for which, as mentioned, that problem is PSPACE-complete.
### 4.3 SAM Planning Tasks: Running Example
For illustration, we encode our running example, Figure 1, into a SAM planning task \((X, A, I, G)\). We set \(X := \{\text{Arch}, \text{Compl}, \text{Cons}, \text{Appr}, \text{Subm}, \text{Acc}, \text{FoUp}\}\), abbreviating the status variable names mentioned in Figure 1. For example, \(\text{Arch}\) stands for the variable “CQ.archiving”. The domain of each of \(\text{Arch}\), \(\text{Compl}\), \(\text{Cons}\), \(\text{Subm}\), \(\text{Acc}\), and \(\text{FoUp}\) is \(\{\text{true}, \text{false}\}\). This serves to abbreviate the various names used for the respective variable values in Figure 1. The domain of \(\text{Appr}\) is \(\{\text{notChecked}, \text{nec}, \text{notNec}, \text{granted}, \text{notGranted}\}\). In what follows, for brevity we write facts, i.e., variable/value pairs, involving true/false valued variables like literals. For example, we write \(\neg \text{FoUp}\) instead of \((\text{FoUp}, \text{false})\).
The initial state of the SAM BO, and thus the SAM planning task, is:
- \(I = \{\neg \text{Arch}, \neg \text{Compl}, \neg \text{Cons}, (\text{Appr}, \text{notChecked}), \neg \text{Subm}, \neg \text{Acc}, \neg \text{FoUp}\}\)
The goal is “CQ.followUp:documentCreated AND CQ.archiving:archived”:
- \(G = \{\text{FoUp}, \text{Arch}\}\)
The deterministic actions \(A^d\) are:
- “Mark CQ as Accepted”: \((\neg \text{Arch} \land \text{Subm}, \{\text{Acc}\})\)
- “Create Follow-Up for CQ”: \((\neg \text{Arch} \land \text{Acc}, \{\text{FoUp}\})\)
- “Archive CQ”: \((\neg \text{Arch}, \{\text{Arch}\})\)
- “Submit CQ”: \((\neg \text{Arch} \land ((\text{Appr}, \text{notNec}) \lor (\text{Appr}, \text{granted})), \{\text{Subm}\})\)
Note here that action effects are sets of partial variable assignments, i.e., sets of sets of facts. For the deterministic actions, there is just one partial variable assignment so we omit the second pair of set parentheses to avoid notational clutter. Note also that we do not have “delete effects”. The effects assign new values to the affected variables, implicitly removing their old values, cf. the meaning of \(s \oplus \text{eff}_a\) as defined in Section 2.
The non-deterministic actions \(A^{nd}\) are:
- “Check CQ Completeness”: \((\neg \text{Arch}, \{\{\text{Compl}\}, \{\neg \text{Compl}\}\})\)
- “Check CQ Consistency”: \((\neg \text{Arch}, \{\{\text{Cons}\}, \{\neg \text{Cons}\}\})\)
- “Check CQ Approval Status”: \((\neg \text{Arch} \land (\text{Appr}, \text{notChecked}) \land \text{Compl} \land \text{Cons}, \{\{(\text{Appr}, \text{nec})\}, \{(\text{Appr}, \text{notNec})\}\})\)
- “Decide CQ Approval”: \((\neg \text{Arch} \land (\text{Appr}, \text{nec}), \{\{(\text{Appr}, \text{granted})\}, \{(\text{Appr}, \text{notGranted})\}\})\)
Figure 2 shows a weak SAM plan for this example. For presentation to the user, a simple post-process (outlined in Section 7.1) transforms such plans into BPMN workflows.
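For concreteness, the update operator \(s \oplus eff_a\) used throughout this encoding can be sketched in a few lines of Python. This is our own illustration; the fact-pair representation is an assumption, not SAM's actual data format.

```python
def apply_eff(state, eff):
    """s ⊕ eff: variables assigned by eff take their new values, implicitly
    dropping their old ones; all other variables are unchanged."""
    changed = {var for var, _ in eff}
    return {(v, x) for v, x in state if v not in changed} | set(eff)

# Applying "Archive CQ" (effect {Arch}) to a fragment of the initial state:
state = {("Arch", "false"), ("Subm", "true")}
archived = apply_eff(state, {("Arch", "true")})
```

The old fact \((\text{Arch}, \text{false})\) disappears without an explicit delete effect, exactly as remarked above.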
## 5. Planning Algorithms
We design an adaptation of FF (Hoffmann & Nebel, 2001), using a variant of the AO* forward search from Contingent-FF (Hoffmann & Brafman, 2005), as well as a naïve extension of FF’s heuristic function. We assume that the reader is familiar with heuristic search in general, and we refer to the literature (e.g., Pearl, 1984) for that background.
### 5.1 Search
For strong SAM planning – Definition 5 – we use AO* tree search (Nilsson, 1969, 1971). For weak SAM planning – Definition 6 – we use a variant of that search that we refer to as SAM-AO*. We focus in what follows mainly on SAM-AO*, since AO* is well-known and will become clear as a side effect of the discussion.
Search is forward in an AND-OR tree whose nodes are states (OR nodes) and actions (AND nodes). The OR’ed children of states are the applicable actions, the AND’ed children of actions are the alternative outcomes (for deterministic actions, there is a single child so the AND node trivializes). Like in AO*, we propagate “node solved” and “node failed” markers. The mechanics for this are the usual ones in the case of OR nodes, i.e., the marker of the node is the disjunction of its children’s markers. For AND nodes, SAM-AO* differs from the usual conjunctive interpretation, implementing the weak SAM planning semantics of Definition 6: amongst other things, an AND node is failed only if all its children are failed. Figure 3 provides an overview of SAM-AO*, highlighting the differences to AO*. Figure 4 illustrates the algorithm on a simplification of our running example.
One feature of the algorithm that is immediately apparent is the book-keeping of which non-deterministic actions are still available. Recall here that, in line with Definitions 5 and 6, we allow each non-deterministic action at most once in any execution of a plan. For the search algorithm – both in strong planning (AO*) and in weak planning (SAM-AO*) – this means that OR nodes contain not only a state \( s \), but a pair \((s, A_{av}^{nd})\) giving the state as well as the subset \(A_{av}^{nd}\) of \(A^{nd}\) that has not been used up to this node. We will refer to such pairs as search states from now on. The book-keeping of the sets \( A_{av}^{nd} \) is straightforward. For the initial state, all non-deterministic actions are still available. Whenever a non-deterministic action \( a \) is applied, for its outcome states, \( a \) is no longer available. For illustration, consider how the action sets are reduced in Figure 4 (B–D).
The heuristic function \( h \), which we assume as given here, takes as arguments the search state, i.e., both the state and the available non-deterministic actions. This is because action availability affects goal distance and hence the heuristic estimates. By \( h(s) = 0 \) the heuristic indicates goal states, and by \( h(s) = \infty \) it may indicate that the state is unsolvable. The algorithm trusts the heuristic, i.e., it assumes that \( h \) only returns these values if the state
procedure SAM-AO*
input SAM planning task \((X, A, I, G)\) with \(A = A^d \cup A^{nd}\), heuristic function \(h\)
output A weak plan for \((X, A, I, G)\), or “unsolvable”
initialize \(T\) to consist only of \(N_I\); content\((N_I) := (I, A^{nd})\)
status\((N_I) :=\) solved if \(h(I, A^{nd}) = 0\), failed if \(h(I, A^{nd}) = \infty\), unknown else
while status\((N_I) =\) unknown do
\(N_s :=\) select-open-node\((T)\); \((s, A^{nd}_s) :=\) content\((N_s)\)
for all \(a \in A^d \cup A^{nd}_s\) with \(s \models pre_a\) do
if \(a \in A^d\) and is-direct-duplicate\((N_s, s \oplus eff_a, A^{nd}_s)\) then skip \(a\) endif
insert \(N_a\) as child of \(N_s\) into \(T\); content\((N_a) := a\)
\(A^{nd}_{s'} := A^{nd}_s\) if \(a \in A^d\), else \(A^{nd}_{s'} := A^{nd}_s \setminus \{a\}\)
for all \(eff_a \in E_a\) do
\(s' := s \oplus eff_a\)
insert \(N_{s'}\) as child of \(N_a\) into \(T\); content\((N_{s'}) := (s', A^{nd}_{s'})\)
status\((N_{s'}) :=\) solved if \(h(s', A^{nd}_{s'}) = 0\), failed if \(h(s', A^{nd}_{s'}) = \infty\), unknown else
endfor
status\((N_a) :=\) SAM-aggregate(\(\{\text{status}(N') | N' \text{ is child of } N_a \text{ in } T\}\))
endfor
status\((N_s) :=\) OR-aggregate(\(\{\text{status}(N') | N' \text{ is child of } N_s \text{ in } T\}\))
propagate-status-updates-to-I\((N_s)\)
endwhile
if status\((N_I) =\) failed then return “unsolvable” endif
return an action tree corresponding to a subtree \(T'\) of \(T\) s.t. \(N_I \in T'\) and:
for all inner nodes \(N_s \in T'\): status\((N_s) =\) solved and \(N_s\) has exactly one child \(N_a\) in \(T'\);
for all nodes \(N_a \in T'\): all children \(N_{s'}\) of \(N_a\) in \(T\) are contained in \(T'\)
is-direct-duplicate\((N, s', A^{nd}_s) := \begin{cases}
true & \exists \text{ predecessor } N_0 \text{ of } N \text{ s.t. content}(N_0) = (s', A^{nd}_s) \\
false & \text{else}
\end{cases}\)
SAM-aggregate\((M) := \begin{cases}
solved & \exists m \in M : m = \text{solved}, \text{and} \\
& \forall m \in M : (m = \text{solved} \text{ or } m = \text{failed}) \\
failed & \forall m \in M : m = \text{failed} \\
unknown & \text{else}
\end{cases}\)
OR-aggregate\((M) := \begin{cases}
solved & \exists m \in M : m = \text{solved} \\
failed & \forall m \in M : m = \text{failed} \\
unknown & \text{else}
\end{cases}\)
Figure 3: Pseudo-code of SAM-AO*, highlighting the differences to AO*.
\(s\) is indeed a goal state/unsolvable. Detecting unsolvable states is within the capabilities of FF’s heuristic, and is of paramount importance for planning with SAM models. Its behavior is like that of the heuristic in Figure 4 (B–D), which immediately marks all unsolvable nodes as being such. We get back to this in Section 5.2.2 below.
The overall structure of SAM-AO* is the same as that of AO*. Starting with the initial search state (compare Figure 4 (A)), we iteratively use select-open-node to select an OR node \(N_s\) in the tree that has not yet been expanded and whose status is unknown; the selection criterion is based on the node’s f-value, as explained below. We expand the selected node with the applicable actions (in Figure 4 (B), we omit “Check CQ Consistency” to save space), and we insert one new node for each possible outcome of these actions (“Comp” vs.
“-Comp” in Figure 4 (B)); we will discuss the *is-direct-duplicate* function below. Each new node, i.e., the corresponding search state, is evaluated with the heuristic function. Once all outcomes of an action $a$ have been inserted, the status of $a$ is updated. Once all actions applicable to the current node $N_s$ have been inserted, the status of $N_s$ is updated. The latter update, reflected in the *OR-aggregate* equation in Figure 3, is exactly as in AO*. A key difference to AO* lies in the former update, reflected in the *SAM-aggregate* equation. That equation is in obvious correspondence with Definition 6. In Figure 4 (B), neither of these updates yields any new information, because the status of one of the action outcomes, “Comp, Cons; CheckCons”, is “unknown”. That changes in Figure 4 (C), where the status of both outcomes becomes definite (solved/failed), and the updates propagate this information to the action $a$ and the search state node $N_s$ it was applied to.
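The two aggregation equations are small enough to state directly in code. The following is our own sketch, with plain status strings for the three markers; it mirrors the *SAM-aggregate* and *OR-aggregate* equations of Figure 3.

```python
def sam_aggregate(statuses):
    """AND-node update under the weak semantics of Definition 6: solved needs
    at least one solved child and no undecided child; failed needs all failed."""
    if any(m == "solved" for m in statuses) and \
       all(m in ("solved", "failed") for m in statuses):
        return "solved"
    if all(m == "failed" for m in statuses):
        return "failed"
    return "unknown"

def or_aggregate(statuses):
    """OR-node update, as in standard AO*: one solved child suffices."""
    if any(m == "solved" for m in statuses):
        return "solved"
    if all(m == "failed" for m in statuses):
        return "failed"
    return "unknown"
```

In particular, `sam_aggregate(["solved", "failed"])` yields `"solved"`: a failed outcome does not doom the action, whereas the conjunctive AND rule of plain AO* would refuse to mark such a node solved.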
After the status of $N_s$ and $a$ has been set, *propagate-status-updates-to-I(N_s)* performs a backward iteration starting at $N_s$, updating each action and search state along the way to $I$ using the same two functions, *OR-aggregate* and *SAM-aggregate*. This is necessary since the status of $N_s$ may have changed, and that may affect the status of any predecessors. This happens, for example, in Figure 4 (D) where the status of the first node and action now change from “unknown” to “solved”. The algorithm terminates when the initial node
(the search tree root) is solved or failed. In the former case, a solved sub-tree is returned. That happens in Figure 4 (D), and the sub-tree returned is equivalent to the start (top two actions) of our example plan in Figure 2.
In addition to status markers, SAM-AO* also annotates search states with their $f$-values, as well as the current best action. This is not shown in Figure 3 since it is (almost) identical to what is done in AO*. The $f$-value of a search state node is the minimum of those of its children, plus 1 accounting for the cost of applying an action; a minimizing child is the best action. The $f$-value of an action node is the maximum of its children, except – and herein lies the only difference to AO* – that we do not set the action value to $\infty$ unless all its children are marked as failed. The select-open-node procedure starts in $N_I$ and keeps choosing best actions until it arrives at a non-expanded state, which is selected as $N_s$.
The is-direct-duplicate function in Figure 3 disallows the generation of search states that are identical to one of their predecessors in the tree. We refer to this as direct duplicate pruning. Note that the method prunes duplicates only within deterministic parts of the search tree. If a predecessor node $N_0$ as in Figure 3 is found, then all actions between $N_0$ and $N$ are deterministic, because otherwise content($N_0$) would contain strictly more non-deterministic actions than $N$. Obviously, direct duplicate pruning preserves soundness and completeness of the search algorithm. We have:
**Proposition 2 (SAM-AO* is Complete and Sound)** Let $(X, A, I, G)$ be a SAM planning task, and let $h$ be a heuristic function. SAM-AO* terminates when run on the task with $h$. Provided $h(s) = 0$ iff $s$ is a goal state, and $h(s) = \infty$ only if $s$ is unsolvable, SAM-AO* terminates with success iff there exists a weak SAM plan for the task, and the action tree returned in that case is such a plan.
This follows from the known results about AO*, by definition, and by two simple observations. First, eventually, on any tree path no non-deterministic actions will be available anymore. Second, direct duplicate pruning allows only finitely many nodes in a SAM planning task without non-deterministic actions.
The reader might wonder whether stronger duplicate pruning methods could be defined, across the non-deterministic actions in the tree. A naïve approach, asking only whether a predecessor of $N$ contains the same state – and ignoring the sets of available non-deterministic actions – does not work for SAM-AO*. It renders that algorithm unsound. This is because such a pruning method may mark solvable search states as failed, and failed nodes can be part of the solution in a weak plan. For illustration, consider the simple example where we have a variable $x$ with values $A, B, C$, the initial state $A$, the goal $C$, and three actions: $a_1$ has precondition $A$ and the two possible outcomes $B$ and $C$; $a_2$ has precondition $B$ and outcome $A$; $a_3$ has precondition $A$ and outcome $C$. Say the search has chosen to apply $a_1$ first. Consider $a_1$’s unfavorable outcome $B$. At this point, in order to obtain a plan, we must apply $a_2, a_3$ to achieve the goal $C$. However, the outcome state $s : x = A$ of $a_2$ is the same as the initial state. Hence $s$ is pruned, hence $a_1$’s outcome $B$ is marked as failed, hence the algorithm wrongly concludes that $a_1$’s outcomes qualify for Definition 6 (iii), and that $a_1$ on its own is a plan.
### 5.2 Heuristic Function
To compute goal distance estimates, we use “all-outcomes-determinization” as known in probabilistic planning (Yoon et al., 2007) to get rid of non-deterministic actions, then run the FF heuristic (Hoffmann & Nebel, 2001) off-the-shelf. For the sake of self-containedness, we next explain this in some detail. The reader familiar with FF may skip to Section 5.2.2.
### 5.2.1 Relaxed Planning Graphs
FF’s heuristic function is one out of a range of general-purpose planning heuristics based on a relaxation widely known as “ignoring delete lists” (McDermott, 1999; Bonet & Geffner, 2001). Heuristics of this kind emerged in the late 1990s and are still highly successful. In what follows, we assume that action preconditions and the goal are conjunctions of positive atoms, and are thus equivalent to sets of facts. For more general formulas, such as the preconditions in SAM planning tasks, one can apply known transformations (Gazen & Knoblock, 1997) to achieve this.
The name “delete lists” comes from a Boolean-variable representation of planning tasks. Translated to our context, the relaxation means that variables accumulate, rather than change, their values. For illustration, say we have “CQ.archiving:notArchived” in our running example, and we apply the action “Archive CQ” whose effect is “CQ.archiving:archived”. Then, in the relaxation, the resulting state will be “CQ.archiving:notArchived, CQ.archiving:archived”, containing both the old and the new value of the variable “CQ.archiving”. Thus the other actions of this BO, that all require the customer quote to not be archived yet, remain applicable, and in contrast to the plan from Figure 2, a relaxed plan can archive the CQ right at the start and then proceed to the rest of the processing.
Viewing variable assignments as sets of facts, relaxed action application is equivalent to taking the set union of the current state with the action effect. This yields a strictly larger set of facts than the real application of the action (“CQ.archiving:notArchived, CQ.archiving:archived” instead of “CQ.archiving:archived”). Satisfaction of preconditions and the goal is then tested by asking for inclusion in that set. Bylander (1994) proved that, within the relaxation, plan existence can be decided in polynomial time. He also proved, however, that optimal relaxed planning, i.e., finding the length of a shortest possible relaxed plan, is still NP-hard. Therefore, the heuristics used by practical planners approximate that length. Specifically, the FF heuristic we build on herein computes some not necessarily optimal relaxed plan. The algorithm doing so consists of two phases. First, it builds a relaxed planning graph (RPG) to approximate forward reachability. Then it extracts a relaxed plan from the RPG.
Figure 5 shows how an RPG is computed in our planner. The algorithm gets the state $s$ as well as the remaining non-deterministic actions, $A_{av}^{nd}$. It then determinizes the latter actions, inserting each of their possible outcomes as an individual new deterministic action into the new action set $A'$. The following loop is a simple fixed point operation over sets of facts. The initial set $F_0$ is equal to the state $s$ whose goal distance shall be estimated. Each loop iteration then increments $F_t$ with the effects of all actions whose preconditions have been reached. In case all goals are reached, the algorithm stops with success and returns the iteration index $t$. If a fixed point occurs before that happens, the algorithm returns $\infty$.
procedure RPG
input SAM planning task \((X, A, I, G)\) with \(A = A^d \cup A^{nd}\), state \(s\), available non-deterministic actions \(A^{nd}_{av}\)
output Number of relaxed parallel steps needed to reach the goal, or \(\infty\)
\(A' := A^d \cup \{(pre_a, \{eff_a\}) \mid a \in A^{nd}_{av}, eff_a \in E_a\}\)
\(F_0 := s\), \(t := 0\)
while \(G \not\subseteq F_t\) do
\(A'_t := \{a \in A' \mid pre_a \subseteq F_t\}\)
\(F_{t+1} := F_t \cup \bigcup_{a \in A'_t} eff_a\)
if \(F_{t+1} = F_t\) then return \(\infty\) endif
\(t := t + 1\)
endwhile
return \(t\)
Figure 5: Pseudo-code for building a relaxed planning graph (RPG).
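As a concrete sketch (ours, using fact strings rather than SAM variable/value pairs), the fixed-point loop of Figure 5 over a determinized action set \(A'\) reads:

```python
def rpg_level(s, determinized, goal):
    """Relaxed planning graph: facts only accumulate, never disappear.
    determinized: list of (pre, eff) fact-set pairs (the set A' of Figure 5).
    Returns the first layer t with goal ⊆ F_t, or infinity at a fixed point."""
    F = set(s)
    t = 0
    while not goal <= F:
        F_next = set(F)
        for pre, eff in determinized:
            if pre <= F:          # relaxed applicability: inclusion in F_t
                F_next |= eff     # relaxed application: set union
        if F_next == F:
            return float("inf")   # fixed point without the goal: no relaxed plan
        F = F_next
        t += 1
    return t
```

For a chain of two actions A → B → C this returns 2; with an unreachable goal it returns \(\infty\), which is exactly the dead-end signal exploited in Section 5.2.2.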
If the RPG returns \(t < \infty\), the heuristic function algorithm enters its second phase, relaxed plan extraction. This is a straightforward backchaining procedure selecting supporting actions for the goals, and then iteratively for the supporting actions’ preconditions. The backchaining makes sure to select only feasible supporters by exploiting the reachability information encoded in the sets \(F_t\). Note here that \(t\) itself is not a good heuristic estimator because it counts parallel action applications – we could make transactions on 1000 BOs in parallel and still count this as a single step.
Consider our simplified example from Figure 4. For the root node “Comp, Cons; CheckComp, CheckCons”, the relaxed plan returned will be (“Check CQ Completeness+”, “Check CQ Consistency+”), where the superscript “+” indicates that these are actions from the determinized set \(A'\) in Figure 5, choosing the positive outcome of each of these actions. The heuristic value returned is 2, as in Figure 4. Indeed, all the heuristic values from Figure 4 are as would be returned by FF’s heuristic. In particular, if “-Comp”, i.e., “CQ.completeness:notComplete”, holds in a state, but the action “Check CQ Completeness” is no longer available, then the heuristic value returned is \(\infty\) because no action in \(A'\) can achieve the goal “Comp”, and thus the RPG fixed point does not contain the goal. Similarly if “-Cons” holds in a state but “Check CQ Consistency” is no longer available.
### 5.2.2 Detecting Failed Nodes
It follows directly from, e.g., the results of Hoffmann and Nebel (2001), that the RPG stops with success iff there exists a relaxed plan for the task \((X, A', s, G)\). From this, we easily get the following result which is relevant for us:
**Proposition 3 (RPG Dead-End Detection in SAM is Sound)** Let \((X, A, I, G)\) be a SAM planning task with \(A = A^d \cup A^{nd}\). Let \(s\) be a state, and let \(A^{nd}_{av} \subseteq A^{nd}\) be a set of non-deterministic actions. If the RPG run on these inputs returns \(\infty\), then there exists no weak SAM solution for \((s, A^{nd}_{av})\).
To see this, note that the action set \(A'\) of Figure 5 is, from the perspective of plan existence, an over-approximation of the actual action set \(A^d \cup A^{nd}_{av}\) that we have available. \(A'\) allows us to choose, for any non-deterministic action, the outcome that we want. Thus, from a plan using \(A^d \cup A^{nd}_{av}\) we can trivially construct a plan using \(A'\). So if no plan using
$A'$ exists then neither does a plan using $A^d \cup A_{av}^{nd}$. From here it suffices to see that non-existence of a relaxed plan (based on $A'$) implies non-existence of a real plan (based on $A'$). That is obvious, concluding the argument.
It is of course a very strong simplification to act as if one could choose the outcomes of non-deterministic actions. Part of our motivation for doing so is to demonstrate that it is not necessary, at least in this application context, to dramatically enhance off-the-shelf planning techniques. The simplistic approach just presented suffices to obtain good performance. This is particularly true regarding the ability to detect dead-ends. We experimented with a total of 548987 planning instances based on SAM. Of these (within limited time/memory) we found a weak plan for 441884 instances. Around half of the actions in these plans are non-deterministic, and these typically yield failed nodes in the plan. For every one of these failed nodes, in every one of the 441884 solved instances, the RPG returned $\infty$.
### 5.2.3 Helpful Actions Pruning
We also adopt FF’s *helpful actions pruning*. Aside from the goal distance estimate, the relaxed plan can be used to determine a most promising subset $H(s)$ – the helpful actions – of the actions applicable to the evaluated state $s$. Essentially, $H(s)$ consists of the actions that are applicable to $s$ and that are contained in the relaxed plan computed as described above in Section 5.2.1.\footnote{FF’s definition of $H(s)$ is a little more complicated, adding also some actions that were not selected for the relaxed plan but achieve a relevant sub-goal. We omit this for brevity. Recent variants of helpful actions pruning, for different heuristic functions like the causal graph heuristic (Helmert, 2006), do not make such additions, selecting $H(s)$ based on membership in abstract solutions only.} This action subset is used as a pruning method simply by restricting, during search, the expansion of state $s$ to consider only the actions $H(s)$. This kind of heuristic action pruning is of paramount importance for planner performance (Hoffmann & Nebel, 2001; Richter & Helmert, 2009).
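In its simplest form – the variant described here, ignoring FF's additional sub-goal achievers – the computation of \(H(s)\) amounts to an intersection. The sketch below is illustrative; the names are ours, not the planner's actual API.

```python
def helpful_actions(applicable, relaxed_plan_first_step):
    """H(s): the applicable actions that the relaxed plan applies in its first
    step. During search, expansion of s is restricted to this subset."""
    return [a for a in applicable if a in relaxed_plan_first_step]
```

For example, with three applicable actions of which the relaxed plan starts with two, only those two are expanded; all others are pruned away.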
In the SAM setting, one important aspect of FF’s helpful actions pruning is that it is accurate enough to distinguish relevant BOs from irrelevant ones. That is to say, if a BO is not mentioned in the goal, then no action pertaining to it will ever be considered to be helpful. This is simply because, as pointed out previously, SAM currently does not model cross-BO interactions. So if a BO $Y$ is not in the goal then relaxed plan extraction will never create any sub-goals pertaining to $Y$.
The obvious – and well-known – caveat of helpful actions pruning is that it does not preserve completeness. $H(s)$ may not contain any of the actions that actually start a plan from $s$. If that happens, then search may stop unsuccessfully even though a plan exists. This pertains to classical planning just as it pertains to AO* and SAM-AO* as used herein.
Importantly, helpful actions pruning in SAM-AO*, i.e., for weak SAM planning as per Definition 6, has another more subtle caveat: *it does not preserve soundness*. Consider again the example where we have a variable $x$ with values $A, B, C$, the initial state is $A$, the goal is $C$, and we have three actions of which action $a_1$ has precondition $A$ and two possible outcomes $B$ and $C$, $a_2$ has precondition $B$ and outcome $A$, and $a_3$ has precondition $A$ and outcome $C$. Say the search has applied $a_1$. Say that $N_s := (s, A_{av}^{nd})$ is the node corresponding to $a_1$’s unfavorable outcome $B$. The only way to complete $a_1$ into a plan is to attach $a_2, a_3$ to $N_s$. Presume that helpful actions pruning, at the node $N_s$, removes $a_2$. Then $N_s$ is marked as failed, and we wrongly conclude that $a_1$ on its own is a plan.
Using helpful actions pruning, one may incorrectly mark a node $N_s$ as failed. If such $N_s$ is a leaf in a weak plan $T$, marked as failed even though it is solvable by action tree $T'$, then $T$ is not a valid plan. This can be fixed, in a plan-correction post-process, by attaching $T'$ to $N_s$, where $T'$ is found by running SAM-AO* without helpful actions pruning on $N_s$. We did not implement such a post-process because, according to our experiments, it is unnecessary in practice: as discussed at the end of the previous sub-section, all failed nodes $N_s$ in our 441884 weak plans have heuristic value $\infty$, and are thus proved to be, indeed, unsolvable.
## 6. Experiments
We will describe our prototype at SAP in the next section. In what follows, we evaluate our planning techniques in detail from a scientific point of view. Our experiments are aimed at understanding three issues:
(1) What is the applicability of strong respectively weak planning in SAM?
(2) Is the runtime performance of our planner sufficient for the envisioned application?
(3) How interesting is SAM as a planning benchmark?
We first explain the experimental setup. We then describe our experiments with FF for strong plans and for weak plans, and we summarize our findings with blind search. While these experiments consider instances pertaining to a single BO, we finally examine what happens when scaling the number of relevant BOs.
### 6.1 Experiment Setup
All experiments were run on a 1.8 GHz CPU, with a 10 minute time and 0.5 GB memory cut-off. Our planner is implemented in C as a modification of FF-v2.3. The source code, the problem generator used in our experiments, and the anonymized PDDL encoding of SAM are available for download at http://www.loria.fr/~hoffmanj/SAP-PDDL.zip. Our SAM-AO* implementation is modified from the AO* implementation in Contingent-FF (Hoffmann & Brafman, 2005). Like that planner, we weight heuristic values by a factor of 5 (we did not play with this parameter).
We focus on the case where the initial state is set as specified in SAM. Thus a SAM planning instance in what follows is identified by its goal: a subset of variable values. The number of such instances is finite, but enormous; just for choosing the subset of variables to be constrained we have $2^{110}$ options. In what follows, we mostly consider goals all of whose variables belong to a single BO. This is sensible because, as previously stated, SAM currently does not reflect interactions across BOs. We implemented an instance generator that creates instance subsets characterized by the number $|G|$ of variables constrained in the goal (this parameter is relevant for business users, and as we shall see it also heavily influences planner performance). For given $|G|$, the generator enumerates all possible variable tuples, and randomly samples for each of them a given number $S$ of value tuples. The maximum number of variables of any BO, in the current version of SAM, is 15. We created all possible instances for $|G| = 1, 2, 3, 13, 14, 15$ where the number of instances is up to around 50000. For all other values of $|G|$, we chose a value for $S$ so that we got around 50000 instances each. The total number of instances we generated is 548987.
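The generator's sampling scheme can be sketched as follows. This is our illustration, not the released generator; `bo_vars` is a hypothetical stand-in for one BO's variable domains.

```python
import itertools
import random

def generate_goals(bo_vars, k, sample_size):
    """Enumerate all k-variable tuples of one BO; for each tuple, sample up to
    sample_size value assignments, each serving as one goal (one instance)."""
    for vtuple in itertools.combinations(sorted(bo_vars), k):
        assignments = list(itertools.product(*(bo_vars[v] for v in vtuple)))
        for vals in random.sample(assignments, min(sample_size, len(assignments))):
            yield dict(zip(vtuple, vals))
```

With three Boolean variables and $k = 2$, this yields $\binom{3}{2} \cdot 4 = 12$ goals when the sample size is large enough to cover all value tuples.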
Since SAM currently does not model cross-BO interactions, for a single-BO goal we can in principle supply to the planner only those actions pertaining to that BO. We will henceforth refer to this option as using the *BO-relevant* actions. Contrasting with this, the *full* actions option supplies to the planner all actions (no matter which BO they pertain to). We will use the BO-relevant actions in some experiments where we wish to enable the planner to prove the planning task to be unsolvable – with the full actions, this is always impossible because the reachable state space is much too vast. In our baseline, however, we use the full actions. The motivation for this is that helpful actions pruning will detect the irrelevant actions anyway (cf. Section 5.2), and in the long term, it is likely that SAM will model cross-BO interactions.
### 6.2 Strong SAM Plans
In our first experiment, we evaluate the performance of strong planning on SAM, i.e., we run FF in a standard AO* tree search forcing all children of AND nodes to be solved. We identify two parameters relevant to the performance of FF: the kind of BO considered, and $|G|$. Figure 6 shows how coverage and state evaluations (number of calls to the heuristic function) depend on these parameters.
Consider first Figure 6 (a). The $x$-axis ranges over BOs, i.e., each data point corresponds to one kind of BO.\footnote{Note that we do not include all 404 BOs. Precisely, we consider 371 of them. The remaining 33 BOs are not interesting for planning: all variable values not true in the initial state are unreachable because these values are set by procedures not encoded in SAM.} The ordering of BOs is by decreasing percentage of solved instances. For each BO, the $y$-axis shows the percentage of solved, unsolved, and failed instances within that BO. The overall message is mixed. On the one hand, coverage is perfect for 194 of the 371 BOs, so for more than half of the BOs all the tested instances have a strong plan which is found by FF. On the other hand, for the other 177 BOs, coverage is rather bad. For 51 of the BOs, not a single instance is solved. On the 126 BOs in between, coverage declines steeply. Importantly, when counting unsolved cases in total (across BOs), it turns out that 88.83% of the instances are unsolved. In other words, **for almost 90% of the tested cases FF’s search space does not contain a strong SAM plan**. Of course, this percentage pertains to the particular distribution of test cases that we used. Still this result indicates that the applicability of strong planning in SAM is quite limited.
Another interesting aspect of Figure 6 (a) is that failed cases are rare: they constitute only 0.8% of the total instance set. That is, due to helpful actions pruning, FF’s search spaces are typically small enough to be exhausted within the given time and memory.
Consider now Figure 6 (b), which shows coverage on the $y$-axis over $|G|$ on the $x$-axis. Again, the message is mixed. On the one hand, with a single goal ($|G| = 1$), 58.85% of the instances are solved with a strong plan. On the other hand, the number of solved cases declines monotonically, and quite steeply, over growing $|G|$. With 2 goals we are at 36.95%, with 4 goals at 29.03%, with 5 goals at 23.86%. For $|G| \geq 10$, the number of solved cases is less than 5%, and for $|G| \geq 13$, the number is less than 1%.
One may wonder at this point whether it is FF’s helpful actions pruning that is responsible for the frequent non-existence of strong plans. The answer is “no”. In a second experiment with strong planning, we ran FF without helpful actions, giving as input only
the BO-relevant actions in order to enable proofs of unsolvability. The result is very clear: the number of solved cases hardly changes at all. The total percentage of solved cases in the previous experiment is 10.38%, the total percentage in the new experiment is 10.36%. This low success rate is due to unsolvability, not to prohibitively large search spaces. In total, 74.1% of the instances are proved unsolvable; FF fails only on 15.54%.
Given the above, the applicability of strong planning in SAM appears limited unless we can restrict attention to BOs with few variables and/or to single-goal planning tasks. From a more general perspective, the best option appears to be to:
(I) Try to find a strong SAM plan (using FF with AO*).
(II) If (I) fails, try to find a weak SAM plan (using FF with SAM-AO*).
In this setting, it is relevant how long we will have to wait for the answer to (I). Figures 6 (c) and (d) provide data for this. We consider the instances where FF terminated regularly (plan found or helpful actions search space exhausted), and we consider performance in terms of the number of evaluated states, i.e., the number of calls to the heuristic function.
The ordering of BOs in Figure 6 (c) is by increasing $y$-value for each curve individually; otherwise the plot would be unreadable. The most striking observation is that, for 351 of the 371 BOs, the maximum number of state evaluations is below 100. For solved instances, this even holds for 369 BOs, i.e., for all but 2 of the BOs. The maximum number of state evaluations done in order to find a plan is 521, taking 0.22 seconds total runtime. The mean behavior is even better, peaking at 9.85 state evaluations. Waiting for a “no” can be more time-consuming, with a peak of 42954 evaluations (110.87 seconds). However, since all the “yes” answers are given very quickly, for practical use in an online business process modeling environment it seems feasible to simply give strong planning one second (or less), and switch to weak planning if that was not successful.
Consider Figure 6 (d). As in (c), we can observe the very low number of state evaluations required for the solved cases. Somewhat surprisingly, there is no conclusive behavior over $|G|$. The reasons are not entirely clear to us. The “UNSOLVED MAX” curve is flat at the top because larger search spaces lead to failure. The discontinuities around $|G| = 12$ are presumably due to BO structure. Few BOs have more than 12 variables, so the variance in the data is higher in this region. For the sharp drops in “UNSOLVED MAX” and “SOLVED MAX”, an additional factor is that there are very few strong plans for such large goals (cf. Figure 6 (b)): those strong plans that do exist are found easily; disproving existence of a strong plan can be easier for larger goals, since this increases the chance that the relaxed plan will identify at least one unsolvable goal.
Summing up our findings regarding issue (1) [applicability of strong vs. weak planning in SAM]: SAM does not admit many strong plans, but those instances that do have them tend to be solved easily by FF.
### 6.3 Weak SAM Plans
We will now see that **weak SAM planning can solve 8 times as many instances as strong SAM planning** – namely around 80% of our test cases. Precisely, of the 548987 instances, 441884 are solved; all but 43 of these are solved by the default configuration of our planner. The average percentage of non-deterministic actions, across all weak plans, is 48.29%; the maximum percentage is 91.67%. Figure 7 shows our results, giving the same four kinds of plots as previously shown for strong planning in Figure 6.
Consider first Figure 7 (a). We see that, now, coverage is perfect for 274 of the 371 kinds of BOs, as opposed to 194 BOs for strong planning. The latter BOs are a subset of the former: wherever strong planning has perfect coverage, the same is true of weak planning. Whereas strong planning has 0 coverage – no instance solved at all – for 51 BOs, we have no such cases here. The minimum coverage is 18.07%, and coverage is below 50% only for 9 BOs. In total, while strong planning solves only 10.38% of the test cases, we now solve 80.48%. That said, we still have 17.12% unsolved cases and 2.4% failed cases, and this gets much worse for some BOs. Per individual BO, the fraction of unsolved instances peaks at 81.92%, and the fraction of failed instances peaks at 14.98%.
Consider now Figure 7 (b). $|G| = 1$ is handled perfectly – 100% coverage as opposed to 58.85% for strong planning – but this is followed by a fairly steady decline as $|G|$ grows. An explanation for the discontinuity at $|G| = 3, 4$ could be that for $|G| = 3$ our experiment
is exhaustive while for $|G| = 4$ we only sample. As above, the higher variance for large $|G|$ can be explained by the much smaller number of BOs in this region.
Figures 7 (c) and (d) provide a deeper look into performance on those instances where FF terminated regularly (plan found or helpful actions search space exhausted). As in Figure 6 (c), the ordering of BOs in Figure 7 (c) is by increasing $y$-value for each curve individually. The number of state evaluations is typically low. The phenomenon is not quite as extreme as shown for strong planning in Figure 6 (c). This is likely because the more generous definition of plans makes it more difficult for FF to prove AND nodes unsolvable, and hence to prune large parts of the search space. In detail, for 350 of the 371 BOs, the maximum number of state evaluations is below 100; for solved instances, this holds for 364 BOs (the corresponding numbers for strong planning are 351 and 369). The maximum number of state evaluations done in order to find a plan is 54386, taking 27.41 seconds total runtime (versus 521 evaluations and 0.22 seconds for strong planning). From the “SOLVED MEAN” curve we see that this maximal case is very exceptional – the mean number of state evaluations per BO peaks
at 47.78. Comparing this to the “UNSOLVED MEAN” curve, we see that a large number of search nodes is, as one would expect, much more typical for unsolved instances.
In Figure 7 (d), we see that the overall behavior of state evaluations over \(|G|\) largely mirrors that of coverage, including the discontinuity at \(|G| = 3, 4\). The most notable exception is the fairly consistent decline of “SOLVED MAX” for \(|G| > 3\). It is unclear to us what the reason for that is.
What is the conclusion regarding the issues (2) [planner performance] and (3) [benchmark challenge] we wish to understand? For issue (2), our results look fairly positive. In particular, consider only the solved instances (weak plan found). As explained above, the number of state evaluations is largely well-behaved. In addition, the heuristic function is quite fast. As stated, the maximum runtime is 27.41 seconds. The second largest runtime is 2.6 seconds, and the third largest runtime is 1.69 seconds; all other plans are found in less than 0.3 seconds. So a practical approach could be to simply apply a small cut-off, such as 0.5 seconds, or perhaps a minute if time is not as critical. This yields a quick step (II) as a follow-up to the similarly quick step (I) determined above for strong planning.
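The resulting two-step strategy can be sketched as follows. The two planner callables are placeholders for FF with AO* and with SAM-AO*, respectively (not FF's actual interface), and the time cut-off machinery discussed above is omitted:

```python
def plan_with_fallback(strong_plan, weak_plan):
    """Try strong planning first; fall back to weak planning if no strong
    plan is found. Both arguments are zero-argument callables returning a
    plan (a list of actions) or None. A real deployment would additionally
    cut strong planning off after roughly one second; that timeout
    machinery is omitted in this sketch."""
    plan = strong_plan()
    if plan is not None:
        return ("strong", plan)
    return ("weak", weak_plan())

# Hypothetical stand-ins: strong planning fails, weak planning succeeds.
kind, plan = plan_with_fallback(lambda: None, lambda: ["a1", "a2"])
```

Since almost all “yes” answers from strong planning arrive within fractions of a second, the fallback is only rarely triggered by a timeout rather than by non-existence of a strong plan.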
What this strategy leaves us with are, in total, 17.12% unsolved instances and 2.4% failed ones. Are those an important benchmark challenge for future research? Answering this question first of all entails finding out whether we can solve these instances when not using helpful actions pruning, and if not, whether they are solvable at all.
We ran FF without helpful actions pruning on the unsolved and failed instances of Figure 7, slightly more than 100000 instances in total. We enabled unsolvability proofs by giving as input only the BO-relevant actions, and we facilitated larger search spaces by increasing the time/memory cut-offs from 10 minutes and 0.5 GB to 30 minutes and 1.5 GB respectively. All failed instances are still failed in this new configuration. Of the previously unsolved instances, 47.43% are failed, and 52.52% are proved unsolvable.\(^{10}\) Only 0.05% – 43 instances – are now solved (the largest plan contains 140 actions). The influence of \(|G|\) and the kind of BO is similar to what we have seen. The number of state evaluations is vastly higher than before, with a mean of 10996.72 and a maximum of 289484 for *solved* instances. But the heuristic is extremely fast with only the BO-relevant actions, and hence finding a plan takes a mean runtime of only 0.12 seconds. The three largest runtimes are 2.94 seconds, 0.7 seconds, and 0.53 seconds; all other plans are found in less than 0.15 seconds. Thus, with the above, all but 6 of the 441884 weak plans in this experiment are found in less than 0.5 seconds.
All in all, changing the planner configuration achieves some progress on the instances not solved by the default configuration, and it appears that many of them are unsolvable anyway. But certainly they are a challenge for further research.
---

\(^{10}\) Unsolvability of certain goal value combinations, i.e., partial assignments to a BO’s variables, occurs naturally since these variables are not independent. For example, some of the unsolvable instances required a BO to simultaneously satisfy “BO.approval:In Approval” and “BO.release:Released”. In our running example, this kind of situation arises, e.g., when requiring “CQ.approval:notChecked” together with “CQ.acceptance:accepted”.

### 6.4 Blind Search

To explore to what extent heuristic techniques are actually required to successfully deal with this domain – and thus to what extent the domain constitutes an interesting benchmark for such techniques – we ran an experiment with blind search. We used AO* with a trivial heuristic that returns 1 on non-goal states and 0 on goal states. Since weak planning is much more applicable than strong planning in SAM, we used the weak planning semantics. We provided as input only the BO-relevant actions – otherwise, blind forward search is trivially hopeless due to the enormous branching factor.
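The trivial heuristic can be stated in a few lines; here states and goals are modeled as variable/value dictionaries purely for illustration:

```python
def blind_heuristic(state, goal):
    """Return 0 iff every goal variable already has its goal value,
    else 1 -- the trivial heuristic used for blind search."""
    return 0 if all(state.get(v) == val for v, val in goal.items()) else 1

goal = {"CQ.approval": "approved"}
h_goal = blind_heuristic({"CQ.approval": "approved"}, goal)      # goal state
h_nongoal = blind_heuristic({"CQ.approval": "notChecked"}, goal) # non-goal
```

Such a heuristic provides no search guidance at all, so AO* degenerates to an uninformed exploration of the state space.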
For the sake of conciseness, we do not discuss the results in detail here. In summary, blind search is quite hopeless. It runs out of time or memory on 79.36% of our test instances. It solves 19.04% of them – as opposed to the 80.48% solved based on FF. Interestingly, due to FF’s ability to detect dead-ends via relaxed planning graphs, blind search is worse than heuristic search even at proving unsolvability: in total, this happens in 5.99% of the cases using FF, and in 1.60% of the cases using blind search.
That said, for BOs that have only few status variables and/or status values, and for goals of size 1 or 2, blind search fares reasonably well, though not as well as heuristic search. The interesting benchmarks lie outside this region – as do more than 90% of our test instances.
### 6.5 Scaling Across BOs
FF does not scale gracefully to planning tasks with several BOs. We selected, for each BO, one solved instance $m(BO)$ with maximum number of state evaluations. Since that will be of interest, we include here all 404 BOs, i.e., also those 33 BOs all of whose planning goals are trivial (either unsolvable or true in the BO’s initial state already); $m(BO)$ is 1 for these BOs. We generated 404 planning tasks $COM_k$, for $1 \leq k \leq 404$, combining the goals $m(BO)$ for all BOs up to number $k$, in an arbitrary ordering of the BOs. We compared the data thus obtained against data we refer to as $ACC_k$, obtained by summing up the state evaluations when running FF in turn on each of the individual goals $m(BO)$. This comparison is valid since the BOs are mutually independent, and a plan for $COM_k$ can be obtained as the union of the plans for the individual goals. Figure 8 shows the data.

**Figure 8:** *Weak planning with FF when scaling the number of relevant BOs.* State evaluations plotted over the number of BOs for which a goal is specified. “COMBINED” means that FF is run on the conjunction of all these goals ($COM_k$ in the text). “ACCUMULATED-INDIVIDUAL” gives the sum of the data when running FF individually on each single goal ($ACC_k$ in the text).
As Figure 8 shows quite clearly, FF does not scale gracefully to planning tasks with several BOs.\footnote{The “vertical” part of the plot for $ACC_k$ is because, as we noticed before, the globally maximal number of state evaluations, 54386 for BO 347, is an extreme outlier.} The largest instance solved is $k = 103$, with 38665 evaluations. The sum of state evaluations when solving the 103 sub-tasks individually is 529. A possible explanation is that, as goals for additional BOs are added, more actions become helpful. The increased number of nodes may multiply over the search depth. Interestingly, the disproportionate search increase occurs even when the newly added goal is trivial. For example, $ACC_{98}$ has just 1 more state evaluation than $ACC_{97}$, while for $COM_{98}$ and $COM_{97}$ that difference amounts to 251 state evaluations. On the other hand, for $k \leq 14$ BOs, $COM_k$ is still below 1000; the difficulties arise only when $k$ becomes quite large.
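The construction of $COM_k$ and $ACC_k$ can be sketched as follows. The evaluation-count function here is a toy stand-in for running FF, chosen so that combined tasks grow superlinearly; it mirrors, but does not reproduce, the observed behavior:

```python
def build_com_acc(bos, goal_of, evals):
    """For each k: COM_k = one planner run on the union of the first k
    goals; ACC_k = the sum of evaluations over k individual runs."""
    com, acc = [], []
    for k in range(1, len(bos) + 1):
        combined_goal = {}
        for bo in bos[:k]:
            combined_goal.update(goal_of[bo])  # BOs are mutually independent
        com.append(evals(combined_goal))
        acc.append(sum(evals(goal_of[bo]) for bo in bos[:k]))
    return com, acc

# Toy stand-in for FF's evaluation count: quadratic in the number of goal
# variables, so COM_k grows disproportionately while ACC_k grows linearly.
bos = ["CQ", "PO", "SO"]
goal_of = {"CQ": {"CQ.s": "a"}, "PO": {"PO.s": "b"}, "SO": {"SO.s": "c"}}
com, acc = build_com_acc(bos, goal_of, evals=lambda g: len(g) ** 2)
```

The comparison is valid because a plan for $COM_k$ can be obtained as the union of the plans for the $k$ individual goals.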
## 7. Application Prototypes at SAP
We have integrated our technology into two BPM modeling environments. We next briefly explain how we transform the planner output into a BPM format. We then outline the positioning of our prototypes at SAP, and illustrate the business user view of our technology. We close with a few words on how the prototypes have been evaluated SAP-internally.
### 7.1 Transforming Plans to Business Processes
Business users expect to get a process model in a human-readable BPM workflow format. We use BPMN (Object Management Group, 2008). The BPMN process model corresponding to Figure 2 is depicted in Figure 9. This process model makes use of alternative (“x”) and parallel (“+”) execution, unifies redundant sub-trees (“Submit CQ” … “Archive CQ”), removes failed outcomes, and highlights in red those nodes that may have such outcomes. These changes are obtained using the following simple post-process to planning.
**Figure 9:** *Final BPM process created for the running example.*
First, we remove each failed node together with the edge leading to it. In our running example, Figure 2, this concerns the “N” branches of “Check CQ Completeness” and “Check CQ Consistency”, and the “notGranted” branch of “Decide CQ Approval”. Next, we separate property checking from directing the control flow. We do this for each node that has more than 1 child. We replace each such node with a process step bearing the same name, followed by an XOR split (BPMN control nodes giving a choice to execution). In the example, this concerns “Check CQ Approval Status”. We then re-unite XOR branches using XOR joins (BPMN control nodes leading alternative executions back together), avoiding redundancies in the process by finding pairs of nodes that root identical sub-trees. In Figure 2, this pertains to the two occurrences of “Submit CQ”. We introduce a new XOR join node taking the incoming edges of the found node pair. We attach one copy of the common sub-tree below that XOR join. We insert a BPMN start node, join all leaves via a new XOR join, and attach a BPMN end node as the new (only) leaf of the plan. Finally, we introduce parallelism by finding non-interacting sub-sequences of actions in between the XOR splits and joins that were introduced previously. (This is a heuristic notion of parallelism, that does not guarantee to detect all possible parallelizations in the process.)
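Two of the post-processing steps – pruning failed branches and detecting pairs of nodes that root identical sub-trees – can be sketched as follows. The tree encoding and the "FAIL" marker are purely illustrative, not our actual implementation:

```python
def prune_failed(node):
    """node = (name, children); drop sub-trees rooted at failed outcomes,
    marked here (purely for illustration) by a 'FAIL' name prefix."""
    name, children = node
    kept = [prune_failed(c) for c in children if not c[0].startswith("FAIL")]
    return (name, kept)

def duplicated_subtrees(node, seen=None):
    """Collect canonical forms of sub-trees occurring more than once;
    such pairs are candidates for re-uniting via an XOR join."""
    if seen is None:
        seen = {}
    key = repr(node)  # canonical form of the sub-tree
    seen[key] = seen.get(key, 0) + 1
    for child in node[1]:
        duplicated_subtrees(child, seen)
    return [k for k, n in seen.items() if n > 1]

tree = ("Check CQ Approval Status",
        [("Submit CQ", [("Archive CQ", [])]),
         ("FAIL:notGranted", []),
         ("Approve CQ", [("Submit CQ", [("Archive CQ", [])])])])
pruned = prune_failed(tree)
dups = duplicated_subtrees(pruned)  # both "Submit CQ" sub-trees match
```

In the actual post-process, each detected pair is replaced by a new XOR join carrying one copy of the common sub-tree, as described above.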
### 7.2 Positioning of our Prototypes at SAP
As part of the effort to transfer our research results to the SAP product development organization, we integrated our planning approach into two BPM prototypes.
The first one, called *Maestro for BPMN* (Born, Hoffmann, Kaczmarek, Kowalkiewicz, Markovic, Scicluna, Weber, & Zhou, 2008, 2009), is a BPMN process modeling tool developed by SAP Research primarily for the purpose of research and early prototyping. We focus in what follows on our other prototype, which is implemented in the context of the commercial SAP NetWeaver platform (SAP, 2010). NetWeaver is one of the most prominent software platforms at SAP, and is the central platform for SAP’s service-oriented architecture. It encompasses all the functionalities required to run an SAP application. Our prototype is implemented as a research extension of *SAP NetWeaver BPM*.
SAP NetWeaver BPM consists of different parts for process modeling and execution. Our planning functionality is integrated into the *SAP NetWeaver BPM Process Composer*, NetWeaver’s BPM modeling environment targeted at the creation of new processes. The process modeling is done in BPMN notation; that notation is given an execution semantics by NetWeaver BPM’s process execution engine.
### 7.3 Demonstration of our NetWeaver BPM Prototype
We briefly illustrate what using our planning functionality looks like to business users. We consider application scenario (C) as described in Section 3.3, where the business user designs a new process from scratch. For application scenarios (A) and (B) from Section 3.3 – using planning during SAM-based development, and generating a process template at the beginning of the modeling activity, respectively – the same interface can be used.
In designing a new process, the user chooses the “atomic” process steps according to his/her intuition. At IT level, this is no more than drawing a box and inserting a descriptive text. To align this intuitive design with the actual IT infrastructure the process should run on, our planner allows the user to check how the atomic steps can be implemented based on existing
transactions. Say the user has designed the process model shown in Figure 10. Amongst others, the process contains a step “Release Purchase Order”, whose intention is to order the purchase of a special part required to satisfy the customer demand, after the customer quote has been accepted. The user now wishes to inflate “Release Purchase Order” into an actual IT-level process fragment having the intended meaning. A double click on the step opens the shown interface for entering the planning initial state and goal, i.e., the desired status variable value changes, associated with this step. The status variable values are chosen via drop-down menus, selecting “initial conditions” on the left-hand side and “goals” on the right-hand side. In the present case, the goal is “PO.Status:Ordered”, and the initial condition is “PO.Status:Created” because the purchase order (PO) was already created beforehand and shall now be released.
Once the status variable values are entered, the user clicks on “Call composer”. This invokes the planner, using the specified initial condition/goal to define the SAM planning task.\(^{12}\)
The returned plan is transformed to BPMN, and is inserted into the process model in place of the atomic step that the user had been working on; see the illustration in Figure 11. In the shown case, the plan is a process snippet containing five atomic transactions with one XOR split and two possibly failed outcomes, showing that releasing the PO entails first processing its data, then checking this data, and then invoking an approval process similar to that of our illustrative example;\(^{13}\) finally, the PO is ordered.
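The dialog interaction can be thought of as assembling a small planning request from the drop-down selections. The following sketch uses an illustrative data layout and function name, not NetWeaver's actual API:

```python
def make_planning_task(initial, goal):
    """Assemble a planning task from 'BO.variable:value' strings as they
    appear in the drop-down menus; the dict layout is illustrative."""
    def parse(selection):
        var, val = selection.rsplit(":", 1)  # split off the value
        return {var: val}
    return {"init": parse(initial), "goal": parse(goal)}

# The selections from our running example dialog.
task = make_planning_task("PO.Status:Created", "PO.Status:Ordered")
```

The planner is then invoked on this task, and the returned plan is rendered back into BPMN as described.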
Note here that, while cross-BO interactions are not part of the SAM model, the planner helps to create a process that spans multiple BOs, and that indeed ties together functionality that cuts across departmental boundaries. The process snippet shown in Figure 11 invokes the purchase of goods from a supplier as soon as a customer accepts a quote. This is relevant for companies that sell highly customized goods (e.g., special-purpose ships), and who in turn must procure customized parts from their suppliers (e.g., ship engines). Such processing requires combining services from BOs belonging to Customer Relationship Management (CRM) and Supplier Relationship Management (SRM), and hence from the two “opposite ends” of the system (and company). The designer of the process will typically be intimately familiar with only one of these two, making it especially helpful to be able to call the planner to obtain information about the other one.

---

\(^{12}\) In the current implementation of the prototype, if the value of a variable $x$ is not specified in the “initial condition” given by the user, then the planner does not make use of $x$, i.e., all preconditions on $x$ are assumed to be false until $x$ is set by an action effect. One could of course easily make this more comfortable, by assuming SAM’s initial values as a default, and by propagating the effects of earlier SAM transactions (on the same BO) along the process structure. First investigations into the latter have been performed (May & Weber, 2008).

\(^{13}\) Indeed, approval is one of the design patterns that SAP applied throughout SAM. The actual pattern is more complicated than our illustrative version here.
### 7.4 Evaluation of our Prototype at SAP
Our prototype was part of a research transfer project with the NetWeaver BPM group, and was shaped during several feedback rounds with developers and architects. The evaluation within SAP consisted mainly of prototype demonstrations at various SAP events. For example, an early version of the tool was demonstrated at the 2008 global meeting of SAP Research “BPM and SI” (SI stands for “Semantic Integration”), which included participants from SAP partners and development. The demonstrations received positive feedback from SAP software architects. The perception was that this functionality would significantly strengthen the link between SAP BPM and the underlying software infrastructure, making it much easier to access the provided services effectively. Most critical comments focused on some choices in the user interface design, such as “non-logicians will not understand the meaning of the NOT symbol”, or “the list of several hundred BOs is too long for a drop-down box”.
We do not have customer evaluation data, and it is not foreseeable when we (or anyone else) will be able to obtain such data. When our first prototype became available, a partner organization committed to perform a pilot customer evaluation. That commitment was retracted in the context of the 2008/2009 financial crisis. In any case, real customer evaluation data may be impossible to come by, let alone publish, for privacy reasons.
There are also some issues arising from the positioning of our prototype inside the SAP software architecture. The NetWeaver process execution engine currently does not connect to the actual IT services that implement SAM’s “actions”. While SAM is in productive use within Business ByDesign, NetWeaver BPM is built on a different technology stack. A connection could in principle be established relatively easily – after all, service-orientation is intended to do exactly this sort of thing – however, this has not yet been on SAP’s agenda, and it involves some SAP-internal political issues. The fact that the main drivers of the presented technology – the authors of this paper – have since left the company of course does not help to remedy this problem.
## 8. Related Work
The basic idea explored in this paper – using planning systems to help business experts to come up with processes close to the IT infrastructure – has been around for quite a long time. For example, it was mentioned more than a decade ago by Jonathan et al. (1999). It is also discussed in the 2003 roadmap of the PLANET Network of Excellence in AI Planning (Biundo et al., 2003). More recently, Rodriguez-Moreno et al. (2007) implemented the idea in the SHAMASH system. SHAMASH is a knowledge-based BPM tool targeted at helping with the engineering and optimization of process models (Aler, Borrajo, Camacho, & Sierra-Alonso, 2002). The tool includes, amongst other things, user-friendly interfaces allowing
users to conveniently annotate processes with rich context information, in particular in the form of “rules” which roughly correspond to planning actions. These rules (and other information) then form the basis for translation into PDDL, and planning for creation of new process models.
The largest body of related work was performed during the last decade under a different name, *semantic web service composition* (*SWSC*), in the context of the Semantic Web Community (e.g., Ponnekanti & Fox, 2002; Narayanan & McIlraith, 2002; Srivastava, 2002; Constantinescu, Faltings, & Binder, 2004; Agarwal et al., 2005; Sirin, Parsia, Wu, Hendler, & Nau, 2004; Sirin et al., 2006; Meyer & Weske, 2006; Liu, Ranganathan, & Riabov, 2007). In a nutshell, the idea in SWSC is (1) to annotate web services with some declarative abstract explanation of their functionality, and (2) to exploit these “semantic” annotations to automatically combine web services for achieving a more complex functionality. While SWSC terminology differs from what we use in this paper, the idea is basically the same (although most SWSC works do not address BPM specifically).
The key distinguishing feature of the present work is our approach to obtaining the planning input (the “semantic annotations”). Ours is the first attempt to address the planning/SWSC problem based on SAM, and more generally based on any pre-existing model at all. Since modeling is costly (Kambhampati, 2007; Rodriguez-Moreno et al., 2007), this shift of focus gets us around one of the major open problems in the area. The modeling interfaces in SHAMASH (Rodriguez-Moreno et al., 2007), and some related works attempting to support model creation (Gonzalez-Ferrer, Fernandez-Olivares, & Castillo, 2009; Cresswell, McCluskey, & West, 2009, 2010), address the same problem, but in very different ways and to a less radical extent. Whereas these works attempt to ease the modeling overhead, our re-use of SAM actually removes that overhead completely.
Having said that, of course there are relations between SAM planning and previous work, at the technical level. In particular, the planning and SWSC literature contains a multitude of works dealing with actions that, like SAM’s disjunctive effect actions, have more than one possible outcome. Our semantics for such actions, as detailed already in Section 4.2, is a straightforward mixture of two wide-spread notions in planning: observation actions, where one out of a list of possible observations is distinguished, and non-deterministic actions, where one out of a list of possible effects occurs (e.g., Weld et al., 1998; Smith & Weld, 1999; Bonet & Geffner, 2000; Cimatti et al., 2003; Bryce & Kambhampati, 2004; Hoffmann & Brafman, 2005; Bonet & Givan, 2006; Bryce, Kambhampati, & Smith, 2006; Bryce & Buffet, 2008; Palacios & Geffner, 2009).
A prominent line of research in web service composition, known as “the Roman model” (e.g., Berardi, Calvanese, De Giacomo, Lenzerini, & Mecella, 2003, 2005; De Giacomo & Sardiña, 2007; Sardiña, Patrizi, & De Giacomo, 2008; Calvanese, De Giacomo, Lenzerini, Mecella, & Patrizi, 2008), also deals with a notion of “non-determinism” in the component web services; however, the framework is very different from ours. The web services in the Roman model are *stateful*. That is, each service has a set of possible own/internal states, and the service provides a set of *operations* to the outside world, which are responsible for transitions in the service’s internal state. The composition task is to create a scheduler (a function choosing one service for each operation demanded) interacting with the component services in a way such that they implement a desired goal transition system. Similarly, the web service composition techniques developed by Marco Pistore and his co-workers (e.g.,
Pistore, Marconi, Bertoli, & Traverso, 2005; Bertoli, Pistore, & Traverso, 2006, 2010) deal with this form of non-determinism in stateful component services formalized as transition systems. In their work, the composition task is to create a controller transition system such that the overall (controlled) behavior satisfies a planning-like goal (expressed in the EAGLE language, cf. below). In that latter aspect – attempting to satisfy a planning goal – their framework is slightly closer to ours than the Roman model.
The main distinguishing feature of our formalism is its notion of “weak SAM plans”, allowing failed action outcomes but only if they are proved unsolvable, and only if at least one outcome of each action is successful. Some other works have also proposed notions of plans that do not guarantee to achieve the goal in all cases, and some notions of more complex goals can be used to achieve similar effects. We now briefly discuss the notions closest to our own approach.
The notions of weak and strong plans, as discussed in Section 4.2, were first introduced by Cimatti, Giunchiglia, Giunchiglia, and Traverso (1997) and Cimatti, Roveri, and Traverso (1998b), respectively. Strong cyclic plans were first introduced by Cimatti, Roveri, and Traverso (1998a). That notion is orthogonal to weak SAM plans in that neither implies the other. For example, if a “bad” action outcome invalidates, as a side effect, the preconditions of all actions in the task, then that action may form part of a weak SAM plan, but never of a strong cyclic plan. Vice versa, strong cyclic plans have a more general structure, in particular allowing non-deterministic actions to appear more than once in an execution.
Pistore and Traverso (2001) generalize weak, strong, and strong cyclic plans by handling goals taking the form of CTL formulas. However, as pointed out by Dal Lago et al. (2002), such goals are unable to express that *the plan should try to achieve the goal, and give up only if that is not possible*. Dal Lago et al. design the goal language EAGLE which addresses (amongst others) this shortcoming. EAGLE features a variety of goal operators that can be flexibly combined to form goal expressions. One such expression is “TryReach $G_1$ Fail $G_2$”, where $G_1$ and $G_2$ are alternative goals. The intuition is that the plan should try to achieve $G_1$, and resort to $G_2$ if reaching $G_1$ has become impossible. More precisely, a plan $T$ for such an EAGLE goal is optimal if, for every state $s$ it traverses: (1) $T$ is a strong plan for $G_1 \lor G_2$; (2) if there exists a strong plan for $G_1$ from $s$, then $T$ is such a strong plan; and (3) if there exists a weak plan for $G_1$ from $s$, then $T$ is such a weak plan. Applying this to our context, say we restrict plans to execute each non-deterministic action at most once. It is easy to see that, within this space of plans, every weak SAM plan is optimal for the EAGLE goal “TryReach $G$ Fail TRUE”: weak SAM plans mark $s$ as failed only if reaching $G$ from $s$ is impossible. However, not every action tree $T$ that is optimal for “TryReach $G$ Fail TRUE” is a weak SAM plan. That is because (2) and (3) do not force every action to have at least one solved outcome. In tasks for which no weak SAM plan exists, *any* action tree (e.g., the empty tree) is optimal for “TryReach $G$ Fail TRUE”. In tasks with a weak SAM plan, “TryReach $G$ Fail TRUE” forces each solvable action outcome to provide a solution, but imposes no constraints below failed outcomes (which may thus be continued by arbitrarily complex sub-plans).
Shaparau et al. (2006) define a framework that has a similar effect in our context. They consider contingent planning in the presence of a linear preference order over alternative goals. Action trees $T$ are plans if they achieve at least one goal in every leaf, i.e., they are strong plans for the disjunction of the goals. Plan $T$ is at least as good as plan $T'$ if, in every
state common to both, the best possible outcome achievable using $T$ is at least as good as that achievable using $T'$. $T$ is optimal if it is at least as good as all other plans. Given this, like for the EAGLE goal “TryReach $G$ Fail TRUE” discussed above, every weak SAM plan is optimal for the goal preference “$G$, TRUE”, but the converse does not hold because, in unsolvable tasks and below unsolvable outcomes in solvable tasks, “$G$, TRUE” permits the plan to do anything.
Mediratta and Srivastava (2006) define a framework also based on contingent planning, but where the user provides as additional input a number $K$. Then, (1) a “plan” is any tree $T$ with $\geq K$ leaves achieving the goal; and (2) if such $T$ does not exist, then a “plan” is any tree whose number of such leaves is maximal. Due to (1), failed nodes are not necessarily unsolvable (the plan may simply stop once it reached $K$). Due to (2), even a task where the goal cannot be reached at all, i.e., no matter what the action outcomes are we cannot achieve the goal, has a “plan”. In addition, in our application context there is no sensible way, for the human modeler, to choose a meaningful value for $K$.
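The two-case definition can be phrased as a simple predicate over action trees. A hedged sketch (the nested-dict tree encoding and all names are ours, not Mediratta and Srivastava's); note how case (2) makes even a tree with zero goal leaves a "plan" when the goal is unreachable, which is exactly the mismatch with SAM discussed above:

```python
# Illustrative sketch of the K-leaves criterion (our own formulation;
# function and field names are invented, not Mediratta & Srivastava's).
def goal_leaves(tree, goal):
    """Count leaves of an action tree whose state satisfies the goal."""
    if "children" not in tree:                    # leaf node
        return 1 if goal(tree["state"]) else 0
    return sum(goal_leaves(c, goal) for c in tree["children"])

def is_k_plan(tree, goal, k, best_possible):
    """Case (1): at least K goal leaves; case (2): otherwise the count
    must be maximal over all trees (passed in here as best_possible)."""
    n = goal_leaves(tree, goal)
    return n >= k or n == best_possible

# Toy tree: one non-deterministic action with two outcomes.
tree = {"state": "s0", "children": [{"state": "g"}, {"state": "dead"}]}
goal = lambda s: s == "g"
print(goal_leaves(tree, goal))                       # 1
print(is_k_plan(tree, goal, k=2, best_possible=1))   # True, via case (2)
```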
Summing up, related notions of weak plans exist, but none captures exactly what we want in SAM. On the algorithmic side, all the works listed here use symbolic search (based on BDDs), and are thus quite different from our explicit-state SAM-AO* search. The single exception is the planner described by Mediratta and Srivastava (2006), which is based on a variant of A*, but for a different plan semantics as described.
Research has been performed also into alternative methods, not based on planning, for automatically generating processes. For example, Küster, Ryndina, and Gall (2007) describe a method computing the synchronized product of the life-cycles of a set of business objects, and generating a process corresponding to that product. That process serves as the basis for customer-specific modifications. Clearly, that motivation relates to ours; but the intended meaning of the output (the generated process), and the input assumed for its generation, are quite different. As for the input, Küster et al.’s life-cycles are state machines describing all possible behaviors of the object, which in our formulation here corresponds to the space of all reachable states. That space can be generated based on SAM, but it can be huge even for single BOs, not to mention their product (cf. our results for blind search and scaling across BOs, Sections 6.4 and 6.5). Heuristic search gets us around the need to enumerate all these states. As for the output, Küster et al.’s generated processes guarantee not only to comply with the BO behaviors (which ours do as well), but also to cover them, essentially representing all that could be done. This is very different from the plans we generate, whose intention is to show specifically how to move between particular start and end states. Altogether, the methods are complementary. Planning has computational advantages if the involved objects have many possible states, as is often the case in SAM.
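The product construction can be illustrated at toy scale. The following is a minimal sketch under assumptions of our own (deterministic life-cycles as labelled transition systems, synchronization on shared labels; Küster et al. work with richer life-cycle models). Even here the product's state space is the cross product of the individual ones, which is why it blows up quickly and why heuristic search over SAM avoids enumerating it.

```python
# Sketch of the synchronized-product idea (our own minimal formulation,
# not Küster et al.'s algorithm): life-cycles as labelled transition
# systems {state: {label: successor}}; shared labels fire jointly,
# all other labels interleave.
def sync_product(lc1, lc2, init1, init2):
    """Return the reachable states and transitions of the product."""
    shared = ({l for t in lc1.values() for l in t}
              & {l for t in lc2.values() for l in t})
    states, frontier, trans = set(), [(init1, init2)], {}
    while frontier:
        s = frontier.pop()
        if s in states:
            continue
        states.add(s)
        s1, s2 = s
        for label, t1 in lc1.get(s1, {}).items():
            if label in shared:
                if label not in lc2.get(s2, {}):
                    continue                      # joint step blocked
                step = (t1, lc2[s2][label])       # synchronize
            else:
                step = (t1, s2)                   # interleave lc1 alone
            trans.setdefault(s, {})[label] = step
            frontier.append(step)
        for label, t2 in lc2.get(s2, {}).items():
            if label not in shared:               # interleave lc2 alone
                trans.setdefault(s, {})[label] = (s1, t2)
                frontier.append((s1, t2))
    return states, trans

# Two tiny life-cycles sharing the label "approve".
order = {"new": {"approve": "approved"}, "approved": {}}
invoice = {"draft": {"approve": "paid", "edit": "draft"}, "paid": {}}
states, trans = sync_product(order, invoice, "new", "draft")
print(sorted(states))   # [('approved', 'paid'), ('new', 'draft')]
```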
9. Conclusion
We have pointed out that SAP has built a large-scale model of software behavior, SAM, whose abstraction level and formalization are intimately related to planning models in languages such as PDDL. We have shown how to base a promising BPM application of planning on this fact. Getting the planner input for free, we avoid one of the most important obstacles for making this kind of planning application successful in practice. Our solution is specific to our particular context in its treatment of non-deterministic actions and failed outcomes,
but such phenomena are quite common in both planning and web service composition, and our novel approach to dealing with them might turn out to be relevant more generally.
The main open issue is to obtain concrete data evaluating the business value of our application. Some other points are:
- Our modification of FF successfully handles many SAM instances, finding plans within runtimes small enough to apply realistic online-setting cut-offs. About 15% of the instances we encountered still present challenges. These instances could serve as an interesting benchmark for approaches dealing with failed outcomes in ways related to what we do here (cf. Section 8).
- The current SAM model does not reflect dependencies across BOs. Such dependencies do, however, exist in various forms. For example, some BOs form part of the data contained in another kind of BO, some actions on one kind of BO create a new instance of another kind of BO, and some actions must be taken by several BOs together. There is an ongoing activity at SAP Research, aiming at enriching SAM to reflect some of these interactions, with the purpose of more informed model checking. All the interactions can easily be modeled in terms of well-known planning constructs (object creation, and preconditions/effects spanning variables from several BOs), so we expect that this extended model will enable us to generate more accurate plans. As the results from Section 6.5 indicate, additional planning techniques may be required to improve performance in case the number of interacting BOs becomes large. But for smaller numbers (up to around a dozen BOs) the performance of our current tool should still be reasonable.
- SAM currently provides no basis for automatically creating a number of additional process aspects. An important aspect is exception handling, for which at the moment we can only highlight the places (failed nodes) where it needs to be inserted. Another issue is data-flow. This is mostly easy since the application data is already pre-packaged in the relevant BOs, but there are some cases, like security tokens, not covered by this. An open line of research is to determine how these aspects could be modeled, in a way that can be exploited by corresponding planning algorithms.
- It may also be interesting to look into methods presenting the user with a set of alternative processes. For discerning between relevant alternatives, such methods require extensions to SAM, like action duration, action cost, or plan preferences.
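The cross-BO dependencies listed above map naturally onto standard PDDL constructs. A hypothetical fragment of our own (all predicate, type, and action names are invented for illustration, not taken from SAM) showing a precondition and effect spanning two BOs:

```
;; Hypothetical sketch: an action touching two business objects.
;; PDDL has no native object creation, so "creating" an Invoice is
;; typically compiled as activating an object from a pre-allocated pool.
(:action create-invoice
  :parameters (?so - SalesOrder ?inv - Invoice)
  :precondition (and (released ?so)      ; state of the SalesOrder BO
                     (unused ?inv))      ; pool object not yet active
  :effect (and (created ?inv)
               (not (unused ?inv))
               (invoice-of ?inv ?so)))   ; link the two BOs
```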
From a more general perspective, the key contribution of our work is demonstrating the potential synergy between model-based software engineering and planning-based process generation. Re-using some (or even all) of the required models, the human labor required to realize the planning is dramatically reduced. SAM’s methodology – business-level descriptions of individual activities within a software architecture – is not specific to SAP. Thus, exploiting this synergy is a novel approach that may turn out fruitful far beyond the particular application described herein.
**Acknowledgments**
We thank the anonymous JAIR reviewers, whose comments helped a lot in improving the paper.
Most of this work was performed while all authors were employed by SAP. Part of this work was performed while Jörg Hoffmann was employed by INRIA (Nancy, France), and while Ingo Weber was employed by The University of New South Wales (Sydney, Australia).
NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.
References
Aalst, W. (1997). Verification of Workflow Nets. In *Application and Theory of Petri Nets 1997*.
Agarwal, V., Chafle, G., Dasgupta, K., Karnik, N., Kumar, A., Mittal, S., & Srivastava, B. (2005). Synthy: A system for end to end composition of web services. *Journal of Web Semantics, 3*(4).
Aler, R., Borrajo, D., Camacho, D., & Sierra-Alonso, A. (2002). A knowledge-based approach for business process reengineering: SHAMASH. *Knowledge Based Systems, 15*(8), 473–483.
Bacchus, F. (2000). *Subset of PDDL for the AIPS2000 Planning Competition*. The AIPS-00 Planning Competition Committee.
Bäckström, C., & Nebel, B. (1995). Complexity results for SAS$^+$ planning. *Computational Intelligence, 11*(4), 625–655.
Bell, M. (2008). *Service-Oriented Modeling: Service Analysis, Design, and Architecture*. Wiley & Sons.
Berardi, D., Calvanese, D., De Giacomo, G., Lenzerini, M., & Mecella, M. (2003). Automatic composition of e-services that export their behavior. In Orlowska, M. E., Weerawarana, S., Papazoglou, M. P., & Yang, J. (Eds.), *Proceedings of the 1st International Conference on Service-Oriented Computing (ICSOC’03)*, Vol. 2910 of *Lecture Notes in Computer Science*, pp. 43–58. Springer.
Berardi, D., Calvanese, D., De Giacomo, G., Lenzerini, M., & Mecella, M. (2005). Automatic service composition based on behavioral descriptions. *International Journal of Cooperative Information Systems, 14*(4), 333–376.
Bertoli, P., Pistore, M., & Traverso, P. (2006). Automated web service composition by on-the-fly belief space search. In Long, D., & Smith, S. (Eds.), *Proceedings of the 16th International Conference on Automated Planning and Scheduling (ICAPS-06)*, Ambleside, UK. AAAI.
Bertoli, P., Pistore, M., & Traverso, P. (2010). Automated composition of web services via planning in asynchronous domains. *Artificial Intelligence, 174*(3-4), 316–361.
Biundo, S., Aylett, R., Beetz, M., Borrajo, D., Cesta, A., Grant, T., McCluskey, L., Milani, A., & Verfaillie, G. (2003). PLANET Technological Roadmap on AI Planning and Scheduling. http://planet.dfki.de/service/Resources/Roadmap/Roadmap2.pdf.
Bonet, B., & Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In Chien, S., Kambhampati, S., & Knoblock, C. (Eds.), *Proceedings of the 5th International Conference on Artificial Intelligence Planning Systems (AIPS-00)*, Breckenridge, CO, USA. AAAI Press.
Bonet, B., & Geffner, H. (2001). Planning as heuristic search. *Artificial Intelligence, 129*(1–2), 5–33.
Bonet, B., & Givan, R. (2006). 5th international planning competition: Non-deterministic track – call for participation. In *Proceedings of the 5th International Planning Competition (IPC’06)*.
Born, M., Hoffmann, J., Kaczmarek, T., Kowalkiewicz, M., Markovic, I., Scicluna, J., Weber, I., & Zhou, X. (2008). Semantic annotation and composition of business processes with Maestro. In *Demonstrations at ESWC’08: 5th European Semantic Web Conference*, pp. 772–776, Tenerife, Spain.
Born, M., Hoffmann, J., Kaczmarek, T., Kowalkiewicz, M., Markovic, I., Scicluna, J., Weber, I., & Zhou, X. (2009). Supporting execution-level business process modeling with semantic technologies. In *Demonstrations at DASFAA’09: Database Systems for Advanced Applications*, pp. 759–763, Brisbane, Australia.
Bryce, D., & Buffet, O. (2008). 6th international planning competition: Uncertainty part. In *Proceedings of the 6th International Planning Competition (IPC’08)*.
Bryce, D., & Kambhampati, S. (2004). Heuristic guidance measures for conformant planning. In Koenig, S., Zilberstein, S., & Koehler, J. (Eds.), *Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS-04)*, pp. 365–374, Whistler, Canada. AAAI.
Bryce, D., Kambhampati, S., & Smith, D. E. (2006). Planning graph heuristics for belief space search. *Journal of Artificial Intelligence Research, 26*, 35–99.
Bylander, T. (1994). The computational complexity of propositional STRIPS planning. *Artificial Intelligence, 69*(1–2), 165–204.
Calvanese, D., De Giacomo, G., Lenzerini, M., Mecella, M., & Patrizi, F. (2008). Automatic service composition and synthesis: the roman model. *IEEE Data Engineering Bulletin, 31*(3), 18–22.
Cimatti, A., Giunchiglia, F., Giunchiglia, E., & Traverso, P. (1997). Planning via model checking: A decision procedure for AR. In Steel, S., & Alami, R. (Eds.), *Recent Advances in AI Planning. 4th European Conference on Planning (ECP’97)*, Vol. 1348 of *Lecture Notes in Artificial Intelligence*, pp. 130–142, Toulouse, France. Springer-Verlag.
Cimatti, A., Pistore, M., Roveri, M., & Traverso, P. (2003). Weak, strong, and strong cyclic planning via symbolic model checking. *Artificial Intelligence, 147*(1-2), 35–84.
Cimatti, A., Roveri, M., & Traverso, P. (1998a). Automatic OBDD-based generation of universal plans in non-deterministic domains. In Mostow, J., & Rich, C. (Eds.), *Proceedings of the 15th National Conference of the American Association for Artificial Intelligence (AAAI-98)*, pp. 875–881, Madison, WI, USA. MIT Press.
Cimatti, A., Roveri, M., & Traverso, P. (1998b). Strong planning in non-deterministic domains via model checking. In Simmons, R., Veloso, M., & Smith, S. (Eds.), *Proceedings of the 4th International Conference on Artificial Intelligence Planning Systems (AIPS-98)*, pp. 36–43, Pittsburgh, PA. AAAI Press, Menlo Park.
Cohn, D., & Hull, R. (2009). Business artifacts: A data-centric approach to modeling business operations and processes. *IEEE Data Engineering Bulletin*, 3–9.
Constantinescu, I., Faltings, B., & Binder, W. (2004). Large scale, type-compatible service composition. In Jain, H., & Liu, L. (Eds.), *Proceedings of the 2nd International Conference on Web Services (ICWS-04)*, pp. 506–513, San Diego, California, USA. IEEE Computer Society.
Cresswell, S., McCluskey, T., & West, M. (2010). Acquiring planning domain models using LOCM. *The Knowledge Engineering Review*.
Cresswell, S., McCluskey, T. L., & West, M. M. (2009). Acquisition of object-centred domain models from planning examples. In Gerevini, A., Howe, A. E., Cesta, A., & Refanidis, I. (Eds.), *Proceedings of the 19th International Conference on Automated Planning and Scheduling (ICAPS-09)*, Sydney, Australia. AAAI.
Dal Lago, U., Pistore, M., & Traverso, P. (2002). Planning with a language for extended goals. In Dechter, R., Kearns, M., & Sutton, R. (Eds.), *Proceedings of the 18th National Conference of the American Association for Artificial Intelligence (AAAI-02)*, pp. 447–454, Edmonton, AL, USA. MIT Press.
De Giacomo, G., & Sardiña, S. (2007). Automatic synthesis of new behaviors from a library of available behaviors. In Veloso, M. (Ed.), *Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07)*, pp. 1866–1871, Hyderabad, India. Morgan Kaufmann.
Dumas, M., ter Hofstede, A., & van der Aalst, W. (Eds.). (2005). *Process Aware Information Systems: Bridging People and Software Through Process Technology*. Wiley Publishing.
Fox, M., & Long, D. (2003). PDDL2.1: An extension to PDDL for expressing temporal planning domains. *Journal of Artificial Intelligence Research*, 20, 61–124.
Gazen, B. C., & Knoblock, C. (1997). Combining the expressiveness of UCPOP with the efficiency of Graphplan. In Steel, S., & Alami, R. (Eds.), *Recent Advances in AI Planning. 4th European Conference on Planning (ECP'97)*, Vol. 1348 of *Lecture Notes in Artificial Intelligence*, pp. 221–233, Toulouse, France. Springer-Verlag.
Gerevini, A., Haslum, P., Long, D., Saetti, A., & Dimopoulos, Y. (2009). Deterministic planning in the fifth international planning competition: PDDL3 and experimental evaluation of the planners. *Artificial Intelligence*, 173(5-6), 619–668.
Gonzalez-Ferrer, A., Fernandez-Olivares, J., & Castillo, L. (2009). JABBAH: a Java application framework for the translation between business process models and HTN. In *Proceedings of the 3rd International Competition on Knowledge Engineering for Planning and Scheduling*, Thessaloniki, Greece.
Helmert, M. (2006). The Fast Downward planning system. *Journal of Artificial Intelligence Research*, 26, 191–246.
Helmert, M. (2009). Concise finite-domain representations for PDDL planning tasks. *Artificial Intelligence*, 173(5-6), 503–535.
Hoffmann, J., & Brafman, R. (2005). Contingent planning via heuristic forward search with implicit belief states. In Biundo, S., Myers, K., & Rajan, K. (Eds.), *Proceedings of the 15th International Conference on Automated Planning and Scheduling (ICAPS-05)*, pp. 71–80, Monterey, CA, USA. AAAI.
Hoffmann, J., & Edelkamp, S. (2005). The deterministic part of IPC-4: An overview. *Journal of Artificial Intelligence Research*, 24, 519–579.
Hoffmann, J., & Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. *Journal of Artificial Intelligence Research*, 14, 253–302.
Jarvis, P., Moore, J., Stader, J., Macintosh, A., & Chung, P. (1999). Exploiting AI technologies to realise adaptive workflow systems. In *Proceedings of the AAAI'99 Workshop on Agent-Based Systems in the Business Context*.
Kambhampati, S. (2007). Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain models. In Howe, A., & Holte, R. C. (Eds.), *Proceedings of the 22nd National Conference of the American Association for Artificial Intelligence (AAAI-07)*, Vancouver, BC, Canada. MIT Press.
Kitchin, D. E., McCluskey, T. L., & West, M. M. (2005). B vs OCL: Comparing specification languages for planning domains. In *Proceedings of the ICAPS'05 Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems*.
Krafzig, D., Banke, K., & Slama, D. (2005). *Enterprise SOA: Service-Oriented Architecture Best Practices*. Prentice Hall.
Küster, J. M., Ryndina, K., & Gall, H. (2007). Generation of business process models for object life cycle compliance. In Alonso, G., Dadam, P., & Rosemann, M. (Eds.), *Proceedings of the 5th International Conference on Business Process Management (BPM'07)*, Vol. 4714 of *Lecture Notes in Computer Science*, pp. 165–181. Springer.
Liu, Z., Ranganathan, A., & Riabov, A. (2007). A planning approach for message-oriented semantic web service composition. In Howe, A., & Holte, R. C. (Eds.), *Proceedings of the 22nd National Conference of the American Association for Artificial Intelligence (AAAI-07)*, Vancouver, BC, Canada. MIT Press.
May, N., & Weber, I. (2008). Information gathering for semantic service discovery and composition in business process modeling. In *CIAO!’08: Workshop on Cooperation & Interoperability - Architecture & Ontology at CAiSE’08*, Vol. LNBIP 10, pp. 46–60, Montpellier, France.
McDermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., & Wilkins, D. (1998). PDDL – the planning domain definition language. Tech. rep. CVC TR-98-003, Yale Center for Computational Vision and Control.
McDermott, D. V. (1999). Using regression-match graphs to control search in planning. *Artificial Intelligence*, 109(1-2), 111–159.
Mediratta, A., & Srivastava, B. (2006). Applying planning in composition of web services with a user-driven contingent planner. IBM Research Report RI 06002.
Meyer, H., & Weske, M. (2006). Automated service composition using heuristic search. In Dustdar, S., Fiadeiro, J. L., & Sheth, A. P. (Eds.), *Proceedings of the 4th International...
Narayanan, S., & McIlraith, S. (2002). Simulation, verification and automated composition of web services. In Iyengar, A., & Roure, D. D. (Eds.), *Proceedings of the 11th International World Wide Web Conference (WWW-02)*, Honolulu, Hawaii, USA. ACM.
Nilsson, N. J. (1969). Searching problem-solving and game-playing trees for minimal cost solutions. In *Information Processing 68 Vol. 2*, pp. 1556–1562, Amsterdam, Netherlands.
Nilsson, N. J. (1971). *Problem Solving Methods in Artificial Intelligence*. McGraw-Hill.
Object Management Group (2006). Object Constraint Language Specification, Version 2. http://www.omg.org/technology/documents/formal/ocl.htm.
Object Management Group (2008). Business Process Modeling Notation, V1.1. http://www.bpmn.org/.
Palacios, H., & Geffner, H. (2009). Compiling uncertainty away in conformant planning problems with bounded width. *Journal of Artificial Intelligence Research*, 35, 623–675.
Pearl, J. (1984). *Heuristics*. Morgan Kaufmann.
Pednault, E. P. (1989). ADL: Exploring the middle ground between STRIPS and the situation calculus. In Brachman, R., Levesque, H. J., & Reiter, R. (Eds.), *Principles of Knowledge Representation and Reasoning: Proceedings of the 1st International Conference (KR-89)*, pp. 324–331, Toronto, ON. Morgan Kaufmann.
Pesic, M., Schonenberg, M. H., Sidorova, N., & van der Aalst, W. M. P. (2007). Constraint-based workflow models: Change made easy. In Meersman, R., & Tari, Z. (Eds.), *OTM Conferences (1)*, Vol. 4803 of *Lecture Notes in Computer Science*, pp. 77–94. Springer.
Pistore, M., Marconi, A., Bertoli, P., & Traverso, P. (2005). Automated composition of web services by planning at the knowledge level. In Kaelbling, L. (Ed.), *Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI-05)*, Edinburgh, Scotland. Morgan Kaufmann.
Pistore, M., & Traverso, P. (2001). Planning as model checking for extended goals in non-deterministic domains. In Nebel, B. (Ed.), *Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI-01)*, pp. 479–486, Seattle, Washington, USA. Morgan Kaufmann.
Ponnekanti, S., & Fox, A. (2002). SWORD: A developer toolkit for web services composition. In Iyengar, A., & Roure, D. D. (Eds.), *Proceedings of the 11th International World Wide Web Conference (WWW-02)*, Honolulu, Hawaii, USA. ACM.
Richter, S., & Helmert, M. (2009). Preferred operators and deferred evaluation in satisficing planning. In Gerevini, A., Howe, A. E., Cesta, A., & Refanidis, I. (Eds.), *Proceedings of the 19th International Conference on Automated Planning and Scheduling (ICAPS-09)*, Sydney, Australia. AAAI.
Rodriguez-Moreno, M. D., Borrajo, D., Cesta, A., & Oddi, A. (2007). Integrating planning and scheduling in workflow domains. *Expert Systems with Applications*, 33(2), 389–406.
SAP (2010). SAP NetWeaver. http://www.sap.com/platform/netweaver/index.epx.
Sardiña, S., Patrizi, F., & De Giacomo, G. (2008). Behavior composition in the presence of failure. In Brewka, G., & Lang, J. (Eds.), *Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08)*, pp. 640–650. AAAI Press.
Schneider, S. (2001). *The B-Method: An Introduction*. Palgrave.
Shaparau, D., Pistore, M., & Traverso, P. (2006). Contingent planning with goal preferences. In Gil, Y., & Mooney, R. J. (Eds.), *Proceedings of the 21st National Conference of the American Association for Artificial Intelligence (AAAI-06)*, Boston, Massachusetts, USA. MIT Press.
Sirin, E., Parsia, B., Wu, D., Hendler, J., & Nau, D. (2004). HTN planning for web service composition using SHOP2. *Journal of Web Semantics*, 1(4).
Sirin, E., Parsia, B., & Hendler, J. (2006). Template-based composition of semantic web services. In *AAAI Fall Symposium on Agents and Search*.
Smith, D. E., & Weld, D. S. (1999). Temporal planning with mutual exclusion reasoning. In Dean, T. (Ed.), *Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99)*, pp. 326–337, Stockholm, Sweden. Morgan Kaufmann.
Srivastava, B. (2002). Automatic web services composition using planning. In *Knowledge Based Computer Systems (KBCS-02)*, pp. 467–477.
Traverso, P., Ghallab, M., & Nau, D. (Eds.). (2005). *Automated Planning: Theory and Practice*. Morgan Kaufmann.
Turner, J., & McCluskey, T. L. (1994). *The Construction of Formal Specifications: an Introduction to the Model-Based and Algebraic Approaches*. McGraw Hill Software Engineering series.
van der Aalst, W. (2003). Business process management demystified: A tutorial on models, systems and standards for workflow management. In *Lectures on Concurrency and Petri Nets in ACPN’04: Advanced Courses in Petri Nets*, pp. 1–65.
van der Aalst, W. M. P., & Pesic, M. (2006). Decserflow: Towards a truly declarative service flow language. In Bravetti, M., Núñez, M., & Zavattaro, G. (Eds.), *WS-FM*, Vol. 4184 of *Lecture Notes in Computer Science*, pp. 1–23. Springer.
Wainer, J., & de Lima Bezerra, F. (2003). *Groupware: Design, Implementation, and Use*, Vol. 2806 of *LNCS*, chap. Constraint-based flexible workflows, pp. 151–158. Springer-Verlag.
Weld, D. S., Anderson, C. R., & Smith, D. E. (1998). Extending graphplan to handle uncertainty & sensing actions. In Mostow, J., & Rich, C. (Eds.), *Proceedings of the 15th National Conference of the American Association for Artificial Intelligence (AAAI-98)*, pp. 897–904, Madison, WI, USA. MIT Press.
Weske, M. (2007). *Business Process Management: Concepts, Languages, Architectures*. Springer-Verlag.
Yoon, S. W., Fern, A., & Givan, R. (2007). FF-Replan: A baseline for probabilistic planning. In Boddy, M., Fox, M., & Thiebaux, S. (Eds.), *Proceedings of the 17th International Conference on Automated Planning and Scheduling (ICAPS-07)*, Providence, Rhode Island, USA. AAAI.
Younes, H., Littman, M., Weissman, D., & Asmuth, J. (2005). The first probabilistic track of the international planning competition. *Journal of Artificial Intelligence Research*, 24, 851–887.
Dear Parents and Caregivers
SCHOOL MANTRA 2019
“Find happiness in making others happy” – Mary MacKillop
Dr Debra Sayce, Executive Director of CEWA, has requested that I take on the role of School Improvement Advisor (SIA) while the current SIA continues her rehabilitation following a stroke. I will take up this part-time role for Term Two on Wednesdays and Thursdays only, and Mrs Julie Southwell will be Acting Principal on these two days. The SIA role is the link between the Catholic Education Office and schools, providing support to Principals and their Leadership Teams. As the role is only part-time, I will be working with 25 other schools in a condensed SIA capacity. My role as Principal and life at Mater Christi continues as usual, and I could not have accepted this position without the support of Mrs Nicole Woodhouse, Mrs Julie Southwell and Mr Mark Clayden.
Year Five Assembly
Congratulations to our Year Five students and their teachers for a very entertaining Assembly. The students worked together to choreograph their dance moves and other aspects of the Assembly, which made it even more impressive: it was not solely teacher-led, but a creative collaboration between students and teachers that produced a fabulous show.
Dates for Next Week (Wk 9)
| Date | Event |
|---|---|
| Monday 1st April | Food Allergy Incursion (MCCC): Session 1, 9:00-9:30am (PP & Yr 1); Session 2, 9:45-10:15am (Yr 2 & 3) |
| Tuesday 2nd April | School Banking |
| Wednesday 3rd April | Good Cup Café Open 8:30-10:00am at MCCC; Uniform Shop Open 8:00-10:30am and 1:20-3:20pm |
| Thursday 4th April | KB/KD Easter Raffle; Free Dress Day (Gold Coin Donation); P&F Brainstorming Meeting 8:45-9:20am (Staff Room) |
| Friday 5th April | 9:45am Easter Reflection; 6:00pm Easter Reflection (Church); Easter Raffle KA/KC-Year 6 |
| Saturday 6th April | |
| Sunday 7th April | 5th Sunday of Lent |
Mater Christi Catholic Primary School
340 Yangebup Road
Yangebup WA 6164
PO Box 3077 Success WA 6964
Telephone: (08) 9417 5756
Facsimile: (08) 9417 9092
Email: firstname.lastname@example.org
Absentees Email: email@example.com
Website: www.mcpss.wa.edu.au
Holy Week Reflection
Next Friday 5 April at 9.45am in the Church we will have our Holy Week Reflection led by Year 6B. I invite all parents and family members to attend. If your family is looking to do something special for prayer during Lent, then also next Friday at 6pm in the Church the Year 6C students will hold the Holy Week Reflection for the Mater Christi Parishioners. Everyone is most welcome and as we are in the second half of Lent the reflection is a good way to contemplate the last few days in Jesus’ life.
Your Favourite Lenten Memories
Whether we have decided to do something new this Lent, or have continued our own traditions, our Lenten practices transform us and imprint on us memories that carry deep meaning. Let these stories invite you to consider your life and discover your own powerful and meaningful Lenten Moments.
The Power of Symbolic Actions
My favourite memory and the one that still resounds today, happened when I was in 6th grade during the Holy Thursday liturgy. Just watching the Altar as it was stripped bare and realising that Jesus really was dead had a profound effect on me.
Emily, J.
Next week Mr Jenkins, our Groundsman, will be having surgery on his hip. We wish him a very speedy recovery and look forward to seeing a spring in his step when he returns next term. All the best Anthony!
God Bless,
Toni Kalat
School News
Condolences
We extend our deepest condolences to the Treasure family (6B) on the loss of their Grandfather during the week. We pray for peace and comfort over your family at this very sad time.
Congratulations
Congratulations to Mr & Mrs Jarrett (Isabelle, PPB) on the birth of their daughter, Maddison. We hope you are enjoying lots of cuddles and pray you are all healthy, well and receiving lots of sleep.
Religious Education
The expert in the law replied, “The one who had mercy on him.”
Jesus told him, “Go and do likewise.” Luke 10:37
Sacrament of Confirmation
Students preparing to receive the Sacrament of Confirmation are required to attend either Saturday, 30th March, 6:00pm Mass or Sunday 31st March, 10:00am Mass. During this Mass, the students will pledge their commitment to undertaking the Sacrament. Students will need to collect their commitment brochure from the foyer of the church BEFORE they go into Mass. Each child has a named brochure.
Project Compassion
This week, the students were introduced to Peter. Peter is thrilled to now have clean water on tap at his boarding school. Long walks to unsafe water sources were tiring for Peter, who is living with a disability. With more free time and fewer illnesses caused by dirty water, Peter can fulfil his hope of focusing on his studies, providing him with a brighter future.
Currently, the School has raised $647 for Project Compassion. Next Thursday, 4 April, is a free dress day for the students. To wear free dress, each student is asked to bring a gold coin donation. All money collected will be donated to Project Compassion.
T.I.T.U.S ~ Testament In Teachers Using Script
The T.I.T.U.S project is providing our early childhood educators with the opportunity to develop and nurture their personal, spiritual and faith formation through the experience and exploration of God’s Word. The project’s aim is to build the confidence of our early childhood educators in their understanding of scripture stories and their application in the classroom. This week our educators participated in a range of play-based scripture strategies that culminated in their own response to the parable of the Good Samaritan. The project is facilitated by the Catholic Institute of Western Australia, supported by the University of Notre Dame Australia.
God Bless
Julie Southwell
School Banking
From the 3rd of April the school banking tokens will change slightly. When a student has reached 10 silver tokens, they will be given a reward slip to choose a prize, and a gold token as a keepsake achievement. Parents, please note there is a CommBank Youth App where you can track your child’s balance and Dollarmites tokens.
This is an update from the Commonwealth Bank.
Thank you,
Mel Babich (School Banking Coordinator)
Physical Education
Interschool Swimming Team
All the swimmers selected for the Mater Christi Interschool Swimming Team are required to come to training which will be held at the Cockburn ARC (off Poletti Road) next week.
TRAINING WILL BE HELD BEFORE SCHOOL ON MONDAY (1/4), WEDNESDAY (3/4) AND FRIDAY (5/4) FROM 7:15AM – 8:00AM.
See you all there!
Mr Donnelly, Physical Education
Football Donation
A big thank you goes out to Mrs Jensen’s Dad who donated some ripper Footballs to the school. The footys will be enjoyed by everyone at Mater Christi. Hopefully it will help the Eagles supporters with their skills (they need it) and no doubt keep going through for goals when kicked by a Dockers supporter. Pictured are the Di Silvio boys enjoying the fruits of generosity.
Sustainability Corner – Worm Juice for Sale!
Worm leachate, or “worm juice”, is a highly nutritious and organic food for plants. Thanks to the school worm farm, worm juice is now for sale from the Science classroom before school. The cost is $1 per litre, with most bottles being 2 or 3 litres in size. It is an excellent fertiliser for pot plants and vegetable gardens!
Allergy Awareness Incursion
On Monday, 1st April, we will be having a school incursion to promote allergy awareness throughout the school. Parents, please feel free to join us for this informative event.
Edu Dance Costume Information for Pre-Primary to Year 6
**PP - “Jitterbug”**
Plain bright coloured t-shirt with horizontal stripes across the front and school shorts. (Stripes made from black duct tape) **NO headbands or accessories**
**Year 1 - “Let’s Get Loud”**
Plain red t-shirt and school shorts. Black bandana tied around either head or neck
**Year 2 - “Play That Sax”**
Plain blue t-shirt with school shorts. Wooden spoon
**Year 3 - “Hammer and The Frog”**
Plain green t-shirt with school shorts. Cap worn backwards (any colour)
**Year 4 - “Do Your Thing”**
“Teacher” clothes … shirts, ties etc. If girls choose a skirt or dress, they MUST wear leggings underneath
**Year 5 - “Funk Soul Brother”**
Plain white t-shirt with dark shorts. Tie a flannel shirt around your waist (any colour)
**Year 6 - “Hair Up”**
Plain black t-shirt and long black pants.
Horizontal strip of yellow cloth/duct tape across chest, and a vertical stripe down left thigh.
Tape available at Bunnings
*All students must wear sneakers. Please no sandals or thongs – THANK YOU*
---
**Uniform Shop**
Upon return to school in Term 2, Monday 29th April, your child will be required to wear their full winter uniform.
For those in Pre-Primary, Year 1 and any students new to the school, I have scheduled some dates for winter uniform fittings.
- Tuesday 2nd April 8:00am-1:00pm
- Wednesday 3rd April 9:30-12:00pm and 1:30pm-2:30pm
- Tuesday 9th April 8:00am-1:00pm
- Wednesday 10th April 9:30am-12:00pm and 1:30pm-2:30pm.
http://materchristicps.permapleat.com.au/schoolbookings
Please use the link above to book your child’s fitting. If you can’t make your appointment time, please delete your booking to create space for another student as bookings are limited.
Thank you, Simone Douglas (Perm-A-Pleat)
Lil' Peeps is a highly specialised Occupational therapy service that provides assessment, support, advice and intervention in a range of areas. We liaise closely with schools and teachers to ensure the best outcome for your child and family.
Lil Peeps provides services in a variety of ways, including assessment and consultation, direct individual therapy with an OT, group-based consultation and parent and teacher workshops. You may be able to claim a rebate for the OT services from private health insurance and other agencies.
Contact Lil' Peeps for pricing details and advice on whether Occupational Therapy may benefit your child.
**Fine motor skills**
The way we use our hands and fingers helps us to be efficient in everyday tasks including dressing, eating, unwrapping a sandwich and writing our name.
**Sensory Processing**
The way we take in sensory information from our environment can have a significant impact on our attention, emotion and behaviours.
**Technology**
Some children may need support in producing work in the classroom, and technology can be a valuable tool when the physical act of handwriting is proving to be a significant barrier.
**School Readiness**
Starting school is always a little confronting – for both the child and the parents! When children are physically ready for the demands of the classroom, it can make the learning journey a great deal smoother!
**Handwriting**
The bread and butter of OT in the school... being able to produce work through writing in an efficient and effective way is crucial to all areas of performance in the classroom.
**Formal Assessment**
Assessment of motor coordination and visual perception skills can be beneficial in determining specific areas of difficulty and provide insight into strengths and barriers to your child's participation at school.
**Gross motor skills**
Being physically active is a lifelong skill that we all need. When children have challenges with their motor skills they are less likely to be physically active, and this can have an impact on both their physical health and emotional wellbeing.
Good Cup Café
The Good Cup Café is back and it is open Wednesday 3rd April 8:30-10am at MCCC.
Start your day on a good vibe with a morning catch up with friends or a chance to meet new people after you have dropped the kids in class. Toys and a play area are available for non-school age children.
Rebecca Rowland, Toni and Father Joe will be popping in for a chat.
P&F Brainstorming Meeting
We will be hosting a brainstorming meeting to look at ways we can use P&F Funds.
It will be held in the staffroom from 8:45am-9:25am on 5th April, prior to the whole school Easter Reflection.
Easter Colouring-In Competition and Raffle
This is proudly sponsored by Leah and Mark Rheinberger of Inspirations Paint (Melville & Canning Vale) and Mater Christi P&F.
Entry forms have been distributed to all children and will be judged across 4 age categories (Kindy/Pre-Primary, Year 1/2, Year 3/4 and Year 5/6). A prize will be awarded to the best entry in each year level from Kindy to Yr6. Completed entries must be returned to School Admin by Tuesday 2nd April. Each child will also be entered in a random draw to win 1 of 2 additional prizes given out to each classroom. Prizes will be distributed and the list of winners published here on Friday 5th April.
P&F Volunteer Needed:
The Food Coordinator position is currently vacant. This is a great opportunity to be involved in both P&F and school activities, which involves emailing parents to seek donations of food for masses, morning teas and functions, and assisting at these events as your time permits. Please email the P&F for a more detailed description.
A volunteer or two is required to coordinate the Father’s Day Breakfast on Friday 31st August. If you are interested in doing this, please contact the P&F.
Contact the P&F by email at firstname.lastname@example.org
Entertainment Book Pre-Order
2019-2020 Entertainment Book available for pre-order. Bonus Early Bird Offers end soon, pre-order now! Both digital and book versions are available.
https://www.entertainmentbook.com.au/orderbooks/8q6017
MOTHER’S DAY STALL
FRIDAY 10TH MAY
All classes get a chance to visit and purchase a gift for mum
HELPERS REQUIRED FOR THE DAY AND WOULD LOVE IT IF DADS WHO ARE AVAILABLE COULD VOLUNTEER TOO. PLEASE CONTACT CHIARA ON 0419995822 OR email@example.com
Parent Class Representative Social Events
Kindy B Catch Up
Where: Chipmunks, Cockburn
When: 5th April at 9am
RSVP: to Kylie Galipo on 0410 329 853 by Thursday, 4th April
Pre-Primary A Mums Get Together
Where: The Berrigan Bar & Bistro
When: Saturday, 6th April at 2pm
RSVP: Sarah Lentz on 0405 520 431 by Thursday, 4th April
Pre-Primary B Play and Picnic
Where: Honeywood Playing Fields (Opposite Honeywood Primary School)
When: Saturday 30th March at 10:30am
RSVP: For further information, contact Michelle Pozzi on 0405 202 804
Pre-Primary A, B & C School Holiday Catch Up
Where: Manning Park
When: Tuesday, 16th April at 9:30am
RSVP: No need to RSVP. For further information, please contact your Class Representative.
Year 1B Picnic in the Park with the Family
Where: Bibra Lake Regional Playground
When: Saturday 30th March at 12pm
RSVP: Sarah Morris on 0405 343 611
Year 3C Parents Catch-up Drinks and Dinner
Where: The Quarie Bar and Brasserie, 2 Macquarie Blvd, Hammond Park.
When: 4th April at 7pm
RSVP: Mel Babich on 0402 233 708.
Community News
Calling all young artists
The Shaun Tan Award for Young Artists is a visual art award encouraging imagination, innovation and creativity in young people.
The award is open to Western Australian school students in year one to year twelve.
Entries should be:
- an original piece
- no bigger than 1m x 1.5m in size
- two-dimensional (2D)
Entries open at 9.30am on Monday 29 April.
Enter online and submit your artwork to Subiaco Library before 5.30pm, Monday 20 May.
For full details visit www.subiaco.wa.gov.au/shauntanaward
This award is made possible by the generous support of our award patron Shaun Tan and our award sponsors.
Janet Holmes à Court AC · Prime Art · New Edition · Kook Kreative · Dymocks The Bookshop
WAABINY OSHC AUTUMN VACATION CARE 2019
BOOK NOW!
Our vacation care program caters for all children from ages 4 – 12 years. It is fun, exciting, and provides the perfect environment for play based learning during the school holidays. The program is available to all families in the surrounding areas, and bookings are essential. To find out more information, or to make a booking, please contact Jason on the details below:
Email: firstname.lastname@example.org
Ph: 08 9417 1800
Montessori Stepping Stones & Waabiny OSHC are proudly supported by our community partners
THURSDAY 11TH APR
Learn to make a wobbling papier mache Easter bunny
FRIDAY 12TH APR
Storytime - Gumnut babies "Snuggle Pot & Cuddlepie"
MONDAY 15TH APR
Make a creative & fun Easter bonnet
TUESDAY 16TH APR
Capture a falling leaf & make a beautiful autumn leaf art banner
WEDNESDAY 17TH APR
EXCURSION - Bike ride around the amazing Yangebup lake
THURSDAY 18TH APR
INCURSION - Child's Play Music. Explore musical instruments through cooperative play activities
FRIDAY 19TH APR
CLOSED - GOOD FRIDAY PUBLIC HOLIDAY
MONDAY 22ND APR
CLOSED - EASTER MONDAY PUBLIC HOLIDAY
TUESDAY 23RD APR
INCURSION - Active gymnastics. Learn basic gymnastic skills & teamwork
WEDNESDAY 24TH APR
Make felt poppies for Remembrance Day
www.mss.edu.au
Triple P Positive Parenting Program
You are invited to attend a Group Triple P. Triple P teaches positive, practical and effective ways to manage common issues which most parents will face.
Parents will learn effective parenting strategies such as ways to encourage behaviour you like, how to promote your child’s development and how to prevent or manage common child behaviour problems.
The next FREE 8-week group is held:
When: Starting Thu 9 May 2019
Location: Coolbellup Community Centre
RSVP: Bookings are essential and places are limited.
To book online
www.healthywa.wa.gov.au/parentgroups
Unable to book online?
Please call 1300 749 869
To find other available programs visit our website
www.healthywa.wa.gov.au/parentgroups
Thanks! A Strengths-Based Gratitude Curriculum for Tweens and Teens
Four lessons to help students understand the meaning of gratitude and how to cultivate it in their everyday lives.
Greater Good Science Center
ggsc.berkeley.edu
# Table of Contents
| Section | Page |
|------------------------------------------------------------------------|------|
| Introduction | 3 |
| Lesson 1: Discover Your Great Full Self | 10 |
| Lesson 2: See The Good Challenge | 17 |
| Gift of the Magi—Reading | |
| Gratitude Challenge—Activity | |
| Gratitude Journal—Activity | |
| Good Week Reflection—Activity | |
| Subtracting Good Things—Activity | |
| Lesson 3: Seeing The Good In Others | 35 |
| Go Out And Fill Buckets—Activity | |
| Lesson 4: Thank You For Believing In Me | 43 |
| Gratitude Letter | |
Over the past two decades, studies have consistently found that people who practice gratitude report fewer symptoms of illness, including depression, more optimism and happiness, stronger relationships, more generous behavior, and many other benefits.
Further, research convincingly shows that, when compared with their less grateful peers, grateful youth are happier and more satisfied with their lives, friends, family, neighborhood, and selves. They also report more hope, engagement with their hobbies, higher GPAs, and less envy, depression, and materialism.
That’s why the Greater Good Science Center launched the Youth Gratitude Project (YGP) as part of the broader Expanding the Science and Practice of Gratitude, a multiyear project funded by the John Templeton Foundation. In addition to advancing the knowledge of how to measure and develop gratitude in children, the YGP created and tested a new gratitude curriculum for middle and high schoolers.
The main idea of the YGP curriculum is that varied gratitude practices should help students feel more socially competent and connected, be more satisfied with school, have better mental health and emotional well-being, and be more motivated about school and their future. Examples include practices like journaling that genuinely build on students’ strengths and guide them toward more meaningful interactions and regular discussion with peers, teachers, and other adults.
Preliminary evidence for the effects of the gratitude curriculum indicates that it is helping to decrease depression, anxiety, and antisocial behavior and to increase hope, emotional regulation, and the search for purpose.
In describing the design of the gratitude curriculum, lead researcher Dr. Giacomo Bono writes:
Gratitude interventions for students should start by identifying and engaging students’ character strengths and interests, and they should let students appreciate the different benefits and benefactors in their lives for themselves. Let’s go beyond lists and dry journals. When people “get” us and help us through tough times, gratitude grows.
Schools participating in the YGP curriculum have shared anecdotes about students’ and parents’ enthusiasm for the gratitude lessons. Indeed, the character strength and gratitude exercises have not only been affirmational—strengthening pride in students’ achievements and building a sense of community—but, according to Dr. Bono, they have also been hijacking much of the wall space at Open Houses!
We sincerely hope that, as students begin to practice gratitude, they will begin to see the value of altruistic choices and recognize the good intentions of others, helping them to feel supported in reaching for the better.
**How To Use The Lessons**
Each lesson follows a consistent format:
**Time Required:** The time required is a suggested time based on feedback from educators who have taught the lesson. For the full benefit, lessons should be taught in their entirety, which may take one or two class periods.
**Grade Level:** The lessons were designed for both middle and high school students; however, teachers should feel free to adapt the lessons to meet the needs of their students.
**Materials:** The materials listed for each activity are deliberately simple and low-cost. An internet connection and a TV or projector will be required to show the videos. Links to PDFs of handouts and PowerPoint slides are included with the curriculum.
Learning Objectives: The learning objective describes the knowledge, skills, and/or attitudes that are developed in each activity.
Social and Emotional Learning (SEL) Competencies: Social and Emotional Learning (SEL) is the process through which children learn and apply the knowledge, attitudes, and skills necessary to:
- Understand and manage emotions
- Set and achieve positive goals
- Feel and show empathy for others
- Establish and maintain positive relationships
- Make responsible decisions
Five social-emotional competencies have been identified by the Collaborative for Academic, Social, and Emotional Learning (CASEL) as foundational. The table on the next page lists those competencies, and ways in which gratitude practices can support their development.
| SEL Competencies | How Gratitude Practices Support This Competency |
|----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|
| **Self-awareness:** The ability to accurately recognize one’s emotions and thoughts and their influence on behavior. This includes accurately assessing one’s strengths and limitations and possessing a well-grounded sense of confidence and optimism. | Students develop a deeper awareness of their thoughts and feelings when they reflect mindfully on their experience of gratitude. Choosing to express gratitude also enhances students’ confidence and optimism. |
| **Self-management:** The ability to regulate one’s emotions, thoughts, and behaviors effectively in different situations. This includes managing stress, controlling impulses, motivating oneself, and setting and working toward achieving personal and academic goals. | Choosing to respond with gratitude, when experiencing kindness from others, requires students to regulate their thoughts, feelings and actions. |
| **Social awareness:** The ability to take the perspective of and empathize with others from diverse backgrounds and cultures, to understand social and ethical norms for behavior, and to recognize family, school, and community resources and supports. | By considering the intentions and efforts of those they are grateful to, students develop social awareness. In particular, they develop the ability to take the perspective of others and to empathize with them. |
| **Relationship skills:** The ability to establish and maintain healthy and rewarding relationships with diverse individuals and groups. This includes communicating clearly, listening actively, cooperating, resisting inappropriate social pressure, negotiating conflict constructively, and seeking and offering help when needed. | When they express gratitude, students establish and maintain healthy relationships with others. Planning and carrying out acts of kindness toward others also strengthens relationship skills. |
| **Responsible decision making:** The ability to make constructive and respectful choices about personal behavior and social interactions based on consideration of ethical standards, safety concerns, social norms, the realistic evaluation of consequences of various actions, and the well-being of self and others. | In choosing to express gratitude, students practice responsible decision-making and enhance the well-being of others, themselves and the world around them. |
**Getting Ready for This Activity:** This section offers simple ways a teacher might explore the activity for themselves first before teaching it to students. Indeed, research suggests that teachers who exhibit gratitude feel more satisfied, accomplished, and have more emotional reserves. Experiencing the benefits of gratitude firsthand can enhance your work with students by helping you be more in tune with how they will engage with the activities in this guide.
**How to Do It:** The process of each activity is described in detail. This can be adapted to suit the needs of the group.
Reflection After the Activity: To deepen the experience of the activity, we suggest asking students to reflect on the impact of this activity on themselves.
Key themes in the gratitude lessons
Central to the concept of gratitude are the ideas of intention, benefit, and cost, or “benefit appraisals.” According to gratitude researchers Jeffrey Froh and Giacomo Bono:
• Acts of kindness that inspire gratitude are usually done on purpose, with intention. Someone has noticed us, thought about what we need, and chosen to do something to meet that need. Reflecting on the intentions behind these acts deepens our sense of gratitude.
• A related idea is that each act of kindness has a cost to the person who performs it. The cost may include time, effort or something that was given up, as well as any financial cost. When we understand those costs, we gain a deeper appreciation of the person who acted in a caring way.
• Finally, others’ acts of kindness benefit us personally in ways that may be material, emotional, and/or social. Noticing and acknowledging the ways we benefit from others’ actions enhances our gratitude.
Teaching gratitude in a culturally-responsive way
When teaching about gratitude in a school setting, it is important to keep in mind that the school community is made up of adults and children who differ in terms of culture, race, socioeconomic status, and religious background. This may mean that they also differ in the way they express and practice gratitude.
In some cultures and contexts, verbal expressions of gratitude are common, while in others a gesture, a reciprocal act of kindness or caring, a simple or elaborate ritual, or giving a small token or gift may be seen as more appropriate. How gratitude is expressed to another might differ depending on how familiar one is with the other person.
Gratitude may also be expressed differently to a peer, as opposed to someone with a different social status. Welcoming discussion of these and other differences in the classroom will deepen students’ understanding of gratitude.
In conversations about gratitude, it is essential to be mindful that some children may be living with significant challenges. These may include illness, family stress, the loss of a loved one, abuse, neglect, exposure to violence, discrimination, and economic hardship. Children who receive adult support (from their home, school, or community) in dealing with these challenges may have a heightened sense of gratitude for all that is in their environment that enables them to cope. On the other hand, children with fewer support systems may find it difficult to identify life events they feel grateful for.
Gratitude cannot be imposed from the outside. Suggesting that children “look on the bright side” in the face of personal struggle, community suffering, and/or systemic inequities would be very dismissive. Researchers Jeffrey Froh and Giacomo Bono suggest that an appropriate response to children for whom high levels of stress make the experience of gratitude challenging is to listen deeply, empathize, and acknowledge their feelings.
An example might be to say, “That sounds really difficult…I can see why you are feeling like it can be hard to think of something to be grateful for.” Allowing children to be seen and heard, even when they are distressed, lets them know that their feelings are valid. By helping them understand and express their emotions, teachers can contribute to building children’s resilience, as well as their capacity to understand and acknowledge the feelings of others—which is essential to gratitude.
Another consideration that may arise when exploring gratitude in the classroom is the influence of materialism. In a society oriented to consumerism, students may tend to focus on material things when considering what they are grateful for. They may feel envious of the possessions of others. Or they may take their possessions for granted,
finding it difficult to value and appreciate what they do have. Introducing gratitude practices in the classroom can help diminish the sense of entitlement with which some students approach life. Through becoming more mindful of how to express gratitude, or through doing acts of kindness for others, they can experience both “giving” and “receiving” in ways that have a deep emotional impact. This can heighten awareness of the many intangible sources of good in our lives.
For more gratitude activities for grades K-8, click here.
Contributors
Lesson Development and Refinement:
Giacomo Bono and Yvonne Huffaker
California State University, Dominguez Hills
Research Associated with Curriculum Development and Refinement:
Giacomo Bono
California State University, Dominguez Hills
Kendall Bronk, Susan Mangan, Rachel Baumsteiger
Claremont Graduate University
Ancillary Materials:
- Teacher slides: Susan Mangan, Rachel Baumsteiger
- Videos: Giacomo Bono, Rachel Baumsteiger, Susan Mangan
- Exercises and activities: Yvonne Huffaker, Giacomo Bono, Rachel Baumsteiger, Susan Mangan
Giacomo Bono, the Youth Gratitude Project team, and the Greater Good Science Center would like to thank the John Templeton Foundation for the generous grant funding that made this educational resource, and the research it is based on, possible. Special thanks also go out to the schools in Southern California who engaged in the research, the teachers who participated in the initial focus groups and the many educators who provided valuable feedback along the way.
Lesson 1
Discover Your Great Full Self
Students identify their strengths to gain a better understanding of themselves.
Time Required
1 class period
Grade Level
6th – 12th grade
Materials
- Lesson One PowerPoint slideshow
- Computer and monitor or projector to show video
- Computer or other device for every student to take computer-based survey
- Poster paper (one per student), pencil, markers, colored pencils, etc.
- VIA Strengths Poster
Learning Objectives
Students will:
- Identify their top five character strengths
- Gain a greater understanding of themselves
SEL Competencies
- Self-Awareness
- Identifying personal strengths
Getting Ready For This Activity
Educators: Take the adult version of the VIA to explore your own character strengths. Do you agree with the survey results of your top five strengths (i.e., your signature strengths)? Think of a moment in your life when you were performing at your best, and consider how any of your top strengths factored into that successful moment.
How To Do It
Slide 1
Introduce the Lesson
Gratitude Activity
Lesson 1
Discover Your Great Full Self
We have a special opportunity to learn about gratitude, how to practice it and why, and to learn about the gifts we each carry around inside us so that we can use them to make the world better.
• Introduce this program and its purpose:
• These lessons give us a special opportunity to learn about gratitude, how to practice it and why, and to learn about the gifts we each carry around inside us so that we can use them to make the world better.
Slide 2
Character Strengths
Character Strengths
• Introduce character strengths:
Before we get going on gratitude, we want to start this program by talking about YOU. Specifically, what are some of your top strengths?
Character strengths are personal qualities, like honesty and leadership, that help you get along in the world and be a better person. People tend to be stronger at a few of these virtues than others. Knowing your character strengths and using them can help you be happier and more successful in the world. So…
Have students look at the word cloud on the slide and guess which would be their top three strengths.
Have students watch the video “The Science of Character” (8 minutes).
After watching the video, tell students that they will now take an online survey that will help them discover their own character strengths.
- Now it’s your turn to find out about YOUR strengths! We’re going to take a survey that will help you identify your character strengths.
- Everyone pull out a device and go to the following website (see slide).
- Under the heading, “Register to Get Started,” enter your name, email, gender, date of birth, and a password. Make sure the second box (“I have read…of this agreement”) is checked, then click “register.”
- On the next page, select, “I want to take the VIA survey for youth” (it’s shorter than the adult version), then click, “Take survey.”
- On the next page, select, “I am taking the survey for myself.” Answer all of the questions.
At the end of the survey, you will come to a page labeled “demographics.” You can fill in the information if you wish, or you can just click, “Complete survey.”
On the next page, click “Download your character strengths profile.”
Teachers: If you haven’t yet taken the survey, please do so now!
After everyone has completed the survey, tally up everyone’s strengths (see the next slide).
Tally up the class’s strengths: Ask the students to look at their top two strengths. Then get a tally of how many students had one of their top two strengths in the wisdom category, courage category, etc.
Then reveal what your class’s top strengths were, e.g., “Looks like our class is really high on justice and courage!” This is a fun way for everyone to get a sense of each other’s strengths.
Students will now have an opportunity to discuss how they might use their strengths.
- Next I want you to get into partners and discuss how you could each use one of your top strengths to help others or society. For example, if one of your top strengths is bravery, then you might make a good firefighter. Or, if you score high in creativity, then you could use it to create music. If you score high in kindness, how might you find opportunities to encourage others to be kind?
Give students a few minutes to discuss their strengths with a partner.
Hand out poster materials and have each student create a poster that lists his or her top five character strengths. This can be as creative as they would like and could include art work that symbolizes their strengths. They could use pictures, images, drawings and words to describe themselves and their top five strengths. (OPTION: This can also be done as homework.)
Teachers can create their own simpler poster that lists two of their top five strengths, one that may be apparent to most students and one that may not be.
Reflection After The Activity
- Ask for a few volunteers to share an idea for how to use a particular strength.
- Ask students to reflect either verbally or in written form about something that they discovered about themselves or that surprised them from this activity.
Lesson 2
See The Good Challenge
Students discuss what gratitude means and why it is important.
Time Required
1 class period
Grade Level
6th – 12th grade
Materials
- Lesson 2 PowerPoint slideshow
- Computer and monitor or projector to show video(s)
- Gratitude Challenge and Journal handout for each student
- Optional: Gift of the Magi handout for each student and Gift of the Magi discussion questions for the teacher
Learning Objectives
Students will:
- Define gratitude and why it’s important
- Understand the costs of kindness and the benefits of receiving it
SEL Competencies
- Social Awareness
- Practicing empathy, including perspective taking
- Responsible Decision-Making
- Understanding the motivations for actions and their realistic consequences
Getting Ready For This Activity
Educators:
Keep a gratitude journal for a week, recording, twice during the week, at least three things or people you are grateful for. At least once, consider the cost to someone who did something for you and how his or her action benefited you. How does keeping a gratitude journal make you feel?
How To Do It
Slide 1
Introduce the Lesson
Gratitude Activity
Lesson 2
See The Good Challenge
Let’s learn what gratitude is and why it can make us feel better.
• Introduce the lesson.
- Today we’re going to talk about what gratitude is. Can anyone tell me what gratitude is?
Slide 2
Definition of Gratitude
Gratitude
Gratitude = The ability to recognize and acknowledge the good things, people, and places in our lives.
• After several students offer their definitions of gratitude, offer them this definition.
- Gratitude is the ability to recognize and acknowledge the good things, people, and places in our lives.
- For example, if your friend goes out of their way to do you a favor, you would probably feel grateful towards them.
- Now I know you’ve heard of this before, but what you might not know is that it can have enormous implications for your physical and mental health.
Have students watch this video “Nature. Beauty. Gratitude” (9:47 minutes).
For a shorter version of the video, start at 3:31.
Please note: In the longer version of the video, there is a brief moment of nudity (:29 to :32).
After watching the videos, share with students what science has discovered about why gratitude is good for us.
- There have been many studies on the effects of gratitude, and they confirm a few main effects.
- First, gratitude is a positive emotion, so it feels good to be grateful. Positive emotions like gratitude can also make you feel more open, creative, and energized.
- Second, feeling grateful has been linked to physical health outcomes such as lower blood pressure and stronger immune system functioning.
- Next, because gratitude involves recognizing other people for their kindness, feeling and expressing gratitude can help strengthen relationships.
- And, because of all these factors, people who feel and express gratitude more often tend to feel happier overall.
In pairs, have students take about one minute to list three things they’re grateful for.
- You can be grateful for big things, like having supportive parents, or small things, like being able to say “hi” to your friend before class started.
After a minute, ask for volunteers to share what they were grateful for.
- Gratitude seems pretty simple, right? Let’s take a closer look at what we might ask ourselves when we feel gratitude…
Discuss with students the intention, cost, and benefit—or “benefit appraisals”—when someone does something kind for you.
- First, did the person do it on purpose? There’s a big difference between someone doing something to help you for selfish reasons (like needing a favor later) versus for selfless reasons (like deciding ahead of time to do something helpful just for you).
- Second, did the person’s help benefit you? Think about it: For someone to help you, he or she has to really think about what you need or want. You wouldn’t be super grateful if someone brought you a tissue when you didn’t need one.
- And finally, what did that act cost the other person? We often think of costs in terms of money, but it also includes people’s time and effort. For instance, if your mom gives you a ride to the mall, she not only spends money on gas, but also spends her time, which she could use to do something more fun for her.
- Altogether, we may feel particularly grateful towards someone who sacrifices his or her own time, money, or effort to do something on purpose that benefits us.
- Now we’re going to watch a video to demonstrate what we’ve been talking about.
Have students watch Sesame Street’s *Gift of the Magi* (9:25) OR read the story.
After the video, discuss how it’s appropriate to feel grateful to people when it COSTS them to give you something and it’s VALUABLE to you.
Introduce the GRATITUDE CHALLENGE, letting them know that their homework for the week is to write in their gratitude journals at least four times about specific people and things for which they feel grateful.
- Watch this [video](#) to introduce the gratitude journal (2:24).
- If there are students skeptical about gratitude journaling, watch this [video](#) (2:04).
Ask students to reflect either verbally or in written form about something that they discovered about gratitude or that surprised them from this lesson.
ONE DOLLAR AND EIGHTY-SEVEN CENTS.
That was all. She had put it aside, one cent and then another and then another, in her careful buying of meat and other food. Della counted it three times. One dollar and eighty-seven cents. And the next day would be Christmas.
There was nothing to do but fall on the bed and cry. So Della did it.
While the lady of the home is slowly growing quieter, we can look at the home. Furnished rooms at a cost of $8 a week. There is little more to say about it.
In the hall below was a letter-box too small to hold a letter. There was an electric bell, but it could not make a sound. Also there was a name beside the door: “Mr. James Dillingham Young.”
When the name was placed there, Mr. James Dillingham Young was being paid $30 a week. Now, when he was being paid only $20 a week, the name seemed too long and important. It should perhaps have been “Mr. James D. Young.” But when Mr. James Dillingham Young entered the furnished rooms, his name became very short indeed. Mrs. James Dillingham Young put her arms warmly about him and called him “Jim.” You have already met her. She is Della.
Della finished her crying and cleaned the marks of it from her face. She stood by the window and looked out with no interest. Tomorrow would be Christmas Day, and she had only $1.87 with which to buy Jim a gift. She had put aside as much as she could for months, with this result. Twenty dollars a week is not much. Everything had cost more than she had expected. It always happened like that.
Only $1.87 to buy a gift for Jim. Her Jim. She had had many happy hours planning something nice for him. Something nearly good enough. Something almost worth the honor of belonging to Jim.
There was a looking-glass between the windows of the room. Perhaps you have seen the kind of looking-glass that is placed in $8 furnished rooms. It was very narrow. A person could see only a little of himself at a time. However, if he was very thin and moved very quickly, he might be able to get a good view of himself. Della, being quite thin, had mastered this art.
Suddenly she turned from the window and stood before the glass. Her eyes were shining brightly, but her face had lost its color. Quickly she pulled down her hair and let it fall to its complete length.
The James Dillingham Youngs were very proud of two things which they owned. One thing was Jim’s gold watch. It had once belonged to his father. And, long ago, it had belonged to his father’s father. The other thing was Della’s hair.
If a queen had lived in the rooms near theirs, Della would have washed and dried her hair where the queen could see it. Della knew her hair was more beautiful than any queen’s jewels and gifts.
If a king had lived in the same house, with all his riches, Jim would have looked at his watch every time they met. Jim knew that no king had anything so valuable.
So now Della’s beautiful hair fell about her, shining like a falling stream of brown water. It reached below her knee. It almost made itself into a dress for her.
And then she put it up on her head again, nervously and quickly. Once she stopped for a moment and stood still while a tear or two ran down her face.
She put on her old brown coat. She put on her old brown hat. With the bright light still in her eyes, she moved quickly out the door and down to the street.
Where she stopped, the sign said: “Mrs. Sofronie. Hair Articles of all Kinds.”
Up to the second floor Della ran, and stopped to get her breath. Mrs. Sofronie, large, too white, cold-eyed, looked at her.
“Will you buy my hair?” asked Della.
“I buy hair,” said Mrs. Sofronie. “Take your hat off and let me look at it.”
Down fell the brown waterfall.
“Twenty dollars,” said Mrs. Sofronie, lifting the hair to feel its weight.
“Give it to me quick,” said Della.
Oh, and the next two hours seemed to fly. She was going from one shop to another, to find a gift for Jim.
She found it at last. It surely had been made for Jim and no one else. There was no other like it in any of the shops, and she had looked in every shop in the city.
It was a gold watch chain, very simply made. Its value was in its rich and pure material. Because it was so plain and simple, you knew that it was very valuable. All good things are like this.
It was good enough for The Watch.
As soon as she saw it, she knew that Jim must have it. It was like him. Quietness and value—Jim and the chain both had quietness and value. She paid twenty-one dollars for it. And she hurried home with the chain and eighty-seven cents.
With that chain on his watch, Jim could look at his watch and learn the time anywhere he might be. Though the watch was so fine, it had never had a fine chain. He sometimes took it out and looked at it only when no one could see him do it.
When Della arrived home, her mind quieted a little. She began to think more reasonably. She started to try to cover the sad marks of what she had done. Love and large-hearted giving, when added together, can leave deep marks. It is never easy to cover these marks, dear friends—never easy.
Within forty minutes her head looked a little better. With her short hair, she looked wonderfully like a schoolboy. She stood at the looking-glass for a long time.
“If Jim doesn’t kill me,” she said to herself, “before he looks at me a second time, he’ll say I look like a girl who sings and dances for money. But what could I do—oh! What could I do with a dollar and eighty-seven cents?”
At seven, Jim’s dinner was ready for him.
Jim was never late. Della held the watch chain in her hand and sat near the door where he always entered. Then she heard his step in the hall and her face lost color for a moment. She often said little prayers quietly, about simple everyday things. And now she said: “Please God, make him think I’m still pretty.”
The door opened and Jim stepped in. He looked very thin and he was not smiling. Poor fellow, he was only twenty-two—and with a family to take care of! He needed a new coat and he had nothing to cover his cold hands.
Jim stopped inside the door. He was as quiet as a hunting dog when it is near a bird. His eyes looked strangely at Della, and there was an expression in them that she could not understand. It filled her with fear. It was not anger, nor surprise, nor anything she had been ready for. He simply looked at her with that strange expression on his face.
Della went to him.
“Jim, dear,” she cried, “don’t look at me like that. I had my hair cut off and sold it. I couldn’t live through Christmas without giving you a gift. My hair will grow again. You won’t care, will you? My hair grows very fast. It’s Christmas, Jim. Let’s be happy. You don’t know what a nice—what a beautiful nice gift I got for you.”
“You’ve cut off your hair?” asked Jim slowly. He seemed to labor to understand what had happened. He seemed not to feel sure he knew.
“Cut it off and sold it,” said Della. “Don’t you like me now? I’m me, Jim. I’m the same without my hair.”
Jim looked around the room.
“You say your hair is gone?” he said.
“You don’t have to look for it,” said Della. “It’s sold, I tell you—sold and gone, too. It’s the night before Christmas, boy. Be good to me, because I sold it for you. Maybe the hairs of my head could be counted,” she said, “but no one could ever count my love for you. Shall we eat dinner, Jim?”
Jim put his arms around his Della. For ten seconds let us look in another direction. Eight dollars a week or a million dollars a year—how different are they? Someone may give you an answer, but it will be wrong. The magi brought valuable gifts, but that was not among them. My meaning will be explained soon.
From inside the coat, Jim took something tied in paper. He threw it upon the table.
“I want you to understand me, Dell,” he said. “Nothing like a haircut could make me love you any less. But if you’ll open that, you may know what I felt when I came in.”
White fingers pulled off the paper. And then a cry of joy; and then a change to tears.
For there lay The Combs—the combs that Della had seen in a shop window and loved for a long time. Beautiful combs, with jewels, perfect for her beautiful hair. She had known they cost too much for her to buy them. She had looked at them without the least hope of owning them. And now they were hers, but her hair was gone.
But she held them to her heart, and at last was able to look up and say: “My hair grows so fast, Jim!”
And then she jumped up and cried, “Oh, oh!”
Jim had not yet seen his beautiful gift. She held it out to him in her open hand. The gold seemed to shine softly as if with her own warm and loving spirit.
“Isn’t it perfect, Jim? I hunted all over town to find it. You’ll have to look at your watch a hundred times a day now. Give me your watch. I want to see how they look together.”
Jim sat down and smiled.
“Della,” said he, “let’s put our Christmas gifts away and keep them a while. They’re too nice to use now. I sold the watch to get the money to buy the combs. And now I think we should have our dinner.”
The magi, as you know, were wise men—wonderfully wise men—who brought gifts to the newborn Christ-child. They were the first to give Christmas gifts. Being wise, their gifts were doubtless wise ones. And here I have told you the story of two children who were not wise. Each sold the most valuable thing he owned in order to buy a gift for the other. But let me speak a last word to the wise of these days: Of all who give gifts, these two were the most wise. Of all who give and receive gifts, such as they are the most wise. Everywhere they are the wise ones. They are the magi.
DISCUSSION QUESTIONS for “The Gift of the Magi” by O. Henry
1. What does Della’s hair signify to her? Be specific.
2. What does James’s watch signify to him? Be specific.
3. Are Della and James foolish for selling their most prized possessions? Why or why not?
4. Why might Della place more value in the gold chain for James’s watch than her own hair?
5. Why might James place more value in the combs for Della’s hair than his grandfather’s watch?
6. Do you feel James would have appreciated the gold chain given what it cost?
7. Do you feel Della would have appreciated the combs given what they cost?
8. What does O. Henry want his readers to take away from this story about the following:
a. Gift giving?
b. Altruism?
c. Love?
Gratitude Challenge
Instructions
WHO or WHAT are you GRATEFUL for and WHY? Did something good happen recently that you feel grateful for? Do you feel grateful for someone? It can be something special or important, or it can be something small—as long as it’s a good thing or makes you feel good. Just be SPECIFIC so that you can record and recall meaningful events!
For example,
At home:
“I ate a delicious breakfast this morning because Mom/Dad took the time to cook me breakfast.”
At school:
“A friend (or teacher) held the door open for me because he or she was being nice.”
After school:
“My team won a game today because everyone worked hard all week.”
“Watching something on YouTube” or “Playing a game because I got to a new level and/or figured something out.”
On the weekend:
“A neighbor (or relative) helped me with something because he or she knew what I needed (or wanted to help me).”
“Watched a movie with family (or friend/s) because it was fun/interesting or something I wanted to see or do.”
Challenge yourself to find or see the good in your life every day. The more the better! It could be big or small things. It could be good people or things or it could be bad things that turned out less bad or bad things that thankfully didn’t happen. You decide. Challenge yourself regularly and let’s see what happens!
Here are different areas in your life that you could challenge yourself to find gratitude in: home, school, health, friendship, things you own, special occasions (for example, a trip or a party), kindness or support from others, an achievement or performance.
Gratitude Journal
Instructions
List 3 THINGS or PEOPLE you are GRATEFUL for today and say WHY. Do this twice a week. For example, “My grandpa unexpectedly gave me a ride home from school because he didn’t want me to walk home in the heat.”
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Date: __________
______________________________________________________________
______________________________________________________________
______________________________________________________________
Instructions
Choose an entry or two from your Gratitude Journal to reflect on. Was it someone who did something nice or helpful for you? Why did this person do it? What did he or she do to make it happen? And how did it affect you? If it didn’t involve another person and it was just a good thing that happened, then describe your experience and how it affected you.
For example, “My grandpa unexpectedly gave me a ride home from school because he didn’t want me to walk home in the heat. He didn’t go play cards with his friends because he cares about me. Thanks to this I got home early and wasn’t tired or sweaty. This made me happy because I got to see Grandpa and had extra time to play a game later.”
Instructions
Look back at your Good Week Reflections and choose one to DESCRIBE HOW YOUR LIFE WOULD BE DIFFERENT IF that thing didn’t happen or if someone didn’t help make it happen. To be done once every other week.
For example, “If my Grandpa did not give me a ride home from school the other day, I would have walked home in the horrible heat, carrying my heavy backpack. I would have gotten tired, sweaty, and hungry and would not have wanted to do anything. It would have been hard to do my homework and I would not have had extra time for myself.”
Lesson 3
Seeing The Good In Others
Students look for the good in others by acknowledging each other’s strengths.
Time Required
1 class period
Grade Level
6th – 12th grade
Materials
- Lesson 3 PowerPoint slideshow
- Post-its for every student
- Go Out and Fill Buckets handout for each student OR, for middle school, *Growing Up with a Bucket Full of Happiness* by Carol McCloud
- Students’ strengths posters from Lesson 1
Learning Objectives
Students will:
- Understand the importance of being specific when grateful (e.g., we are thankful to someone for something)
- Understand gratitude as an intentional act
- Appreciate each other for qualities or actions reflecting our character strengths
SEL Competencies
- Social Awareness
- Recognizing individual strengths
- Relationship Skills
- Cultivating connection and friendship
Getting Ready For This Activity
Educators:
Think about a time when someone filled your bucket (brought you joy) or when you filled someone else’s. How did it make you feel?
How To Do It
Slide 1
Introduce the Lesson
Gratitude Activity
Lesson 3
See The Good Challenge
Let’s learn what gratitude is and why it can make us feel better.
• Introduce the lesson.
- Today the focus is on expressing gratitude, but let’s start off with a quick review…
Slide 2
Expressing Gratitude
Gratitude Activity
Lesson 3
Expressing Gratitude
Gratitude is a social glue; it holds relationships together.
• If you like, ask a few volunteers to share something they wrote in their gratitude journals for homework.
Tell students:
- Expressing gratitude, or saying “thank you,” is critical for relationships. It helps each person recognize the other person’s efforts and makes the other person feel appreciated. When we express gratitude, we communicate to the important people in our lives how they matter to us AND, over time, we get closer to these people because they help us reach significant goals in our lives. It’s glue for who and what matters!
Gratitude Activity
Lesson 3
Gratitude Is A Choice
You choose how you think.
It’s your choice to focus on good things or bad things in life.
You also choose how you act.
It’s also your choice whether you want to do things that lift others up or bring them down.
Do the following quick experiment:
- Look around the room for 20 seconds to find all the blue things that you can see… (wait 20 seconds).
- OK, what did you see that was green? After students point out that you said ‘blue,’ reply: “But you looked around, right? So why can’t you tell me what was green?”
- This illustrates how whatever we focus on expands in our minds, while everything else (the green) fades away. Our brains rewire this way!
Then explain to students:
- It’s important to realize that you choose how you look at life and what focus you can take throughout the day.
- You can spend all of your time and energy thinking about all the things that go wrong in life, looking at people’s negative characteristics, and doing things to feel better about yourself at the expense of others.
- Or you can choose to appreciate all the good things that you have in your life, recognize people’s positive characteristics, and do things that make others feel better about themselves.
“Bucket Filling”
Things that lift others up (or “fill their buckets”) include:
• Being friendly
• Expressing gratitude
• Complimenting them on their strengths/talents
• Encouraging them to pursue goals
• Showing compassion
Read either the “Go Out and Fill Buckets MS-HS” handout or the McCloud book on bucket filling.
- Having our buckets full not only makes us happy. It also makes us strong because it’s like having a tank full of gas. With full buckets we keep trying new options to solve problems rather than quit, we can keep going rather than give up. A full bucket feels good now, but keeps us strong when we need to be, too.
• Ask students for examples of ways people have filled their buckets. Discuss with students:
- These are special people who CHOSE to be nice to you.
- How do they make you feel? Do they make you feel grateful? You, too, can CHOOSE to appreciate these special people in your life.
• Now it’s time for students to fill each other’s buckets. Introduce the activity during which students will leave post-it “Thanks” that acknowledge/compliment others’ character strengths on their posters.
• Teachers should first share their own strengths poster.
- You may all know about this strength of mine, but maybe you didn’t know about this one. What are ways you’ve noticed them? (Be sure to thank students who offer ideas.)
• Provide some examples of post-it thanks that acknowledge others’ strengths:
- Thanks for helping me carry my project into school. You showed kindness.
- I appreciate your jokes yesterday. Your humor helped pick me up. Thanks.
Challenge students to look for the good in others by acknowledging each other’s strengths. (Note: To make sure that each student gets something written about him or her, you can have students draw names or turn to a neighbor on one side.)
- This helps us appreciate the gifts we all have to share and the good qualities of friends.
Allow students several quick occasions to fill buckets to cultivate a sense of connection to peers and to improve classroom climate.
Introduce the homework, which is to write about and thank people who noticed or supported a strength or talent of theirs.
- Write about a time when someone did NOT notice you or a talent of yours. How did that make you feel? Then write about a time someone DID notice you or a talent of yours. How did that make you feel?
Make and share a thank you card with a person who noticed you or your talent (mention WHAT THIS PERSON DOES THAT MATTERS PERSONALLY to you, the person’s EFFORTS ON YOUR BEHALF, and HOW THIS PERSON’S BEHAVIOR MAKES YOU BETTER).
Ask students to reflect on how it felt to have someone acknowledge their strengths and to acknowledge another person’s strengths.
Students might also identify one person in their lives whose “bucket” they would like to fill sometime in the next 24 hours.
Go Out And Fill Buckets
Ideas about Bucket Filling from “How Full is Your Bucket” by Tom Rath and “Growing Up with a Bucket Full of Happiness” By Carol McCloud
You can read excerpts from the book or the entire book, but here is the main idea.
• **The Theory of the Dipper and the Bucket**. Each of us has an invisible bucket. It is constantly emptied or filled, depending on what others say or do to us. When our bucket is full, we feel great. When it’s empty, we feel awful. Each of us also has an invisible dipper. When we use that dipper to fill other people’s buckets – by saying or doing things to increase their positive emotions – we also fill our own bucket. But when we use that dipper to dip from others’ buckets – by saying or doing things that decrease their positive emotions – we diminish ourselves as well. Like the cup that runneth over, a full bucket gives us a positive outlook and renewed energy. Every drop in that bucket makes us stronger and more optimistic. But an empty bucket poisons our outlook, saps our energy, and undermines our will. That’s why every time someone dips from our bucket, it hurts us. So we face a choice every moment of every day: We can fill one another’s buckets, or we can dip from them. It’s an important choice – one that profoundly influences our relationships, productivity, health, and happiness. (p. 15, *How Full is Your Bucket*)
• **Go out and fill buckets**. Give compliments to people, encourage them. When you are at school, look around and see if there is someone who looks like he or she may not be having a very good day and do something nice for that person, such as inviting him or her to hang out. Try each day to fill other people’s emotional buckets.
• **Avoid taking from another person’s bucket**. We take from others when we criticize or bully them or do anything else that brings someone down rather than builds him or her up. Those who have a need to take from someone else’s bucket are really just trying to fill their own empty bucket!
• **Fill a bully’s bucket, too**. Those individuals who take from others (bully, criticize, or treat negatively) are the people who usually need the most bucket filling. They are lacking love and acceptance in their own lives. Unfortunately, treating others negatively never gives us what we are lacking. According to Carol McCloud, we can never fill our own buckets by taking from the bucket of another person. Even though it feels like someone doesn’t deserve it, do something to fill this person’s bucket by giving him or her a compliment, smile, or some other positive gesture that helps everyone feel better.
• **Remember your loved ones**. What about filling the buckets of people we know and love? What kinds of things can we do for our parents, siblings, and friends to show them how special they are to us? Doing this really makes you think about how much your loved ones do for you and how much they mean to you. Carol encourages us to think of more gestures we can make to show them how special they are. What can you do to fill the buckets of your loved ones?
Lesson 4
Thank You for Believing in Me
Students learn how to think gratefully.
Time Required
1 class period
Grade Level
6th – 12th grade
Materials
- Lesson 4 PowerPoint slideshow
- Computer and monitor or projector to show video
- Thank You, Mr. Falker by Patricia Polacco
- Handout “HW Gratitude Letter” for each student
Learning Objectives
Students will:
- Understand how benefactors are significant in our lives by learning to think gratefully through the three perceptions that make up gratitude: personal value of benefits, cost to benefactors, and prosocial intentions of benefactors
SEL Competencies
- Social Awareness
- Recognizing one’s needs
- Self-Management
- Advocating for oneself and one’s needs
- Relationship Skills
- Offering and seeking help
Getting Ready For This Activity
Educators:
Think of someone who saw your potential and helped you achieve it. What was the cost to this person for helping you and what did he or she intend for you? How did you benefit from this person’s help? How did it make you feel?
How To Do It
Slide 1
Introduce the Lesson
Gratitude Activity
Lesson 4
Thank You For Believing in Me
How to think and receive gratefully.
• Introduce the lesson.
- Today the focus is on learning to think gratefully. But first, let’s review…
Slide 2
Benefit Appraisals
Gratitude Activity
Lesson 4
Benefit Appraisals
Costs: Time, money, energy, etc. that it takes for one person to help another
Benefits: The advantages that someone receives when another person helps him or her
Intentions: The helper’s goal
• Review benefit appraisals from lesson 2 with students.
- Benefit appraisals refer to the process of evaluating what it means when someone helps another person.
When someone helps another person, it usually costs him or her something—time, effort, or money.
In addition, the person’s help actually benefits the other person, which means that he or she understands what that person needs and decides that it is worthwhile to help out.
Finally, the fact that a person helps another means that he or she cares enough to want to make that other person do or feel better. For example, if your friend helps you study for a big test, then he or she is probably sacrificing his or her own time to help you do better because he or she cares about you.
Recognizing all of these elements can help you feel more grateful.
Before reading *Thank You, Mr. Falker*, ask students:
- Why is it better to face up to challenges and ask people for help rather than avoid challenges?
- How can we reframe struggle to keep trying?
- How can we appreciate our mentors or benefactors?
Read the story *Thank You, Mr. Falker* to the whole class. Wait to read the last two paragraphs to the students until after the following discussion.
After reading the story, discuss with students:
- What did the main character NEED? What difficulty or hardships did she face?
- As reading got harder for Trisha, what did she do instead? Why?
- Who influenced her and how? (Mr. Falker, family, kinder friends)
- Trisha learned to read and write, but what else happened to her? How did she feel? (joyous, proud) How did her life change? (no more teasing/bullying)
- What was the VALUE OF BENEFIT? (she was no longer teased/bullied or alone/ashamed, able to succeed past a major struggle)
- What was the COST TO THE BENEFACTOR? (Mr. Falker’s time and effort)
- What was the BENEFACTOR’S INTENTION? (He saw her strengths of courage and cunning, believed in her, and wanted to help her.)
Now mention that there is a secret ending. Read the last two paragraphs and discuss further:
- What did the girl DO with her new skill? How did it affect her later in her life?
- How did she turn the gift Mr. Falker gave her into an act of gratitude?
- How do you think Mr. Falker felt when he learned about her life?
Your Helpers: “Thank You, ______”
Think about the people who support you.
- How do they help you?
- What do they give up to help you?
- How is their help valuable or beneficial for you?
- Why do they help you?
Pick a significant benefactor in your life and write a thank you letter to him or her. Be sure that you give it to this person as a thank you!
• Give a personal example of a time when someone helped you.
• Break students into small groups to discuss:
- Have you ever had a need like the girl in the story? What is a struggle or hardship you’ve faced? Have you ever not asked for help when you needed it? Why?
- Have you ever overcome a big challenge thanks to someone’s help? Explain.
- What did it COST the person who helped you? (Time? Effort?)
- Why did the person INTEND (want) to help you? What talents did he or she see in you?
- How did the person notice? How did he or she help or encourage you? How did it change you? Why did it matter? (VALUE) How did this event make you feel?
• Have students watch this video: [Science Behind Gratitude Expression](#)
• Introduce the autobiographical part of the homework assignment: writing and delivering a special, personalized GRATITUDE LETTER for a significant person in their lives.
- *Some relationships are special. They’re not all equal.*
*Expressing thanks is like a gift we can give to these special people in our lives.*
• Be sure to have students include the three aspects of grateful thinking in the letter: value of the benefit, cost to the benefactor, and intention of the benefactor. Students should use this Gratitude Letter Template to help them write it.
• After writing the letter, students should add images or symbols that represent their own top character strengths and that inspire them. Students could also choose to draw a short comic strip in their letter to represent the special role the significant person plays in their life.
• This thank you letter will be a special gift for students to give to their benefactors.
Reflection After The Activity
Ask students to reflect on how it felt to write and then personally deliver a thank you letter to someone who benefited them.
Gratitude Letter
Instructions
Choose an adult (preferably a mentor) who you are really grateful for and write him or her a letter to express your gratitude. Remember to be honest and specific. The more effort you put into writing your letter, the more your message will mean to the other person.
You can use the letter template below and fill in the blank spaces, or write the entire letter in your own words. You can include anything you want, but be sure to describe:
- The ways this person helped you
- How this person’s help benefited you and made your life better
- The time or effort it cost that person to help you
- Why this person chose to help you
- How you feel about this person
After you write your letter, give it to the other person. You could deliver your letter in person, read it to him or her over the phone, or send it through email—it’s up to you! But remember, this activity works best if you read the letter in person. We know it may feel a bit awkward, but it’s more likely to make you and the other person feel good!
Gratitude Letter Template
Dear Person’s name,
Thank you so much for (describe the kinds of things this person has done to help you). This has really helped me (describe how this person’s actions have benefited you or what he or she inspires you to do). I also really appreciate how you (describe other things that this person does to help you or make your life better). I realize that (describe what it costs this person to help you in these ways). Your actions show me that (say why you think this person wanted to help you) and (what promise you think this person sees in you). I (describe how you feel about this person). Thanks to you I want to (say what this person motivates you to do).
Gratefully,
Your name
Post-Visit Reflection
In a paragraph, describe your experience of the gratitude visit (how it made the person feel, how it made you feel, what you learned, what you want to take away from it, and any additional detail about what it motivates you to do). Also, from this experience, why do you think gratitude is important to express in relationships?
FOR MORE INFORMATION, GO TO:
Greater Good Magazine:
greatergood.berkeley.edu
Greater Good Science Center Education program:
ggsc.berkeley.edu/who_we_serve/educators
More Greater Good activities:
ggia.berkeley.edu
Seeing What We Have Never Seen Before: Low-Frequency Radio Astronomy from the Moon
--- Low-frequency radio observations from the radio-quiet lunar farside will allow astronomers to probe the universe from its mysterious dark ages after the Big Bang, to the nature of the magnetospheres of planets around other stars and the outer planets in our Solar System, and to better understand the causes of explosive release of plasma from the Sun's corona.
Written by G. Jeffrey Taylor
Hawai'i Institute of Geophysics and Planetology
Jack O. Burns and teammates at the Center for Astrophysics & Space Astronomy at the University of Colorado, Boulder, NASA Goddard Space Flight Center, the University of California at Berkeley, Caltech, and the University of Michigan have bold plans to address profound questions about the cosmos. They focus on low-frequency radio observations, a region of the electromagnetic spectrum that is troublesome for Earth-based radio observatories because of human-produced radio frequency interference (RFI) combined with atmospheric and ionospheric sources of noise. Burns and colleagues propose sending a series of missions to the lunar farside (which never faces the Earth), a significant portion of which is shielded from the troublesome terrestrial noise. Their goal is to install an array of 128 pairs of antenna nodes on the lunar farside distributed over a spiral 10 kilometers in diameter. This large radio telescope, not bathed in radio noise from Earth, will be able to measure fluctuations in the 21-cm (1420 megahertz, MHz) line emitted by neutral hydrogen. The measurements will allow astrophysicists to probe a time called the Dark Ages when there were no stars or galaxies in the expanding universe. Measurements of the 21-cm emissions are critical to probing astonishing features such as inflation of the universe, the formation of the first stars and first black holes, and the non-standard physics of dark matter. Lunar low-frequency telescopes of more modest dimensions also have other fascinating targets such as studying magnetospheres around the other planets and even in other solar systems, and understanding the behavior of the Sun, our local star. The more modest missions will be trailblazers for the ambitious farside array, and human and robotic missions will build up the infrastructure needed for construction, operation, and repair of advanced radio telescope facilities on the Moon.
References:
- Burns, J. O. (2020) Transformative science from the lunar farside: Observations of the dark ages and exoplanetary systems at low radio frequencies, *Philosophical Transactions of the Royal Society A*, 379:20190564, doi: 10.1098/rsta.2019.0564. [article]
- Burns, J. O., MacDowall, R., Bale, S., Hallinan, G., Bassett, N., and Hegedus, A. (2021) Low radio frequency observations from the Moon enabled by NASA landed payload missions, *The Planetary Science Journal*, 2:44, doi: 10.3847/PSJ/abdfc3. [open access article]
- PSRDpresents: Seeing What We Have Never Seen Before: Low-Frequency Radio Astronomy from the Moon --Slide Summary (with accompanying notes).
Low-Frequency Radio Waves and High-Value Science
Astronomers who study the origin and evolution of the universe, including its mind-bending space-time relations (they are called cosmologists), have a mostly satisfying story about that origin and evolution. A Cliff Notes version is summarized in the illustration below. By about 400,000 years after the Big Bang (which happened somewhere around 13.8 billion years ago) the originally hot universe had expanded and cooled enough to precipitate atoms (almost all of them hydrogen) from protons and electrons, creating a vast, expanding sea of neutral atoms. Hydrogen was ubiquitous during these times but no stars or galaxies had yet formed, hence the nickname Cosmic "Dark Ages." The Dark Ages ended, or began to end, when the first stars formed and began to ionize the hydrogen surrounding them. This altered the 21-cm spectrum and 21-cm density fluctuation structure, which researchers reverse-engineer to infer the properties of those first stars and galaxies. At about 600 million years after the Big Bang we reach the cosmic dawn, the beginning of the formation of copious stars and galaxies, creating the universe we see when we look up at night, and in fact when we just look around at each other.
Big events in the history of the universe. The illustration is not to scale and does not convey the expansion of the universe in all directions after the Big Bang. By about 400,000 years after the Big Bang the originally hot universe had expanded and cooled enough to precipitate atoms (almost all of them hydrogen) from protons and electrons, creating a vast, expanding sea of neutral atoms. Hydrogen was ubiquitous during these times but no stars or galaxies had yet formed, hence the nickname Cosmic "Dark Ages," an era that must be interrogated with low-frequency radio waves. The Dark Ages ended when the first stars formed, accompanied by re-ionization of the atoms.
The universe is teeming with neutral hydrogen atoms. Collisions between hydrogen atoms add a bit of energy to each atom. This extra energy is radiated away as electromagnetic energy (photons) at a wavelength of 21 cm, corresponding to a radio frequency of 1420 MHz. The Dark Ages may be dark in most wavelengths, but these low-frequency radio waves provide a wealth of information about conditions then and about the changes that led to cosmic dawn, the formation of stars and galaxies, and the re-ionization of atoms. Perhaps most impressive, shifts in the strength of the 1420 MHz signal could lead to new ideas for the physics of dark matter. An interesting twist in observations of the universe far back in time is that when the 21 cm (1420 MHz) signals reach our radio telescopes, they have been redshifted because the universe is expanding. Thus, the actual observations are of radio waves that have frequencies of about 14 MHz. Astronomy is mind-boggling!
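The redshift arithmetic in this paragraph can be checked with a few lines of Python (an illustrative sketch, not from the article; the function names are my own):

```python
# Rest-frame frequency of the neutral-hydrogen 21-cm line.
F_REST_MHZ = 1420.0

def observed_frequency_mhz(z: float) -> float:
    """Frequency at which the 21-cm line is observed after redshift z."""
    return F_REST_MHZ / (1.0 + z)

def redshift_for_observed(f_obs_mhz: float) -> float:
    """Redshift implied by an observed 21-cm frequency."""
    return F_REST_MHZ / f_obs_mhz - 1.0

# The ~14 MHz Dark Ages signal described above corresponds to z of about 100:
print(round(redshift_for_observed(14.0)))  # → 100
```

In other words, the Dark Ages signal arrives stretched by a factor of roughly a hundred, which is why it lands in the very band where Earth is noisiest.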
Other targets are closer to home (in distance and time): studying magnetospheres around the other planets in our Solar System, understanding coronal mass ejections (CMEs) from the Sun, and observing CMEs from other stars to understand their effects on the planets that orbit them.
Going to the Moon to Avoid Terrestrial Radio Noise
To limit light background, astronomers like to install optical telescopes in places far from sources of light such as cities, towns, and highways. Radio astronomers are equally keen to avoid polluting their observations with stray human-made radio waves from broadcasting, communications (including cell phones, but especially powerful military and commercial transmitters), and electrical transmission lines. These sources of annoying noise are collectively called radio frequency interference (RFI). Astronomers also deal with natural sources of noise from the atmosphere and ionosphere, such as lightning. These noise sources are particularly loud at low radio frequencies (0.1 to 40 MHz, corresponding to wavelengths of 3000 meters down to 7.5 meters).
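The band quoted above follows directly from the standard wavelength-frequency relation λ = c / f; a quick sketch (the function name is mine):

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a wave of the given frequency in hertz."""
    return C_M_PER_S / freq_hz

# The low-frequency band of 0.1 to 40 MHz spans wavelengths of roughly
# 3000 m down to 7.5 m, as stated in the text.
print(wavelength_m(0.1e6))  # ≈ 2998 m
print(wavelength_m(40e6))   # ≈ 7.5 m
```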
The most convenient place to go to solve this noisy problem is the farside of the Moon. The Moon is tidally locked to Earth, so it makes one rotation per orbit around Earth, hence we see only one hemisphere. This is beneficial for radio astronomical observations that need to avoid all that radio frequency interference. The low-frequency radio silence on the lunar farside has been modeled by Neil Bassett (University of Colorado, Boulder) and colleagues at CU Boulder, NASA Ames Research Center, and NASA Goddard Space Flight Center based on spacecraft data (see map below). The base map shows topographic variations on the farside (from 90 to 270 degrees longitude) determined by the Lunar Observer Laser Altimeter (LOLA). White is highest (about 7 km above the mean lunar surface elevation) and the purple is lowest (about 7 km below the mean lunar surface elevation). The large purple splotch is the immense South Pole Aitken impact basin, which is about 2500 km in diameter. The curves show the decreases in radio noise at 100 kHz in Neil Bassett's modeling. The outer curve represents a suppression of only a factor of 10 compared to the nearside (expressed in decibels, a common logarithmic unit for comparing relative power levels of all sorts of waves, from radio to sound). This is not adequate for making the observations that Jack Burns and his colleagues want to make. The middle curve does the trick, blocking the signal from Earth noise by 50 decibels, a factor of 100,000 compared to the nearside. The innermost curve, which still covers a significant amount of real estate on the farside, drops the noise signal by 90 decibels compared to the nearside, a factor of 1,000,000,000. Decreasing the noise by a factor of a billion will satisfy astronomical needs, no matter how fussy the astronomers and their instruments are.
This map shows radio frequency interference (RFI) on the lunar farside compared to levels on the nearside. The lunar farside extends from 90 to 270 degrees longitude. The base map shows lunar topography, with the highest levels colored in white and red, the lowest in purple. Yellow stars mark smooth sites suitable for the FARSIDE observatory, labeled with their place names. The black curves represent the extent to which interference at low radio frequencies is suppressed compared to the Earth-facing lunar hemisphere, in decibel units: -10 is ten times lower than on the Earth-facing side, -50 is a hundred thousand times lower, and -90 is a billion times lower. The radio suppression curves are from Burns et al. (2021) based on modeling by Bassett et al. (2020). Base map is from LROC, Arizona State University and NASA.
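The decibel figures on the map follow the standard power-ratio definition (dB = 10 × log10 of the power ratio); a sketch, with the helper name my own:

```python
def suppression_factor(db: float) -> float:
    """Power-reduction factor for a negative decibel level.

    A level of -N dB means the power is 10**(N/10) times lower.
    """
    return 10.0 ** (-db / 10.0)

# The three contours on the map: -10, -50, and -90 dB correspond to
# suppression by factors of 10, 100,000, and 1,000,000,000.
for db in (-10, -50, -90):
    print(f"{db} dB -> {suppression_factor(db):,.0f}x lower")
```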
Deploying a Low-Frequency Radio Observatory Array on the Moon
There are modest ways to begin low-frequency radio astronomy from the Moon. I will explain that below, but I would like to start with describing Jack Burns’ Big Vision (JBBV) of installing an impressive array of radio-wave detectors on the lunar farside. Jack’s vision is to convert a 10 km x 10 km region of the dusty lunar farside into a telescopic array that will look back in time to the Dark Ages of the history of the universe. Not back in time to when Jack Burns and I were in elementary school, not back in time when the Moon formed by a giant impact with the still-accreting Earth about 4.5 billion years ago, not back in time when stars and galaxies had only begun to form about 13.2 billion years ago, but to the Dark Ages after the Big Bang, a time about 13.4 (ish) billion years ago. (Units like billions of years do not bother astronomers, planetary scientists, and geologists!)
Jack Burns’ idea is to use robotic rovers to deploy 128 pairs of antenna nodes on a suitably smooth 10 km x 10 km area on the lunar farside where RFI from Earth does not mask the low-frequency radio signal from the extraordinarily distant past. He and his collaborators call the observatory FARSIDE, an excellent acronym for Farside Array for Radio Science Investigations of the Dark ages and Exoplanets. FARSIDE would be constructed by robotic rovers deploying the 256 dipoles in a spiral pattern within a 10-km square, as illustrated below. The deployment of the array was engineered in collaboration with Blue Origin LLC using the Blue Moon lander in the design.
In the current design FARSIDE consists of three major parts: (1) a spacecraft that will transport all the other parts, including the base station that will provide power and communications, (2) four single-axle rovers to deploy antenna nodes, and (3) the antenna array that the rovers will deploy. There will be 128 pairs of antenna nodes. The array will be deployed in a spiral pattern, see the insert at the upper right that shows a view looking down. Tethers connect the base station to the nodes, providing communications and power. The lowest-frequency antennas are incorporated inside the tethers, and a higher frequency set of dipoles are deployed perpendicular to the tether direction. The deployment plan was engineered in collaboration with Blue Origin LLC using the Blue Moon lander in the design. NASA's lunar-orbiting Gateway is shown in the dark sky near the insert (not to scale) and might be used as a data relay to Earth, although there are other possibilities such as a dedicated relay satellite.
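As a rough illustration of the layout (the actual FARSIDE node spacing is not given here; an Archimedean spiral capped at a 5 km radius and four windings are my assumptions), 128 node positions inside the 10 km square might be generated like this:

```python
import math

N_NODES = 128          # pairs of antenna nodes in the FARSIDE design
MAX_RADIUS_M = 5000.0  # half the width of the 10 km x 10 km site
TURNS = 4              # assumed number of spiral windings (illustrative)

def spiral_positions(n=N_NODES, r_max=MAX_RADIUS_M, turns=TURNS):
    """(x, y) coordinates, in meters, of n nodes along an Archimedean spiral."""
    positions = []
    for i in range(n):
        t = (i + 1) / n                  # fractional distance along the spiral
        theta = 2.0 * math.pi * turns * t
        r = r_max * t                    # radius grows linearly with angle
        positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

nodes = spiral_positions()
# Every node stays inside the 10 km square centered on the base station.
assert all(abs(x) <= MAX_RADIUS_M and abs(y) <= MAX_RADIUS_M for x, y in nodes)
print(len(nodes), "nodes, outermost at", round(math.hypot(*nodes[-1])), "m")
```

A real deployment plan would also route around craters and respect tether lengths; this sketch only conveys the scale of the spiral.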
Driving four robotic rovers many kilometers on the Moon while deploying tethers and dipole antennas might seem like an impossible challenge on a dusty, cratered surface. But be assured, robotic rover missions to Mars and the Moon (and, of course, the human-driven Apollo rovers) have already shown that rovers can readily traverse a barren planetary surface. The requirement for FARSIDE is an observatory site with a 10-meter elevation gradient over an area 10 km in diameter. Experience with the Apollo missions and the high-resolution images taken by the Lunar Reconnaissance Orbiter Camera show that suitable areas are abundant.
Images of the undulating terrain at the Apollo 12 [TOP] and Apollo 16 [BOTTOM] landing sites, from orbit and from the surface. The Apollo 12 site is in a mare (Oceanus Procellarum, the Ocean of Storms); maria are typically smooth, though decorated with craters that will need to be avoided during emplacement of the FARSIDE radio array. FARSIDE would target smooth maria or smooth highland plains on the lunar farside (see topographic map above). The Apollo 16 site is in the lunar highlands and has craters, but a farside landing site similar to Apollo 16 would be suitable for constructing the array. Apollo 12 surface photo is NASA image AS12-46-6780. Apollo 16 surface photo is NASA image AS16-114-18423. Click on the images for more information.
Paving the Way with Proof-of-Concept Missions
The full FARSIDE observatory plan is a bit dicey for a risk-averse organization like NASA to jump into without some more modest practice runs. Jack Burns and his collaborators have devised three interesting proof-of-concept missions, two of which are scheduled to fly on NASA's Commercial Lunar Payload Services (CLPS) initiative. This program is devised for commercial companies of any size to bid on delivering payloads for NASA, including payload integration and operations, launching from Earth and landing on the surface of the Moon. As the NASA CLPS website explains, the program was developed "to perform science experiments, test new technologies, and demonstrate capabilities to help NASA explore the Moon and prepare for human missions." Sounds ideal for testing technologies and the extent to which we can really reduce RFI. The two radio astronomy instruments (ROLSES and LuSEE) approved to be included on CLPS missions are briefly described below, along with a third (DAPPER) whose fate is being decided by NASA's Payloads and Research Investigations on the Surface of the Moon (PRISM) program, which will fly science payloads on a CLPS spacecraft. ROLSES and LuSEE are described in Burns et al. (2021).
**ROLSES: Radio Wave Observations at the Lunar Surface of the PhotoElectron Sheath**
Principal Investigator: Robert MacDowall at NASA Goddard Space Flight Center.
The **ROLSES** radio receiver will be carried by a **NOVA-C spacecraft** from Intuitive Machines and will land on the lunar nearside in Oceanus Procellarum (Ocean of Storms). ROLSES will measure low-frequency radio spectra in the 10 kHz to 30 MHz range. The measurements will help us understand the effects of exposure to the solar wind and ultraviolet light, which charge the dusty lunar surface, levitating the dust and transporting it around the Moon. This will improve our understanding of the photoelectron plasma sheath 1 to 3 meters above the surface and how the plasma interactions aid in dust transport, which has important implications for human exploration of the Moon. It will also provide vital information on how well a nearside radio observatory could monitor radio bursts from the Sun.
**LuSEE: Lunar Surface Electromagnetic Experiment**
Principal Investigator: Stuart Bale at the University of California at Berkeley.
**LuSEE** will measure radio signals in the 10-45 MHz range, which includes signals from the Sun, Earth, and outer planets. It will land in the Schrödinger Basin on the lunar farside (it is labeled on the farside map shown above). A central goal is to measure the RFI at a site on the lunar farside to test the veracity of the data-based modeling done to characterize the RFI there. Whatever noise is measured will be characterized to help plan future missions. The instrument will, like ROLSES, provide information about the surface plasma sheath, its role in dust transport, and its interactions with the Earth's magnetotail. Perhaps most important, LuSEE will test our ability to detect distant signals from the young universe, while measuring radio bursts from other planets in our Solar System.
**DAPPER: Dark Ages Polarimeter PathfindER**
Principal Investigator: Jack Burns at the University of Colorado, Boulder.
*DAPPER* is a proposed instrument to fly with LuSEE to the Schrödinger Basin. It would measure radio signals in the 40-110 MHz range and would be the first direct pathfinder mission for FARSIDE. It would test our ability to peer into the deep distant past to understand the Dark Ages. Burns and his team have proposed that the mission could fly to Schrödinger Basin with LuSEE, thereby providing the appropriate range of frequencies to examine the 21-cm radio signals from the Dark Ages (red shifted, of course) in the combined 10 to 110 MHz range. It would be humanity's first crack at doing cosmology from the Moon, paving the way for clearer views provided by ambitious, imaginative projects like FARSIDE.
---
**Science from the Moon**
The astronomical observatories described here and elsewhere, the extensive geological studies aimed at understanding the origin and evolution of the Moon, the visits to hundreds of craters to determine how the flux of impactors varies with time, the search for and extraction of resources to use on the Moon and throughout the inner Solar System, and the entire process of us figuring out how to live on another planetary body with no atmosphere, require an extensive human and robotic infrastructure on the Moon. Human settlement and its infrastructure open vast vistas for science and applied science activities on the Moon. The exciting possibilities from low-frequency radio arrays on the farside are only the beginning of using the Moon to satisfy human curiosity about our origins.
---
**Additional Resources**
Links open in a new window.
- **PSRDpresents**: Seeing What We Have Never Seen Before: Low-Frequency Radio Astronomy from the Moon -- [Slide Summary](#) (with accompanying notes).
- Bassett, N., Rapetti, D., Burns, J. O., Tauscher, K., MacDowall, R. (2020) Characterizing the radio quiet region behind the lunar farside for low-radio frequency experiments, *Advances in Space Research*, v. 66, 1265-1275, doi: 10.1016/j.asr.2020.05.050. [article]
- Burns, J. O. et al. 2019, Probe study report: FARSIDE (Farside Array for Radio Science Investigations of the Dark ages and Exoplanets), NASA. [report pdf]
- Burns, J. O. (2020) Transformative science from the lunar farside: Observations of the dark ages and exoplanetary systems at low radio frequencies, *Philosophical Transactions of the Royal Society A.*, 379:20190564, doi: 10.1098/rsta.2019.0564. [article]
- Burns, J. O., MacDowall, R., Bale, S., Hallinan, G., Bassett, N., and Hegedus, A. (2021) Low radio frequency observations from the Moon enabled by NASA landed payload missions, *The Planetary Science Journal*, 2:44, doi: 10.3847/PSJ/abdfc3. [open access article]